
1AC

Econ
Advantage one is the economy:

A healthcare bubble is coming that will be bigger than the housing market---
deflating prices is key to prevent collapse
Ron Howrigon 16, President and CEO of Fulcrum Strategies, Masters in Economics from
North Carolina State University, has held Senior Management level positions with three of the
largest Managed Care Companies in the country, including Kaiser Permanente, CIGNA
HealthCare and BlueCross BlueShield, former Director of Community Medical Services with
Kaiser Permanente, 12-30-16, “Flatlining: How Healthcare Could Kill the US Economy,”
Greenbranch Publishing, pages 8-11
In 2010, the United States GDP was $15 trillion. The total healthcare expenditures in the United States
for 2010 were $2.6 trillion. At $2.6 trillion, the U.S. healthcare market has moved up from 15th and now ranks as the 5th
largest world economy, just behind Germany and just ahead of both France and the United Kingdom. That means that while
healthcare was only 5% of GDP in 1960, it has risen to over 17% of GDP in only 50 years. Over that same time, the
Defense Department has gone from 10% of GDP to less than 5% of GDP. This means that in terms of its portion of the U.S. economy,
defense spending has been reduced by half while healthcare spending has more than tripled. If healthcare continues
to trend at the same pace it has for the last 50 years, it will consume more than 50% of the U.S.
economy by the year 2060. Every economist worth their salt will tell you that healthcare will never reach
50% of the economy. It’s simply not possible because of all the other things it would have to crowd
out to reach that point. So, if we know healthcare can’t grow to 50% of our economy, where is
the breaking point? At what point does healthcare consume so much of the economy that it
breaks the bank, so to speak? This is the big question when it comes to healthcare. If something doesn’t happen to reverse
the 50-year trend we’ve been riding, when will the healthcare bubble burst? How bad will it be and how exactly will it
happen? While no one knows the exact answers to those questions, economists and healthcare experts agree that
something needs to happen, because we simply can’t continue on this trend forever.
Another way to look at healthcare is to study its impact on the federal budget and the national debt. In 1998, federal healthcare
spending accounted for 19% of the revenue taken in by the government. Just eight years later, in 2006, healthcare spending had
increased to 24% of federal revenue. In 2010, the Affordable Healthcare Act passed and significantly increased federal spending for
healthcare—so much so that in 2016, healthcare spending accounted for almost one-third of all
revenue received by the government and surpassed Social Security as the largest single budget
category. What makes this trend even more alarming is the fact that revenue to the federal government doubled from 1998 to
2016. That means healthcare spending by the federal government has almost quadrupled in terms
of actual dollars in that same time period. If this trend continues for the next 20 years, healthcare spending
will account for over half the revenue received by the government by the year 2035. Again, that
simply can’t happen without causing significant issues for the financial wellbeing of our
country. In recent history, the U.S. economy has experienced the near catastrophic failure of two
major market segments. The first was the auto industry and the second was the housing industry. While each of these
reached their breaking point for different reasons, they both required a significant government bailout to keep them from completely
melting down. What is also true about both of those market failures is that, looking back, it’s easy to see the warning signs. What
happens if healthcare is the next industry to suffer a major failure and collapse? It’s safe to say
that a healthcare meltdown would make both the automotive and housing industries’ experiences
seem minor in comparison. While that may be hard to believe, it becomes clear if you look at the numbers. The auto
industry contributes around 3.5% of this country’s GDP and employs 1.7 million people. This industry was deemed “too big to fail”
which is the rationale the U.S. government used to finance its bail out. From 2009 through 2014, the federal government invested around $80 billion in the U.S. auto industry to keep it from collapsing. Healthcare is five times larger than the auto industry in terms of its percentage of GDP, and is ten times larger than the auto industry in terms of the number of people it employs. The construction industry (which includes all construction,
not just housing) contributes about 6% of our country’s GDP and employs 6.1 million people. Again, the healthcare market
dwarfs this industry. It’s three times larger in terms of GDP production and, with 18 million people employed in the healthcare sector, it’s three times larger than construction in this area, too. These comparisons give you an idea of just how significant a portion healthcare comprises of the U.S. economy. It also begins to help us understand the impact it would have
on the economy if healthcare melted down like the auto and housing industries did. So, let’s
continue the comparison and use our experience with the auto and housing industries to suggest the order of magnitude of the impact a failure in the healthcare market would have on our economy. The bailout in the auto industry cost the federal government
$80 billion over five years. Imagine a similar failure in healthcare that prompted the federal government to propose a similar bailout
program. Let’s imagine the government felt the need to inject cash into hospital systems and
doctors’ offices to keep them afloat like they did with General Motors. Since healthcare is five times the size of the auto
industry, a similar bailout could easily cost in excess of $400 billion. That’s about the same amount of money the federal
government spends on welfare programs. To
pay for a bailout of the healthcare industry, we’d have to
eliminate all welfare programs in this country. Can you imagine the impact it would
have on the economy if there were suddenly none of the assistance programs so many have come to rely upon? When the
housing market crashed, it caused the loss of about 3 million jobs from its peak employment level of 7.4 million in 1996. Again, if we
transfer that experience to the healthcare market, we come up with a truly frightening scenario. If healthcare lost 40% of its jobs like
housing did, it would mean 7.2 million jobs lost. That’s more than four times the number of people who are employed by the entire
auto industry—an industry that was considered too big to be allowed to fail. The loss of 7.2 million jobs would increase the
unemployment rate by 5%. That means we could easily top the all-time high unemployment rate for our country. In November of
1982, the U.S. unemployment rate was 10.8%. A failure in the healthcare sector could push unemployment to those levels or higher.
The only time in our country’s history when unemployment was higher was during the Great Depression. It should also be noted that
in 1982, home mortgage interest rates were close to 20%! The U.S. Federal Funds Rate, or the interest rate the government pays on
our national debt, was also close to 20% in 1982. Economists fear that a large increase in unemployment could
cause interest rates to escalate to levels approaching those of the early 1980s. If that were to happen
today, with a $19 trillion national debt, it would mean that our annual debt service would be $3.8 trillion. Keep in mind that the
federal government only takes in $3.4 trillion in total revenue. That’s right, in our nightmare scenario where healthcare
fails and eliminates 7.2 million jobs, which pushes unemployment above 10% and causes interest rates to climb to almost 20%, we
would be in a situation where the interest payments on our current debt would be more
than our entire federal tax revenue. Basically, we would be Greece, but on a much larger scale. OK, now it’s time to take a deep breath. I’m not convinced that healthcare is fated to unavoidable failure and economic
catastrophe. That’s a worst-case scenario. The problem is that at even a fraction of the severity of the auto or housing industry crises we’ve already faced, a healthcare collapse would still be devastating. Healthcare can’t be allowed to continue its current inflationary trending. I believe
we are on the verge of some major changes in healthcare, and that how they’re implemented will
determine their impact on the overall economic picture in this country and around the
world. Continued failure to recognize the truth about healthcare will only cause the resulting
market corrections to be worse than they need to be. I don’t want to diminish the pain and anguish that
many people caught up in the housing crash experienced. I think an argument can be made, though, that if the healthcare
market crashes and millions of people end up with no healthcare, the resulting fallout could be
much worse than even the housing crisis.
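The card's bailout, job-loss, and debt-service comparisons are straightforward scaling arithmetic. The sketch below is a minimal check of those figures, not part of the Howrigon evidence; every input is a number quoted in the card except the labor-force size, which is an assumed round figure used only to show how 7.2 million lost jobs maps to roughly five points of unemployment.

```python
# Minimal arithmetic check of the figures quoted in the Howrigon evidence.
# Inputs are the card's own numbers except where noted as assumptions.

auto_bailout = 80e9          # federal auto bailout, 2009-2014 (card figure)
healthcare_vs_auto = 5       # healthcare is ~5x the auto industry's share of GDP
implied_bailout = auto_bailout * healthcare_vs_auto
print(f"Implied healthcare bailout: ${implied_bailout / 1e9:.0f} billion")           # ~$400 billion

healthcare_jobs = 18e6       # people employed in the healthcare sector (card figure)
job_loss_rate = 0.40         # housing-style 40% contraction
jobs_lost = healthcare_jobs * job_loss_rate
print(f"Jobs lost in a 40% contraction: {jobs_lost / 1e6:.1f} million")               # ~7.2 million

labor_force = 145e6          # ASSUMED labor-force size; not in the card
print(f"Unemployment increase: {jobs_lost / labor_force:.1%}")                        # ~5 points

national_debt = 19e12        # card's national-debt figure
interest_rate = 0.20         # early-1980s-style rate posited in the card
print(f"Debt service at 20%: ${national_debt * interest_rate / 1e12:.1f} trillion")  # vs. $3.4T federal revenue
```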
It'll go global and trigger war
Stein Tønnesson 15, Research Professor, Peace Research Institute Oslo; Leader of East Asia
Peace program, Uppsala University, 2015, “Deterrence, interdependence and Sino–US peace,”
International Area Studies Review, Vol. 18, No. 3, p. 297-311
Several recent works on China and Sino–US relations have made substantial contributions to the current
understanding of how and under what circumstances a combination of nuclear deterrence and
economic interdependence may reduce the risk of war between major powers. At least four conclusions
can be drawn from the review above: first, those who say that interdependence may both inhibit and drive
conflict are right. Interdependence raises the cost of conflict for all sides but asymmetrical or
unbalanced dependencies and negative trade expectations may generate tensions leading to
trade wars among inter-dependent states that in turn increase the risk of military conflict (Copeland,
2015: 1, 14, 437; Roach, 2014). The risk may increase if one of the interdependent countries is governed by an inward-looking socio-
economic coalition (Solingen, 2015); second, the risk of war between China and the US should not just be analysed bilaterally but
include their allies and partners. Third party countries could drag China or the US into confrontation; third, in this context it is of
some comfort that the three main economic powers in Northeast Asia (China, Japan and South Korea) are all deeply integrated
economically through production networks within a global system of trade and finance (Ravenhill, 2014; Yoshimatsu, 2014: 576);
and fourth, decisions for war and peace are taken by very few people, who act on the basis of their
future expectations. International relations theory must be supplemented by foreign policy analysis in order to assess the
value attributed by national decision-makers to economic development and their assessments of risks and opportunities. If
leaders on either side of the Pacific begin to seriously fear or anticipate their own nation’s
decline then they may blame this on external dependence, appeal to anti-foreign sentiments,
contemplate the use of force to gain respect or credibility, adopt protectionist policies, and ultimately
refuse to be deterred by either nuclear arms or prospects of socioeconomic calamities. Such
a dangerous shift could happen abruptly, i.e. under the instigation of actions by a third party – or against a third
party. Yet as long as there is both nuclear deterrence and interdependence, the tensions in East Asia are unlikely to escalate to
war. As Chan (2013) says, all states in the region are aware that they cannot count on support from either China or the US if they
make provocative moves. The
greatest risk is not that a territorial dispute leads to war under present
circumstances but that changes in the world economy alter those circumstances in ways that
render inter-state peace more precarious. If China and the US fail to rebalance their financial and trading relations
(Roach, 2014) then a trade war could result, interrupting transnational production networks, provoking social distress, and
exacerbating nationalist emotions. This could have unforeseen consequences in the field of security, with
nuclear deterrence remaining the only factor to protect the world from Armageddon, and unreliably so. Deterrence could lose its credibility: one of the two great powers might gamble that the other would yield in a cyber-war or conventional limited war, or third party countries might engage in conflict
with each other, with a view to obliging Washington or Beijing to intervene.

Plan solves:

1. Excess profit margins---single payer eliminates them through bargaining


Laurence Seidman 15, Professor of Economics, University of Delaware, Ph.D. in economics,
University of California, Berkeley, August 2015, “The Affordable Care Act versus Medicare for
All,” Journal of Health Politics, Policy & Law, Vol. 40, No. 4
For several decades the United States has been an extreme outlier among high-income countries with
respect to medical cost as a percentage of GDP. Virtually all high-income countries have used government single-payer bargaining power to limit the rise in prices of medical goods and
services. Payer bargaining power has been used to limit prices set by hospitals and drug
companies and fees set by doctors and to set budgets—total spending caps—for hospitals, drugs, and doctors. Why is
government needed to negotiate prices for medical care but not for most other goods and services? For most goods and services,
consumers pay the price, can judge quality, and are able to shop around, so if one firm sets its price higher than a rival firm but its
quality is no higher, consumers will switch to competitors. But for most medical care, most patients (consumers) don’t pay the price
(except for a small co-payment), can’t judge quality, and are in no condition to shop around. So consumers are incapable of limiting prices for medical care. Of course, private insurers who pay most
medical bills often refuse to pay the full price that medical providers charge. But when there are many
private insurers, each insurer has weak bargaining power to restrain price increases because a
provider can refuse to take a patient covered by an insurer who won’t pay a high enough share of
the price. Each insurer fears that patients will tell their employer to get another insurer who will pay a high enough share of the
price so that medical providers will treat them. With many private insurers, no single insurer has sufficient
bargaining power to significantly hold down prices. Merging private insurers into one is the wrong solution
because that single private insurer would use its enormous monopoly power to charge very high premiums to employers and
individuals. The best solution is for the government to become the single payer of medical
providers. High price, not high quantity, is the main reason that US medical expenditure—
which equals price times quantity—is so high. That is the conclusion of an empirical study of OECD countries (Anderson et al.
2003), titled “It’s the Prices, Stupid: Why the United States Is So Different from Other Countries.” The study’s authors analyze the
split between price and quantity in 2000, presenting comparisons of different quantity measures including the number of doctors,
nurses, hospital beds, hospital admissions, and hospital days. In most of these, the quantity per capita in the United States was at or
below the OECD median. They conclude that prices, not quantities, are the drivers of cross-national
differences in health spending and that a major cause of the difference in prices is the
difference in the bargaining power of the payers of medical providers. They emphasize the
difference between the United States and other OECD countries in the degree of bargaining power on the buyers’ side of markets for
medical care, writing: Although the huge federal Medicare program and the federal-state Medicaid programs do possess some
monopsonistic purchasing power, and large private insurers may enjoy some degree of monopsony power as well in some localities,
the highly fragmented buy side of the U.S. health system is relatively weak by international
standards. It is one factor, among others, that could explain the relatively high prices paid for health
care and for health professionals in the United States. In comparison, the government-controlled health systems
of Canada, Europe, and Japan allocate considerably more market power to the buy side. (Anderson et al. 2003: 102) But will
government single-payer bargaining power under Medicare for All lead to waiting lists and low
quality? It depends on whether bargaining power is applied severely or moderately. The aim of the government single-payer
should be to negotiate prices that are high enough to make it worthwhile for medical
providers to provide high-quality medical care to all patients, but no higher. If the single-
payer forces prices down too far, providers won’t find it worthwhile, and there will be waiting lists and low quality. The single-
payer should let prices rise enough to eliminate waiting lists and achieve high quality, but no
higher. Without government single-payer intervention and negotiation, medical prices will be much
higher than needed to prevent waiting lists and achieve high quality. In countries where payer
bargaining power has sometimes been applied severely (Britain and Canada), waiting lists have sometimes been generated and
quality has sometimes been inadequate. But in countries where payer bargaining power has been applied
moderately (France and Germany), waiting lists have generally been avoided and quality has
generally been high.
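The card's core claim rests on the identity that expenditure equals price times quantity. The toy illustration below is not from Seidman or Anderson et al.; the numbers are hypothetical and chosen only to show how equal utilization with unequal prices produces the kind of spending gap the study attributes to weak payer bargaining power.

```python
# Toy illustration of the expenditure = price x quantity identity.
# All numbers are hypothetical; they are not drawn from the cited study.

def per_capita_spending(price_per_service: float, services_per_capita: float) -> float:
    """Medical spending per person as price times quantity."""
    return price_per_service * services_per_capita

services = 10  # identical per-capita utilization in both systems
fragmented_payers = per_capita_spending(price_per_service=500, services_per_capita=services)
single_payer = per_capita_spending(price_per_service=250, services_per_capita=services)

print(f"Fragmented-payer spending: ${fragmented_payers:,.0f} per capita")
print(f"Negotiated-price spending: ${single_payer:,.0f} per capita")
print(f"Gap attributable to price alone: ${fragmented_payers - single_payer:,.0f}")
```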
2. Cost-effectiveness analysis---centralization makes it easier, and it solves
inefficiencies.
John Geyman 16, M.D. is the author of The Human Face of ObamaCare: Promises vs. Reality
and What Comes Next and How Obamacare is Unsustainable: Why We Need a Single-Payer
Solution For All Americans, 6-17-2016, “Cost Effectiveness Analysis (CEA) in U.S. Health Care,”
HuffPost, http://www.huffingtonpost.com/john-geyman/cost-effectiveness-
analys_b_10528932.html
Cost effectiveness analysis (CEA), as applied to health care, attempts to estimate the value of
expenditures on procedures or treatments that is returned to patients, such as longer life,
better quality of life, or both. Given that the U.S. has the most expensive health care in the world, with comparatively low value and outcomes compared to many other advanced countries, you would think that CEA would be a major part of health policy in this country. Sadly, the opposite is true, and it is notably absent from the way we do things. This is not to say that no attempts have been made in past years to introduce ways to evaluate effectiveness of health care services, whether involving comparative efficacy or costs. Two national organizations were established in the 1970s—the Office of Technology Assessment (OTA) in 1975 and the National Center for Health Care Technology (NCHCT) in 1978—but both were later abolished after a strong backlash from powerful vested interests, especially the medical device industry and some medical professional organizations. (1,2) The FDA
remains our main regulatory body, but it is handcuffed by political forces preventing it from using CEA in its coverage policies. It has been underfunded over the years, and is largely dependent on user fees from the industries it supposedly regulates for much of its annual budget, with obvious built-in conflicts of interest analogous to the fox in the henhouse. The Affordable Care Act (ACA) postured toward the need for comparative research on health care services by establishing the Patient-Centered Outcomes Research Institute (PCORI). It was intended to pursue clinical effectiveness research (not cost-effectiveness), but it was hobbled from the start by specific bans in the legislation on any authority to dictate coverage or reimbursement policies. A recent study found that it has had minimal impact, with only one-third of its funding going to clinical effectiveness research. (3) It will also disappear in 2019 unless reauthorized by Congress. As we know, up to one third of all health services provided each year are either unnecessary, inappropriate, or even harmful. (4) Here are some examples of why we need a much stronger approach to research on comparative efficacy and cost effectiveness of health services being provided in this country: A 2008 study of 90 drugs approved by the FDA between 1998 and 2000 found that only 394 of 909 clinical trials were ever published in a peer-reviewed journal. (5) Much of the research done by drug manufacturers is in for-profit commercial networks, conducted by their marketing departments, without rigorous scientific methods and with unreliable results; unfavorable results are typically not reported. Two-thirds of new drug applications to the FDA each year aren’t really new, but instead are reformulations or minor modifications of existing drugs or requests
for new uses, hyped as new drugs. (6) Between 2003 and 2012, the number of defective Class I recalls of medical devices, which carry a significant probability of death, increased
from 7 to 57. (7) The FDA approved expanded marketing of off-label cancer drugs in 2009 despite the lack of clinical evidence of their effectiveness. (8) Testosterone drugs for
men are widely marketed by the drug industry, claiming their own “research” shows no adverse cardiovascular events, such as heart attacks and strokes, but major studies over the last 30 years have documented an increase of more than 50 percent of these events among men taking these drugs. (9) Spending on prescription drugs in the U.S. rose to $457 billion in 2015, one-sixth of total health care spending. (10) We should ask why we still don’t have an ongoing, evidence-based mechanism to evaluate the comparative clinical and cost effectiveness of health services. The answer is that it has been opposed successfully to date by the economic and
political power of the vested interests that profit from the status quo of our deregulated marketplace. The Citizens United decision has enabled
the infusion of even more money into politics, in both major parties, and massive lobbyist campaigns are launched by corporate stakeholders defending their interests whenever
new legislation for CEA is being contemplated. Meanwhile, the insurance industry blames the drug industry for accelerating costs even as it increases its own costs and profits at
the expense of its enrollees and taxpayers. Whenever the need for comparative clinical or cost effectiveness research is raised, corporate stakeholders bring up a number of
myths, such as “CEA would stifle innovation,” “it would lead to rationing of care,” and “how can you measure the value of health services anyway”? CEA is an established but
underused discipline in this country. As one response to these myths, wouldn’t it be a good idea to address the widespread overuse of full-body CT scanning as a screening
technique, since more than 30 million such scans are performed every year, posing potentially harmful radiation exposure, without evidence of benefit or approval by the FDA or
the American College of Radiology? (11) The big unanswered question is who and how to decide on the cost effectiveness of health care services—market interests and politics driven by money vs. science and evidence. We have seen how poorly the first approach works. We can look to science-based models around the world for better examples, such as The National Institute for Health and Care Excellence (NICE) in the United Kingdom. In this
country, sooner rather than later, we need an independent, non-partisan, science-based national commission, free from political influence, funded on a long-term basis, and with
authority to recommend coverage and reimbursement policies in the public interest. It
would logically be part of single-payer financing reform with national health insurance
coupled with a private delivery system. As we finally deal with this important issue, we should heed this advice by Sir Michael Rawlins, chairman
of NICE: The United States will one day have to take cost effectiveness into account. There is no doubt about it at all. You cannot keep on increasing your health care costs at the
rate you are for so poor return. You are 29th in the world in life expectancy. You pay twice as much for health care as anyone else on God’s earth. (12)
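Since the card describes CEA without showing what it computes, here is a minimal sketch of the incremental cost-effectiveness ratio (ICER) that bodies like NICE use to compare a new treatment against standard care. The costs, QALY gains, and threshold below are hypothetical, not drawn from Geyman or NICE.

```python
# Minimal incremental cost-effectiveness ratio (ICER) sketch.
# All figures are hypothetical illustrations, not data from the card.

def icer(cost_new: float, qaly_new: float, cost_old: float, qaly_old: float) -> float:
    """Incremental cost per quality-adjusted life year (QALY) gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical comparison: new treatment vs. standard care
ratio = icer(cost_new=60_000, qaly_new=4.5, cost_old=20_000, qaly_old=3.5)
print(f"ICER: ${ratio:,.0f} per QALY gained")           # $40,000 per QALY

# A payer with an explicit willingness-to-pay threshold covers the treatment
# only if the incremental cost per QALY falls below that threshold.
threshold = 50_000                                       # hypothetical threshold
print("Cover" if ratio <= threshold else "Do not cover")
```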

3. Admin costs. Micro-costing studies prove solving them is both necessary AND
sufficient for growth. Single payer is key---the best case multi-payer reforms are
nine times less efficient. Our estimates are conservative.
Jiwani et al. 14. Super Models for Global Health; City University School of Public Health @
Hunter College; Philip R. Lee Institute for Health Policy Studies @ UCSF. 11/13/2014. “Billing
and Insurance-Related Administrative Costs in United States’ Health Care: Synthesis of Micro-
Costing Evidence.” BMC Health Services Research, vol. 14. PubMed Central,
doi:10.1186/s12913-014-0556-7.
***BIR = billing and insurance-related costs, a subset of administrative costs
Background In a well-functioning health care system, sound administration is required to ensure efficient operations and quality outcomes. In the United States, however, the complex structure of health care financing has led to a large and growing administrative burden [1]. In 1993, administrative personnel accounted for 27% of the health care workforce, a 40% increase over 1968 [2]. Similarly, administrative costs as a percentage of total health care spending more than doubled between 1980 and 2010 [3]. Private insurers’ overhead costs have also increased sharply, rising 117 percent between 2001 and 2010 [4]. In the U.S. multi-payer system, insurers’ coverage, billing and eligibility requirements often vary greatly, requiring providers to incur added administrative effort and cost [5]. These payment-related activities can be termed “billing and insurance-related” (BIR) [6]. On the provider side, BIR activities include functions related to interacting with payers, including filing claims, obtaining prior authorizations, and managed care administration. On the payer side, most administrative functions are billing related, with only a small portion spent on care-related issues [7]. Insurers’ profits also contribute to BIR costs. Several studies have used
micro-costing methods — cost estimates constructed from detailed classification of resource use or expenditures — to quantify the
portion of administrative costs attributable to BIR activities in physician and hospital sectors. Though the specific set
of methods used to estimate this cost varies by study, the general approach has been to identify the administrative functions related to BIR activities
and use clinician interviews and/or surveys to determine the proportion of work time spent on these activities. In some studies, this process has been
supplemented with additional interviews with non-clinical staff [8,9] and observations of work flows [9]. In California in 2001, the BIR component of
administrative costs was as high as 61% for physicians (constituting 14% of revenue) and 51% for hospitals (6.6-10.8% of revenue) [7], with
predominantly non-BIR activities such as scheduling and medical records management forming the rest of administrative spending. When adjusted to
a standard definition of BIR, two other studies attributed 10-13% of revenue in physicians’ offices to BIR costs [9,10]. Though studies have documented
BIR costs in physician and hospital sectors, the
specific analytical methods and components included in the
analyses vary, rendering estimates mostly non-comparable. Thus, results cannot be easily combined into a system-wide estimate. To address this problem, we synthesized available micro-costing data on BIR costs. We use an explicit, consistent, and
comprehensive definition of BIR to calculate BIR costs in well-studied sectors; estimate the
portion of BIR spending in other provider sectors; and present a system-wide estimate of total
BIR costs in the U.S. health care system in 2012. We also calculate potential savings from a
system with simplified financing , by comparing measured BIR in US health care sectors to
lower levels observed with different financing mechanisms. This paper updates preliminary
information developed for an Institute of Medicine roundtable on Value and Science-Driven Health Care [6]. It is intended to facilitate policy
discussions about reducing the BIR component of administrative costs. Methods Overview Drawing on U.S. National Health Expenditures (NHE),
existing research and publicly reported data, we estimated total and added BIR costs in the U.S. health care system in 2012. Our estimates included the
following sectors: physician practices, hospitals, private insurers, public insurers and “other health services and supplies.” We assembled micro-costing
estimates of total and added BIR costs from various studies [5,8], as well as the percentage of revenue spent on BIR [5,7,9,10]. We reconciled
differences in methods and findings by adjusting estimates to include the same BIR activities, payers and cost categories (detailed below). We
calculated total BIR costs for each sector as the product of the 2012 U.S. NHE for that sector and the proportion of that sector’s revenue used for BIR.
To calculate added BIR costs, we adjusted our total estimates using benchmarks from simplified financing systems (detailed below). To assess the effect
of input uncertainty, we performed multiple sensitivity analyses. Health system sectors We defined the sectors using categories designated as “personal
health care” in the Centers for Medicare and Medicaid Services’ (CMS) accounting of NHEs, and in the case of payers, from the categories designated as
“health insurance” [11] (Additional file 1: Table S1). Examples of categories included under “other health services and supplies” sector were nursing
care, home health care, prescription drugs, and other medical products. Total BIR We calculated total BIR costs for each sector as: Total BIR costs = 2012 NHE × % revenue for BIR. For example, 2012 projected NHE for physician and clinical services was $542.9 billion [11] and estimated average BIR
costs for physicians as a percent of their gross revenues was 13% [5,7,9]. Thus, we calculated total BIR costs for physician practices in 2012 as $70.6
billion. Existing micro-costing estimates of BIR costs in physician practices vary substantially due to differences in analytic methods and BIR functional
areas included in the analyses [5,7-9,12]. Rather than select only the estimates that were obtained based on the same BIR definition and analytic
method, we undertook a systematic process to make evidence more directly comparable. To do this, we classified BIR into sub-components by type of
cost (e.g. contracting, insurance verification, service coding, billing, information technology, overhead) and payer (e.g., private, public). We adjusted
each cost study as necessary to include all costs (e.g., overhead) and payers (e.g. public payers), based on data from other cost studies and from the
NHE. For example, our estimate of 13% revenue for BIR costs for physician practices is based on a synthesis of three published studies [5,7,9]. It
includes BIR costs at multi-specialty, single-specialty primary care, and single-specialty surgical practices. Each study was adjusted for missing
information. In the study by Morra and colleagues, the reported estimate of BIR costs at 8.5% of revenue accounted for both public and private payers,
but did not include the full range of BIR functional areas in physician practices [5]. Thus, we adjusted the Morra estimate to include information
technology, time for insurance verification, a portion of clinician coding of services, and overhead attributed to BIR administration. This translated to a
total BIR of approximately 13.3% of revenue, or 12.2% of revenue if clinician coding is omitted. See Additional file 1: Table S2, for details of the
synthesis transformations. We estimated that 8.5% of hospital revenue goes towards BIR activities, based on the mid-point value for hospitals found by
Kahn and colleagues [7]. For public insurers, we estimated 3.1% of revenue for BIR, which is the blended mean overhead for Medicare and Medicaid
[13]. Since the majority of administrative functions for private insurers are BIR, we assumed the full value of private insurer overhead, including
profits, as the percentage of revenue for BIR. We estimated this at 18%, which we calculated as the total enrollment-weighted mean overhead for the 19
largest for-profit, publicly-traded insurers based on market capitalization [14], using 2010 data filed with the Securities Exchange Commission (SEC).
Our estimate of private insurer BIR costs includes the administrative costs of private insurers for their administration of Medicare Advantage, Medicare
Part D and Medicaid managed care. We added these costs from the 2011 historical NHE to the total estimate of BIR for private insurers. Recent data on
BIR costs for categories within the “other health services and supplies” sector is absent from the literature, though some earlier data on total
administrative costs is available. An analysis of 1999 data from a sample of nursing homes in California and home health agencies across the U.S. found
administrative expenditures of approximately 19% and 35% of total expenditures, respectively [15]. We conservatively assumed that 10% of revenue for
our other health services and supplies sector categories goes to BIR activities, which is the mean percentage from physician practices and hospitals. We
vary these assumptions in sensitivity analyses. Table 1 shows the NHE and percent of revenue attributed to BIR for each sector.
[[TABLE 1 OMITTED]]
2012 U.S. National Health Expenditures, percent billing and insurance-related (BIR), and BIR proportion considered “added”
Added BIR We
defined added BIR as the costs of BIR activities that exceed those in systems with
simplified BIR requirements . For physicians, hospitals and other providers, we used
Canada’s single-payer system for comparison. For private and public insurers, we used
U.S. Medicare as a comparator. We calculated added BIR costs for physicians, hospitals and
other health services/supplies as: Added BIR = Total BIR in U.S. sector × (BIR in U.S. sector – BIR in Canadian sector) / BIR in U.S. sector. Morra and colleagues estimated annual BIR costs in physicians’ practices at
$82,975 per physician in the U.S. versus $22,205 in Ontario, Canada [5], i.e., 73% lower. While data on BIR costs in U.S. hospitals exists [7], we found
no comparable data on Canadian hospitals or Canadian or U.S. non-physician health service or supply sectors. We assumed an added proportion of
73% for these sectors. We varied these assumptions in sensitivity analyses. For
private and public insurers, we calculated
added BIR costs as: Added insurer BIR = Total insurer BIR × (Insurer overhead – U.S. Medicare overhead) / Insurer overhead. Table 1 summarizes the proportion considered as the added BIR costs of the U.S. multi-payer system for
each health care sector. Sensitivity analyses Excluding clinician coding of services BIR obligations likely require additional coding by clinicians, beyond
that needed for clinical documentation, consuming up to 2.3% of physician revenue [9]. In our base case estimate of BIR costs in physician practices,
we included 50% of the cost of coding. If we exclude clinician coding of services as a BIR function, we calculate a revised estimate of 12% for the
percentage of physician revenue spent on BIR, based on the average of three studies [5,7,9]. Canadian medicare In the base case analysis, we use U.S.
Medicare as a comparison system against which to estimate the added BIR costs of private and public insurers. Due to differing estimates of U.S.
Medicare overhead (i.e. excluding versus including private insurer administration of medical plans) [16], we explored the effect of using Canada’s
Medicare as an alternative comparator to calculate the excess BIR costs of U.S. insurers. We used an overhead estimate for Canada’s Medicare of 1.8%
(2011 forecast) [17] (Additional file 1: Table S3).
***BEGIN TABLE S3---FROM SUPPLEMENTARY FILE***
Table S3: Sensitivity analysis findings based on using Canada’s Medicare as a comparison system for calculating added BIR among U.S. insurers

Sector | Total BIR costs | Added* BIR costs
Physicians | $70 billion | $49 billion
Hospitals | $74 billion | $54 billion
Other health services and supplies | $94 billion | $69 billion
Private insurers | $198 billion | $182 billion**
Public insurers | $35 billion | $15 billion***
TOTAL | $471 billion | $369 billion

* “Added” is defined as spending above the indicated benchmark comparison.
** Based on an “added” proportion of 0.90
*** Based on an “added” proportion of 0.42
***END TABLE S3---FROM SUPPLEMENTARY FILE***
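The study's two formulas each reduce to a single multiplication or ratio. The sketch below is a check of that arithmetic using only figures quoted in the card, not part of the Jiwani et al. analysis; it reproduces the physician-sector total, the roughly 73% provider "added" share, and the 0.90 private-insurer "added" share noted under Table S3.

```python
# Checking the BIR formulas quoted above against the card's own figures.
# Nothing here is independent data; it only reproduces the stated arithmetic.

# Total BIR costs = sector NHE x share of revenue spent on BIR
physician_nhe = 542.9e9        # 2012 NHE for physician and clinical services
physician_bir_share = 0.13     # ~13% of gross revenue spent on BIR
physician_total_bir = physician_nhe * physician_bir_share
print(f"Physician total BIR: ${physician_total_bir / 1e9:.1f} billion")  # ~$70.6 billion, as reported

# Provider "added" share = (US BIR - Canadian BIR) / US BIR, per Morra et al.
us_per_physician, ontario_per_physician = 82_975, 22_205
provider_added_share = (us_per_physician - ontario_per_physician) / us_per_physician
print(f"Provider added share: {provider_added_share:.0%}")               # ~73%, as stated

# Insurer "added" share = (overhead - benchmark overhead) / overhead
private_overhead = 0.18            # enrollment-weighted private insurer overhead
canada_medicare_overhead = 0.018   # Canada's Medicare overhead (sensitivity case)
insurer_added_share = (private_overhead - canada_medicare_overhead) / private_overhead
print(f"Private insurer added share: {insurer_added_share:.0%}")         # 90%, matching Table S3's note
```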
Total BIR Due to uncertainty in some sector-specific inputs, we varied the percentage of revenue for BIR for each sector to obtain a plausible range of
total BIR costs. Where available, we used lower and upper bound estimates from the literature; where unavailable, we varied the estimates by up to ten
percentage points, using wider variations when data was least certain, e.g., for the “other health services and supplies” sector. Varying the estimates in
tandem, we obtained upper and lower bound estimates of total BIR costs across the U.S. health care system in 2012 (Additional file 1: Table S4). Added
BIR For the “other health service and supplies” sector, we
varied our baseline estimate of 27% (Canadian: U.S. BIR costs) by
5 percentage points in either direction. For the hospital sector, we calculated a new ratio of 8.1% (Canadian: U.S. BIR
costs) based on published data on total (not just BIR) hospital administrative spending in the U.S. and Canada [15] (text on added costs, Additional file
1). Ethics statement This research did not involve human subjects and thus did not require ethics committee review. Total BIR Our base case
calculation is that BIR costs in the U.S. totaled $471 billion in 2012. Physicians’ practices spent $70 billion on BIR activities, hospitals spent $74 billion,
and the “other health service and supplies” sector spent an estimated $94 billion (Figure 1). Private insurers contributed the largest share of BIR costs,
$198 billion; public insurers contributed $35 billion.
[[FIGURE 1 OMITTED]]
Total and added BIR costs (billions) by health care sector. Blue = total BIR; Orange = added BIR. Added defined as spending above indicated benchmark comparison. Physicians: synthesis range = $68-71 billion ...
Added BIR About $375 billion (80%) of annual BIR
costs constitutes additional spending compared to a simplified financing system. This 80% reflects 73% savings among provider sectors [5] and 93% savings in the private insurance sector. When compared to Canada’s single payer system, added BIR costs in U.S. physicians’
practices totaled $49 billion annually (Figure 1; Additional file 1: Table S3). In U.S. hospitals and “other health service and supplies” sectors, added BIR
costs were $54 billion and $69 billion, respectively. When compared to BIR costs in U.S. Medicare, additional annual spending on BIR for private and
public insurers totaled $185 billion and $18 billion, respectively (Figure 1). Figure 2 shows each health care sector’s share of total added BIR costs.
Private insurers contributed much of added BIR spending at 49%, though providers collectively represented nearly half of the total.
[[FIGURE 2 OMITTED]]
Percentage of total U.S. added BIR costs by health care sector. Percentages indicate contribution towards total added BIR in the U.S. ($375 billion). Added is defined as spending above indicated benchmark comparison (Canada’s single payer system ...
Sensitivity analysis Excluding clinician coding If
clinician coding of services is excluded as a BIR activity, total and added BIR costs in physicians’ practices are reduced minimally to $65 billion and $48
billion, respectively. Canadian medicare Using Canada’s Medicare overhead (1.8%) [17] instead of U.S. Medicare’s (1.5%) [13] for comparison reduces
added BIR costs for private and public insurers to $182 billion and $15 billion, respectively, yielding a revised estimate of $369 billion in overall added
BIR costs (Additional file 1: Table S3). Total BIR Varying in tandem each of the sector-specific estimates of the percentage of revenue spent for BIR
yields a plausible range for total 2012 BIR costs of $330 billion - $597 billion (Additional file 1: Table S4). Added BIR Varying
the BIR cost
ratios for the hospital and non-physician health service and supplies sectors as described above,
and using the lower and upper bound estimates of total BIR in each sector, we obtained a
plausible range for overall added BIR costs in the U.S. of $254 - $507 billion in 2012 (see Table 2).
[[TABLE 2 OMITTED]]
Lower and upper bound estimates of added billing and insurance-related (BIR) costs in the U.S. health care system
Discussion While published data
exist on BIR costs for certain health care sectors, these isolated estimates do not provide the comprehensive portrayal needed to understand the overall
costs of BIR in the U.S. health care system. First, studies of similar sectors have examined a varying set of BIR activities and costs, complicating
straightforward comparisons and simple aggregation of existing component BIR costs. Akin to the tale of the blind men and the elephant [18], knowing
bits of information about a few isolated pieces cannot construct an accurate picture of the whole. Second, and equally important, evidence on BIR costs
in provider sectors other than physicians’ practices and hospitals is lacking from the published literature. Taken together, these realities have made it
difficult for policymakers to grasp the total magnitude of health care administrative costs due to BIR activities. Our analyses, which synthesize available
micro-costing data on BIR costs using a consistent definition of BIR and extrapolate data to sectors lacking estimates, present the first system-wide
estimate of total BIR costs across the U.S. health care system. Synthesizing data from existing studies, our analyses indicate that BIR costs totaled $471
billion annually in the U.S. in 2012; 80% of this represents additional costs when compared to a simplified financing system. If BIR costs were pared to those of benchmark systems, system-wide savings would exceed $350 billion per year. Total BIR costs currently represent about 18% of U.S. health care expenditures (excluding government public health activities). Non-BIR administrative activities represent an additional 9.4% [7], leaving less than 73% of spending for clinical care (Figure 3; details of estimation of non-BIR administrative costs in Additional file 1). Added BIR costs of $375 billion translate to 14.7% of U.S. health care expenditures in 2012, or 2.4% of GDP [19].


[[FIGURE 3 OMITTED]]
Allocation of spending for clinical care and administration in the U.S. health care system. Values represent share of 2012 U.S. Health Consumption Expenditures (minus government public health activities; i.e., ~$2.6 trillion). BIR = billing ...
Our findings update and expand on previous estimates.
Woolhandler et al. estimated total administrative spending in the U.S. health care system in
1999 at $294.3 billion, with added spending of $209 billion when compared to Canada [15].
Adjusting their estimates to 2012 health spending yields estimated added costs of
approximately $448 billion, an estimate that falls within the upper bound of our sensitivity analysis. The earlier study assessed total administrative costs, not just BIR spending, and hence is not directly comparable to this study. However, a simplified payment system that blunts entrepreneurial incentives (as in Canada) might also reduce non-BIR administrative costs for such items as marketing and internal cost accounting. Our estimates of BIR costs in
physicians’ practices are higher than previous studies, due to the more complete set of BIR activities, payers, and costs quantified in our analysis. Morra
et al. estimated total BIR costs per U.S. physician of $82,975, translating to total and added BIR costs of $38 billion and $28 billion, respectively [5].
However, their analysis involved just a subset of BIR activities, as detailed above. Similarly, Heffernan et al’s [8,12] analysis, which was limited to
private payers, estimated added BIR costs of $26 billion. After adjustment to encompass the entire scope of BIR activities, payers, and overhead costs,
these earlier estimates are consistent with ours (Methods and Additional file 1: Table S2). Remaining differences are small – less than 5% of total BIR
costs – and most likely explained by nuances in the questions used to obtain BIR costs. We present synthesis mid-point estimates of this small variation
for all relevant analyses (Additional file 1: Table S2). Several caveats apply to our estimates. First, BIR estimates in the published literature are most
robust for physicians’ practices, with limited information available for hospitals and almost no data for categories within the sector defined as “other
health service and supplies.” Hence, we explored the effect of uncertainty in our estimates in sensitivity analyses. Second, our analyses assume that BIR
can be distinguished from other administrative functions. This seems a fair assumption, given that consistent findings were obtained for similar
activities using varied methods. Qualitative claims by physicians of the burden of BIR lend further support to this assumption [10]. Finally, our
estimate of total and added BIR is likely conservative on three accounts. First, we assume no BIR spending outside of the direct health sector, e.g., by employers or patients. Since
employment-based coverage is pervasive in the U.S., documentation of BIR costs of employers
might well augment the estimates presented here. Second, for providers, we assume that costs
such as public relations and marketing are incurred for non-BIR reasons. This assumption
might underestimate added BIR costs, since such non-BIR administrative costs might also
be lower in a simplified financing system. Finally, new evidence comparing total hospital
administrative costs in the U.S. to Canada and other OECD countries suggests that added BIR
for hospitals may be substantially higher than our base case estimate [20]. Since these
added costs are a function of the structure of the U.S. multi-payer system, some might
characterize these costs as excess, in that they provide little to no added value to the health
care system. If BIR functions produce secondary benefits, such as enhanced quality or utilization management, the high BIR costs in the U.S. might be
justified. Some research suggests, for example, that prior authorization can reduce over-utilization of brand-name medications without reducing
patient satisfaction [21]. It is also possible that BIR functions provide benefits that have not yet been quantified. Nonetheless, any
unmeasured benefit would have to be large to offset added BIR costs. Moreover, at least one study
has found that higher administrative costs are associated with lower quality [22]. Hence,
reducing BIR costs by adopting a simplified financing system would provide substantial
recurring savings and produce an unequivocal benefit from a societal perspective. It is worth noting also that
a simplified financing system does not preclude utilization controls, and that such controls might be employed in single payer systems while maintaining lower BIR costs. Eliminating added BIR costs of $375
billion per year (14.7% of U.S. health care spending) would provide resources to extend and improve insurance coverage, within current expenditure levels. Since uninsured individuals have utilization of about 50% of insured individuals [23], the current 15% uninsured could be covered with roughly half of the $375 billion. Remaining savings could be applied to improved
coverage for those already insured. Full financial analyses of single payer insurance
reform formalize and extend these analyses [24]. Unfortunately, recent reforms incorporated in the
Affordable Care Act (ACA) and the American Recovery and Reinvestment Act (ARRA) are
unlikely to substantially reduce BIR costs and administrative burden. Data on the BIR portion of administrative
costs is not yet available in the published literature. Using the BIR cost percentages identified in this analysis as a starting point, we projected BIR costs
under the ACA in 2014 and 2018. Our projections were based on estimated increases in the insured population in each health sector (i.e., 7 million
more people covered by private insurance and 8 million more by Medicaid in 2014; 13 and 12 million more, respectively, in 2018) [25]. Assuming
parallel increases in healthcare utilization, stable administrative complexity, and an initial cost of $5.8 million to operate the exchanges [26,27], we
estimate that implementing the ACA will increase system-wide BIR costs by 5 − 7% ($24 − $34 billion) in 2014 and 9 − 11% ($45 − $55 billion) in 2018
(in 2014 USD). Moreover,
greater use of deductibles under the ACA will likely further increase
administrative costs, since each claim will require processing and value adjustment before
determining whether the deductible has been met. Thus, the new system will incur some new
BIR costs for both the insured and uninsured portion of care. Empirical evidence from similar reform in
Massachusetts is not encouraging: exchanges added 4% to health plan costs [28], and the reform sharply increased administrative staffing compared
with other states [29]. While it was hoped that the ARRA’s incentives for adoption of health information
technology (HIT) would reduce costs, partly by streamlining billing and administration [30],
savings have not materialized [31,32]. Indeed, it appears that HIT will impose hefty
implementation and training costs [33], and may require ongoing expenditures for IT
upgrades and maintenance [4]. Moreover, the ACA’s emphasis on financial incentives such
as pay-for-performance may well increase administrative complexity , and hence costs [34].
A recent estimate suggests that simplifying administrative activities within the existing multi-payer system
by implementing a range of standardization, automation and enrollment stabilization reforms
could save $40 billion annually [35]. While these savings are significant, we estimate that the annual administrative savings under a single-payer system would be nearly nine-fold higher. Though some argue that shifting to a single payer system could propagate unintended financial hazards (i.e., overutilization) and inefficiencies, as discussed previously,
utilization controls can be employed in simplified financing systems while also keeping
BIR costs down. Moreover, evidence from the U.S. Medicare program and the systems of several
other countries [1] demonstrates that large, unified payers can achieve significantly
greater efficiencies than multi-payer systems. Unified payment schemes enjoy economies of scale, sharply reduce the burdens of claims processing, and obviate the need for marketing, advertising and underwriting expenses. Conclusions While the estimates presented
here should continue to be refined through additional sector-specific research on BIR costs, the cost burden of BIR activities in the existing U.S. multi-
payer health care system is clear. Implementation
of a simplified financing system offers the potential for
substantial administrative savings, on the order of $375 billion annually, which could cover all
of the uninsured [36] and upgrade coverage for the tens of millions who are under-insured. Further research into the costs of BIR activities to
employers and in areas such as home health care, nursing home care, and prescription drugs would augment the findings from this analysis. Data on
BIR costs since implementation of the ACA is also needed to further illuminate the administrative effects of recent health reforms and provide
additional tangible information for policy decision-making.
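Two of the card's closing comparisons are simple ratios. The sketch below is a check using only the card's figures, not part of the study; the 50% split for covering the uninsured follows the card's own utilization assumption rather than any independent estimate.

```python
# Checking the closing arithmetic in the Jiwani et al. evidence.
# Inputs are the card's own figures; the allocation follows its stated assumption.

added_bir_savings = 375e9      # annual added BIR costs under the multi-payer system
multi_payer_savings = 40e9     # best-case savings from standardization within multi-payer

ratio = added_bir_savings / multi_payer_savings
print(f"Single-payer vs. multi-payer savings: {ratio:.1f}x")  # ~9.4x, i.e. "nearly nine-fold"

# The card assumes the ~15% uninsured use about half as much care as the insured,
# so covering them would take roughly half of the recovered savings.
cost_to_cover_uninsured = added_bir_savings * 0.5
print(f"Rough cost to cover the uninsured: ${cost_to_cover_uninsured / 1e9:.1f} billion")
# The remaining half would be available to upgrade coverage for the under-insured.
```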

Reforms must be system-level---reducing individual transaction costs is insufficient
Theodore Marmor and Jonathan Oberlander 12, Yale University Professor Emeritus in the
Schools of Management and Law and the Department of Political Science, Adjunct Professor in
Public Policy at Harvard’s Kennedy School of Government, Ph.D., Harvard University; professor
and chair of Social Medicine and professor of Health Policy & Management at the University of
North Carolina-Chapel Hill, PhD 1995 Political Science, Yale University, September 2012, “From
HMOs to ACOs: The Quest for the Holy Grail in U.S. Health Policy,” Journal of General Internal
Medicine, Vol. 27, No. 9, pages 1215-1218
The United States has the most expensive medical care system in the world by a
large margin, with per capita expenditures of $7960 in 2009.1 Moreover, despite a recent slowdown due largely to the
recession’s impact, the U.S. is projected to spend over $30 trillion on medical care in the coming
decade.2 Over four decades after President Richard Nixon declared a cost crisis, the United States has yet to get a
firm grip on rising medical care costs. The failure to control health care spending has been accompanied by a
distinctive dynamic. Since the 1970s, American policymakers and policy analysts have relentlessly
searched for the “the Big Fix,”3 a reform that will decisively rein in spending and simultaneously
improve the coordination and quality of medical care. The combination of these ambitious goals and our dismal
record of cost containment has not diminished the health policy community’s endless enthusiasm for the latest fad. We have
run through a truly staggering list of proposed panaceas: Health Maintenance Organizations
(HMOs), Preferred Provider Organizations (PPOs), managed care, capitation, integrated delivery systems,
health savings accounts (HSAs) and consumer-directed care, pay for performance (P4P), health information technology
(HIT), comparative effectiveness research (CER) and much more. Now, bundled payment, value-based
purchasing, patient-centered medical homes, and accountable care organizations (ACOs) have emerged
as the solutions of the day, propelled forward by the 2010 Patient Protection and Affordable Care Act (ACA) and by
private sector initiatives. Reforms aimed at slowing health care spending have encompassed (and often
combined) a range of organizational (HMOs, ACOs), financial (bundling, HSAs, P4P, ACOs), and informational
(HIT, CER) approaches. Some reforms have called for more patient cost-sharing, others for tighter
control of medical services by health plans, and still others for more evidence to guide medical
decision-making. Thus the U.S. has moved rhetorically from the era of managed care to consumer-directed health care and
now into the era of value purchasing and delivery system reform. The range of available ideas is evidently
narrow enough that we are now repeating fads—yesterday’s conviction that capitation held the key to
stemming the tide of rising costs is reborn in today’s faith in bundling while integrated delivery systems and HMOs have morphed
into ACOs.4 THE SEARCH FOR THE HOLY GRAIL Fads in American health policy come and go so quickly that there is too little
reflection about their origins, effects, and whether any are actually effective approaches to controlling health care spending. Why do
American analysts keep searching for the Holy Grail in health policy and what impact has that quest had on our medical care?
American health policy is dominated by the search for these policies largely because of their political appeal. Reform labels promise
to modernize and rationalize the health care system. Who can oppose the march of progress to replace paper medical records or our
ostensibly antiquated fee-for-service payment arrangements? How can anyone oppose reforms that promise to curb medical
spending and yet improve health outcomes? Indeed, because panaceas promise to moderate spending by reducing ineffective care,
improving coordination, and keeping people healthy, such policies offer the prospect of painless cost control.5 That is powerfully
alluring for politicians who want to avoid the conflict associated with policies such as imposing budgetary caps, limiting payments,
restricting the availability of services, or cutting benefits. Further, if new organizations can be created to handle the task of making
the difficult choices, or if new payment tools can be adopted that automatically unleash the right incentives, politicians can avoid
blame for unpopular decisions. Innovation
and its promise to enhance efficiency is an appealing
substitute for policy realism and political will. Many of these reform ideas are framed in ways that makes
rational criticism seem implausible. Few will defend “medical homelessness” or argue that the U.S. medical care system needs less
coordinated care. Indeed, a key characteristic of many reforms is that their descriptive labels are not actually descriptive, but instead
comprise persuasive definitions.6 We used to label health care organizations by their primary characteristics; Kaiser Permanente
was accurately known as a “prepaid group practice.” But beginning with the Nixon administration’s campaign to promote Health
Maintenance Organizations in the 1970s, policymakers and analysts increasingly started to label organizations and policies more by
their aspirations, rather than by their substantive characteristics. “Managed care” and “patient-centered medical homes” exemplify
such marketing slogans, terms that imply success by their very use. Yet many so-called managed care plans actually don’t do much to
manage care.7 And whether a health care institution is “patient centered” is an empirical question (assuming we could agree on a
definition of what it means to be patient-centered). In other words, the language used to describe many health reforms is meant to
convince rather than to describe and explain, and that obscures realistic assessments of their appeal and impact. Another
reason that Americans look for the “big fix” is the absence of a coherent national health
system. In most industrialized democracies, health care spending is controlled “upstream”
through budgeting, fee schedules, and systemwide limits on medical capacity. But adopting
such measures in the U.S. political system has been and remains extraordinarily difficult. Restraining
spending requires reducing the income of health care providers who historically have been effective at resisting robust cost
controls.8 In addition, government measures to reduce spending growth invite charges of rationing that tap into many Americans’
distrust of government—recall the hysteria over mythical “death panels” during the 2009-2010 health care reform debate. And
America’s fragmented political institutions give opponents multiple chances to defeat or weaken
proposals to limit spending. In fact, the U.S. has not had a national health system at all and
consequently, cost containment efforts often focus “downstream” to regulate the costs of
individual medical encounters.9 These efforts are typically led by individual employers and health plans, actors that by
definition cannot pursue systemwide solutions. Our enthusiasm for innovative and organizational solutions
to cost containment is, then, partly a product of our political incapacity to produce
universal health insurance . Belief in “American exceptionalism”—that as a nation we are too different culturally,
socially, and politically to learn from other countries—has reinforced America’s tendency to look inward for solutions to control
health care spending. Problems with Panaceas There are five major problems with the endless search for cost control panaceas. The
first is that the yearning for a transcendent solution inevitably produces a cycle of exaggerated expectations, followed by deep
disappointment. The problem, as Bruce Vladeck argues, begins when a “modestly successful innovation is hyped as the unique and
unitary solution to some complex, persistent problem.”10 Thus many policy analysts celebrated the rise of managed care during the
early to mid-1990s as the solution to America’s health care spending problem. But as health care costs started to accelerate again,
analysts quickly turned to writing managed care’s obituary. Similarly, it will be difficult for ACOs to meet the lofty expectations that
now surround them. ACO euphoria is evident in Ezekiel Emanuel and Jeffrey Liebman’s foolhardy prediction that “By 2020, the
American health insurance industry will be extinct,” replaced entirely by ACOs.11 Given the hype about their transformational
impact, it is worth remembering the Centers for Medicare and Medicaid Services (CMS) median estimate that the ACO Shared
Savings Program will reduce federal government spending on Medicare by only a total of $470 million during 2012-15, a tiny
fraction of total program expenditures.12 Moreover, a recent review by the Congressional Budget Office of
disease management, care coordination, and value-based payment demonstrations—all ideas
currently touted as solutions to Medicare’s financing challenges—found that “most programs have not
reduced Medicare spending .”13 Second, because we invest so much hope and faith in new solutions, and because
persuasive labels make these ideas appear self-evidently right, the real-world challenges in making policies work are commonly
overlooked. Aspirations are undercut by implementation problems, unanticipated outcomes and political constraints. Managed
care triggered backlash from providers and patients. Supposedly the least effective form of managed care—
PPOs—surprisingly emerged as the victor in the market by the beginning of the 2000s.14 ACOs may enhance integration
of some providers and foster better coordination of some care. But the incentives to create ACOs
may also lead to greater consolidation of health care providers and to hospitals purchasing
physician practices, both of which could raise overall health spending.15 A third problem is generalizability. The enthusiasm for
particular reforms often stems from positive results in a particular geographic and institutional settings: Kaiser Permanente, the
Palo Alto clinic and the Mayo Clinic were held up as exemplars in the past, today they are joined by the Veterans Administration,
Geisenger, and Intermountain. These institutions have in many cases produced impressive results. But the success of any particular
institution does not imply that its performance can be extrapolated to the whole of American medicine. The difficulties Kasier has
had in making its model work outside of its traditional regions illustrates this point.16 And the VA has a level of organizational
centralization that is not found in most other areas of American medicine. Creating new types of organizations is extraordinarily
difficult and replicating them across different institutional, political, economic and geographic settings is even more so.17 A fourth
problem is that these reform ideas usually focus on reducing the utilization of medical services. There
are, to be sure, many instances of low-value medical care in the U.S. worth reducing.18,19 And in the past
decade, increases in Medicare expenditures on physician services have been driven mostly by growth in service volume and
intensity.20 But a predominant focus on utilization diverts us from other important
sources of high health care spending .21, 22, 23 The difference between Canadian and
American spending on hospital and physician care, according to a recent study, is mostly explained
by prices and administrative expenses , reflecting the lower costs of Canada’s single-payer system.24 Only
14% of the difference is attributable to higher utilization of medical services in the U.S. Yet American policy analysts
continue to focus on ways to limit excessive utilization, while giving comparatively short shrift to
policies—such as all-payer reform—that could lower prices and administrative costs .
Plan
The United States federal government should establish a health insurance
program, publicly financed and government-administered, to provide
comprehensive health care in the United States.
Disease
Advantage two is disease:

Diseases cause extinction


Ranu Dhillon 17, instructor at Harvard Medical School and a physician at Brigham and Women’s Hospital in Boston. He works on building health systems in developing countries and served as an advisor to the president of Guinea during the Ebola epidemic, Harvard Business Review, 3-15-17, “The World Is Completely
Unprepared for a Global Pandemic”, https://hbr.org/2017/03/the-world-is-completely-
unprepared-for-a-global-pandemic
We fear it is only a matter of time before we face a deadlier and more contagious pathogen,
yet the threat of a deadly pandemic remains dangerously overlooked. Pandemics now occur with
greater frequency , due to factors such as climate change , urbanization , and international
travel . Other factors, such as a weak World Health Organization and potentially massive cuts to funding for
U.S. scientific research and foreign aid, including funding for the United Nations, stand to deepen our vulnerability. We
also face the specter of novel and mutated pathogens that could spread and kill
faster than diseases we have seen before . With the advent of genome-editing technologies, bioterrorists could
artificially engineer new plagues, a threat that Ashton Carter, the former U.S. secretary of defense, thinks could rival nuclear weapons in deadliness. The two of us have advised the president of Guinea on stopping Ebola. In addition, we have worked on ways
to contain the spread of Zika and have informally advised U.S. and international organizations on the matter. Our experiences tell us
that the world is unprepared for these threats. We urgently need to change this trajectory. We can start by learning four
lessons from the gaps exposed by the Ebola and Zika pandemics. Faster Vaccine Development The most effective way to stop pandemics is with
vaccines. However, with Ebola there was no vaccine, and only now, years later, has one proven effective. This has been the case with Zika, too. Though
there has been rapid progress in developing and getting a vaccine to market, it is not fast enough, and Zika has already spread worldwide. Many other
diseases do not have vaccines, and developing them takes too long when a pandemic is already under way. We need faster pipelines, such as the one
that the Coalition for Epidemic Preparedness Innovations is trying to create, to preemptively develop vaccines for diseases predicted to cause outbreaks
in the near future. Point-of-Care Diagnostics Even with such efforts, vaccines will not be ready for many diseases and would not even be an option for
novel or artificially engineered pathogens. With no vaccine for Ebola, our next best strategy was to identify who was infected as quickly as possible and
isolate them before they infected others. Because Ebola’s symptoms were identical to common illnesses like malaria, diagnosis required laboratory
testing that could not be easily scaled. As a result, many patients were only tested after several days of being contagious and infecting others. Some were
never tested at all, and about 40% of patients in Ebola treatment centers did not actually have Ebola. Many dangerous pathogens similarly require
laboratory testing that is difficult to scale. Florida, for example, has not been able to expand testing for Zika, so pregnant women wait weeks to know if
their babies might be affected. What’s needed are point-of-care diagnostics that, like pregnancy tests, can be used by frontline responders or patients
themselves to detect infection right away, where they live. These tests already exist for many diseases, and the technology behind them is well-
established. However, the process for their validation is slow and messy. Point-of-care diagnostics for Ebola, for example, were available but never used
because of such bottlenecks. Greater Global Coordination We need stronger global coordination . The
responsibility for controlling pandemics is fragmented, spread across too many players
with no unifying authority . In Guinea we forged a response out of an amalgam of over 30 organizations, each of which had its
own priorities. In Ebola’s aftermath, there have been calls for a mechanism for responding to
pandemics similar to the advance planning and training that NATO has in place for its
numerous members to respond to military threats in a quick, coordinated fashion. This is the right
thinking, but we are far from seeing it happen. The errors that allowed Ebola to become a crisis replayed with Zika, and the WHO, which
should anchor global action, continues to suffer from a lack of credibility. Stronger Local Health Systems
International actors are essential but cannot parachute into countries and navigate
local dynamics quickly enough to contain outbreaks . In Guinea it took months to establish the ground game
needed to stop the pandemic, with Ebola continuing to spread in the meantime. We need to help developing countries establish health systems that can
provide routine care and, when needed, coordinate with international responders to contain new outbreaks. Local health systems could be established
for about half of the $3.6 billion ultimately spent on creating an Ebola response from scratch. Access to routine care is also
essential for knowing when an outbreak is taking root and establishing trust. For months, Ebola spread before anyone knew it was happening,
and then lingered because communities who had never had basic health care doubted the intentions of foreigners flooding into their villages. The
turning point in the pandemic came when they began to trust what they were hearing about Ebola and understood what they needed to do to halt its
spread: identify those exposed and safely bury the dead. With
Ebola and Zika, we lacked these four things — vaccines, diagnostics,
global coordination, and local health systems — which are still urgently needed. However, prevailing
political headwinds in the
United States, which has played a key role in combatting pandemics
around the world , threaten to make things worse. The Trump administration is seeking drastic budget cuts in funding for foreign aid
and scientific research. The U.S. State Department and U.S. Agency for International Development may lose over one-third of their budgets, including
half of the funding the U.S. usually provides to the UN. The National Institutes of Health, which has been on the vanguard of vaccines and diagnostics
research, may also face cuts. The Centers for Disease Control and Prevention, which has been at the forefront of responding to outbreaks, remains
without a director, and, if the Affordable Care Act is repealed, would lose $891 million, 12% of its overall budget, provided to it for immunization
programs, monitoring and responding to outbreaks, and other public health initiatives. Investing in our ability to prevent and
contain pandemics through revitalized national and international institutions should be our shared
goal. However, if U.S. agencies become less able to respond to pandemics, leading institutions from other nations, such as Institut Pasteur and the
National Institute of Health and Medical Research in France, the Wellcome Trust and London School of Hygiene and Tropical Medicine in the UK, and
nongovernmental organizations (NGOs have done instrumental research and response work in previous pandemics), would need to step in to fill the void. There is no border wall against disease. Pandemics are an existential threat on par with climate change and nuclear conflict. We are at a critical crossroads, where we must either take the steps needed to
prepare for this threat or become even more vulnerable. It is only a matter of time before we are hit by a deadlier, more contagious pandemic. Will we
be ready?

The US is key and there’s no burnout


Yaneer Bar-Yam 16, physicist and complex systems scientist, Founding President of the New
England Complex Systems Institute, Ph.D., S.B., physics, Massachusetts Institute of Technology,
“Transition to extinction: Pandemics in a connected world,” NECSI, 7-3-2016,
http://necsi.edu/research/social/pandemics/transition
[ FIGURE 1 OMITTED ] The video (Figure 1) shows a simple model of hosts and pathogens we have used to study evolutionary dynamics. In the
animation, the green are hosts and red are pathogens. As pathogens infect hosts, they spread across the system. If you look closely, you will see that the
red changes tint from time to time — that is the natural mutation of pathogens to become more or less aggressive. Watch as one of the more aggressive—
brighter red — strains rapidly expands. After a time it goes extinct leaving a black region. Why does it go extinct? The answer is that it spreads so rapidly
that it kills the hosts around it. Without new hosts to infect it then dies out itself. That the rapidly spreading pathogens die out has important
implications for evolutionary research which we have talked about elsewhere [1–7]. In the research I want to discuss here, what we were interested in is
the effect of adding long range transportation [8]. This includes natural means of dispersal as well as unintentional dispersal by humans, like adding
airplane routes, which is being done by real world airlines (Figure 2). [ FIGURE 2 OMITTED ] When
we introduce long range
transportation into the model, the success of more aggressive strains changes. They can use the long
range transportation to find new hosts and escape local extinction. Figure 3 shows that the more
transportation routes introduced into the model, the more higher aggressive pathogens are able
to survive and spread. [ FIGURE 3 OMITTED ] As we add more long range transportation, there is a critical point at
which pathogens become so aggressive that the entire host population dies. The pathogens die at the
same time, but that is not exactly a consolation to the hosts. We call this the phase transition to extinction (Figure 4). With increasing levels of global transportation, human civilization may be approaching such a critical threshold.
Figure 4: The probability of survival makes a sharp transition (red line) from one to zero as we
add more long range transportation (horizontal axis). The right line (black) holds for different
model parameters, so we need to study at what point the transition will take place for our world.
In the paper we wrote in 2006 about the dangers of global transportation for pathogen evolution and pandemics [8], we mentioned the risk from Ebola.
Ebola is a horrendous disease that was present only in isolated villages in Africa. It was far away from the rest of the world only because of that
isolation. Since Africa was developing, it was only a matter of time before it reached population centers and airports. While the model is about
evolution, it is really about which pathogens will be found in a system that is highly connected, and Ebola can spread in a highly connected world. The
traditional approach to public health uses historical evidence analyzed statistically to
assess the potential impacts of a disease. As a result, many were surprised by the spread of Ebola
through West Africa in 2014. As the connectivity of the world increases, past experience is
not a good guide to future events. A key point about the phase transition to extinction is
its suddenness. Even a system that seems stable, can be destabilized by a few more long-
range connections, and connectivity is continuing to increase. So how close are we to the tipping
point ? We don’t know but it would be good to find out before it happens. While Ebola ravaged three countries in West Africa, it only resulted in a
handful of cases outside that region. One possible reason is that many of the airlines that fly to west Africa stopped or reduced flights during the
epidemic [9]. In the absence of a clear connection, public health authorities who downplayed the dangers of the epidemic spreading to the West might
seem to be vindicated. As with the choice of airlines to stop flying to west Africa, our analysis didn’t take into consideration how people respond to
epidemics. It does tell us what the outcome will be unless
we respond fast enough and well enough to stop the
spread of future diseases, which may not be the same as the ones we saw in the past. As the world becomes more
connected, the dangers increase. Are people in western countries safe because of higher quality health systems? Countries like the
U.S. have highly skewed networks of social interactions with some very highly connected
individuals that can be “superspreaders.” The chances of such an individual becoming
infected may be low but events like a mass outbreak pose a much greater risk if they do
happen. If a sick food service worker in an airport infects 100 passengers, or a contagion event happens in mass
transportation, an outbreak could very well prove unstoppable . Watch this mock video of a pathogen spreading
globally through land and air transportation. Long range transportation will continue to pose a threat of pandemic if its impacts cannot be contained.
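To make the mechanism in the Bar-Yam card concrete, the following is a minimal Python sketch of a host-pathogen lattice with occasional long-range jumps. It is not Bar-Yam's model or code: the grid size, transmissibility, mortality, regrowth rate, and long-range link probabilities are all hypothetical parameters chosen only for illustration, and pathogen mutation is left out entirely. The point is the structural one the card makes: locally, an aggressive pathogen exhausts its neighboring hosts, but long-range links let it keep finding fresh host refuges, so sweeping the long-range probability upward is the simplest way to probe where host survival starts to collapse.

# A minimal, illustrative sketch of the dynamic described above -- NOT
# Bar-Yam's actual model or code. All parameters are hypothetical choices
# for demonstration only; pathogen mutation is omitted entirely.
import random

def run(grid_size=50, steps=300, transmissibility=0.8, mortality=0.25,
        regrowth=0.05, long_range_prob=0.0, seed=0):
    """Return the fraction of cells occupied by healthy hosts at the end.

    Cell states: 'S' susceptible host, 'I' infected host, '.' empty.
    Each infected host picks one transmission target per step -- usually a
    lattice neighbor, occasionally a random distant cell (the stand-in for
    long-range transportation) -- and then dies with probability `mortality`.
    Hosts reproduce only into empty cells adjacent to a surviving host.
    """
    rng = random.Random(seed)
    n = grid_size
    grid = [['S'] * n for _ in range(n)]
    for dx in range(3):                      # small infected cluster at center
        for dy in range(3):
            grid[n // 2 + dx][n // 2 + dy] = 'I'

    for _ in range(steps):
        infected = [(i, j) for i in range(n) for j in range(n) if grid[i][j] == 'I']
        if not infected:                     # pathogen has died out
            break
        rng.shuffle(infected)
        for i, j in infected:
            if rng.random() < long_range_prob:
                ti, tj = rng.randrange(n), rng.randrange(n)
            else:
                di, dj = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
                ti, tj = (i + di) % n, (j + dj) % n
            if grid[ti][tj] == 'S' and rng.random() < transmissibility:
                grid[ti][tj] = 'I'
            if rng.random() < mortality:     # aggressive pathogens kill their hosts
                grid[i][j] = '.'
        regrown = []
        for i in range(n):
            for j in range(n):
                if grid[i][j] == '.' and rng.random() < regrowth:
                    neighbors = [grid[(i - 1) % n][j], grid[(i + 1) % n][j],
                                 grid[i][(j - 1) % n], grid[i][(j + 1) % n]]
                    if 'S' in neighbors:
                        regrown.append((i, j))
        for i, j in regrown:
            grid[i][j] = 'S'
    return sum(row.count('S') for row in grid) / (n * n)

if __name__ == "__main__":
    # Sweep the long-range link probability and watch host survival change.
    for p in [0.0, 0.05, 0.1, 0.2, 0.4]:
        frac = sum(run(long_range_prob=p, seed=s) for s in range(3)) / 3
        print(f"long_range_prob={p:.2f}  surviving host fraction={frac:.2f}")

Because the parameters are arbitrary, the printed survival fractions should be read only as a qualitative trend, not as evidence about where the real-world threshold sits.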

The plan solves:

1. Prevents delays in seeking care


Laura H. Kahn 17, author of One Health and the Politics of Antimicrobial Resistance, published
in 2016 by Johns Hopkins University Press. A general internist who began her career in health
care as a registered nurse, Kahn works on the research staff of Princeton University's Program
on Science and Global Security. Her expertise is in public health, biodefense, and pandemics, 6-
5-2017, "Why access to health care is a national security issue," Bulletin of the Atomic Scientists,
http://thebulletin.org/why-access-health-care-national-security-issue10819
Early last month, US House Republicans rammed through the American Health Care Act, a remarkably regressive piece of legislation
that, among other flaws, would be disastrous for pandemic planning and preparedness. The bill eliminates funding for the Prevention and Public Health Fund, which was created under the 2010 Affordable Care Act to invest in vaccination programs, electronic laboratory reporting of infectious diseases, and infection-prevention programs. Vaccines are an important preventive strategy against deadly pandemics, while electronic lab reporting facilitates a rapid response to disease. In other
words, these are precisely the funds that will be needed to prevent the next Ebola or Zika virus
from turning into a national catastrophe . In late May, the Congressional Budget Office delivered its projections on the
House bill’s costs and impacts, finding that it would leave an estimated 51 million people under the age of 65 uninsured by 2026—23 million more than the estimated 28 million who will be uninsured under the current law. A Senate version of the bill
may not pass, which would end Congressional Republicans’ umpteenth attempt to undermine or reverse the Affordable Care Act. But we can be sure
their fight will continue, and that has important national security implications even beyond slashing emergency-planning funds (which, by the way,
Trump’s proposed federal budget also does). Cutting the Prevention and Public Health Fund, which deals directly with planning for bioterror attacks
and pandemics, was only the most obvious way in which the House bill attempted to undermine American security. Over the long term, there is also a
movement afoot to put basic health care out of reach of many Americans. Simply making healthcare unaffordable may seem less dramatic than slashing
an emergency-preparedness budget, but doing so also undermines national security. As the Congressional Budget Office report suggests, the American
Health Care Act would make healthcare essentially unaffordable for people with pre-existing conditions, because it would allow insurance companies to
dramatically increase their premiums. Ten years ago, I wrote about the security impact of the uninsured during the George W. Bush presidency. In
2005, almost 47 million people (about 16 percent of the total US population) were uninsured. Thanks to the Affordable Care Act passed under the
Obama administration, that number dropped to a low of 11 percent, according to a Gallup poll taken during the first quarter of 2016. The Affordable
Care Act was a big step in the right direction, but it didn’t close the gap, and the national security and public health challenges of having a large fraction
of the population uninsured remain as relevant today as they were a decade ago. Uninsured
people delay seeking health
care . Once they seek it, often in a busy emergency room , they are typically given less attention
than people with insurance. This failure to get care becomes a danger not only for the individual but for
the public at large when the problem is a deadly infectious disease . We saw this scenario play
out in Dallas during the Ebola crisis of 2014 and 2015. A poor Liberian man, infected with the virus,
presented himself to Texas Health Presbyterian Hospital with severe abdominal pain and a high
fever. He was examined and sent home with a bottle of antibiotics. Amazingly , he did not set off an
Ebola outbreak in his community, though the risk that he could have was significant and the wider
public shouldn’t count on being so lucky next time . Before dying, he infected two nurses who had received
inadequate training and equipment to protect themselves. During the anthrax crisis of 2001, in which spores of the deadly disease were sent through
the US mail, many people infected were federal employees with health insurance. If these postal workers hadn’t had easy access to health care, the
death toll might have been higher than only five; 17 more were infected but survived thanks to timely medical attention. Anthrax spores do not spread
from person to person, but it’s no stretch to imagine a different scenario: Suppose
a future attack involves smallpox , a
highly communicable virus , and that the initial victims are uninsured childcare workers or food handlers. The initial signs of
smallpox include fever, chills, and headache. Uninsured victims would likely delay trying to get care, hoping for the symptoms to pass. By waiting they would certainly expose others to the virus, potentially setting off a pandemic. Countries like Canada, which has universal health coverage and a well-funded
public health infrastructure, are much better prepared to handle deadly epidemics . In 2003, Canada confronted Severe
Acute Respiratory Syndrome (SARS), which originated in China. A physician from Guangdong province inadvertently infected a number of tourists
with the SARS virus, setting off a global pandemic after everyone returned to their home countries. Among the infected travelers was an elderly
Canadian woman who returned to Toronto after a 10-day vacation in Hong Kong. Over the course of about four months, the Canadian health system
worked hard to contain the virus, treating 400 people who became ill and quarantining 25,000 Toronto residents who may have been exposed.
Ultimately, 44 people died from the disease in Canada, but the result would have been much worse without a quick and well-organized response. The
Canadian government’s response had its glitches—primarily in the form of poor political leadership. Mel Lastman, the mayor of Toronto and a former
furniture salesman, became angry when the World Health Organization (WHO) issued a travel advisory against his city. He railed against the WHO’s
decision on television, revealing his complete lack of knowledge about either the organization or public health in general. As a result of Lastman’s poor
leadership, he was ultimately relegated to a secondary role as the deputy mayor took his place. Lastman’s credibility and legitimacy never recovered
from the SARS outbreak. Likewise, US leaders will be judged by how they handle a bioterrorist attack or pandemic. Unlike
Canada,
America’s piecemeal healthcare and public health systems are inherently less able to
handle such crises . The Affordable Care Act helped fill in the gaps, but really, the only way to
prepare for the eventuality of pandemics or bioterrorist attacks is with a single-payer
government-run system that covers everyone. The United States might consider
modeling its health care system after the one in Israel, a country that, given longstanding
threats, takes every terrorist risk very seriously . In 1994, it established universal health
coverage for all citizens. The country’s Ministry of Health monitors and promotes public health,
oversees the operations of the nation’s hospitals, and sets healthcare priorities. As a result,
Israel’s public health, emergency response, and hospital systems are state-of-the-art, highly
efficient, and coordinated—a necessity when responding to terrorist attacks . The
preamble to the US Constitution states the goals to “provide for the common defense” and “promote the general Welfare.” The US government won’t
fulfill either of these duties if it fails to protect its citizens against pandemics and bioterrorism. The mandate requires a robust public health
infrastructure and a universal healthcare system that covers all Americans. The Trump Administration and Congressional Republicans threaten to
undermine this essential function of government, unnecessarily jeopardizing American lives.

2. Makes care preventative


CNA 17, California Nurses Association, January 2017, “SARS, EBOLA, AND ZIKA: What
Registered Nurses Need to Know About Emerging Infectious Diseases,” accessed via Google
Cache
[ CONCLUSION ] We have discussed examples of emerging infectious diseases that do not occur exclusively in tropical, developing countries. Many diseases do emerge in these places — concentration of the population in slums, rapid deforestation and destruction of environment, effects of climate change, lack of healthcare and public health infrastructure all conspire to
create conditions for new diseases to emerge or old diseases to re-emerge. But it is false to
assume that residents of the United States and other developed countries are safe
from these diseases . SARS traveled to Toronto and infected many healthcare workers and
other patients in hospitals there. Ebola traveled to Texas and infected two healthcare workers. Zika has been
brought to the United States in hundreds of travelers returning from areas with local transmis- sion, introducing it to local
mosquitoes in Florida. The examples discussed also illustrate the potential for diseases to erupt in the United States. SARS exploded in Toronto and it was just barely that Ebola was contained in Texas. Zika has not yet been fully contained in Florida, and the full impact of Zika is yet to be seen. The public health infrastructure in the United States is fragmented and underfunded. Healthcare workers are most often the first line of contact for infectious travelers. The lack of protections for healthcare workers was demonstrated clearly in the SARS and the
Ebola epidemics. The profit-driven healthcare system typically reacts to infectious disease outbreaks rather than taking a precautionary approach to protect workers and patients. The United States needs a single-payer system to reorder healthcare industry priorities from profit to care, economic and political reforms, and the simultaneous funding of public health systems including vector control and emerging disease research and vaccine/diagnostics development. This systemic change is necessary to protect global health. The increasing income inequalities in the United States and across the world are intimately tied to the forces that have led to increased urbanization. Many infectious disease outbreaks, like Zika and Ebola, have taken root in slums before exploding into full-blown epidemics regionally and even globally. One author,
Matthew Gandy, describes the situation thus:
Big Data
Advantage three is big data:

The plan is key:

1. Interoperability---single payer incentivizes data format uniformity and eliminates profit motives that impede sharing
Frolick et al. 17, Frolick is Ph.D., Professor/Endowed Chair of Management Information Systems; Thilini Ariyachandra is an Associate Professor of Management Information Systems in the Williams College of Business at Xavier University in Cincinnati, Ohio, USA; and Sadath Hussain is with Xavier University, "Patient Healthcare Smart Card System: A Unified Medical Record for Access and Analytics," Journal of Information Systems Applied Research, ISSN: 1946-1836, 10(1), April 2017.
The overall integration of patient health records is the main success factor of UMRAA. However, there is a need for the healthcare
system stakeholders to realize that it is important to either have a single payer system or a federally mandated multi payer system for the proposed plan to be adopted and implemented nationwide. Besides the factors that were discussed earlier, the main factor that stops different healthcare organizational silos from
adopting something like this is that they fear losing patients to their competitors . This paper
has highlighted this fundamental deterrent that needs to be addressed in order to have buy-in from the
payers and providers of healthcare. If not, there may be a need to make it a federal mandate, just as it was done in the case of the "Affordable Care Act." Smart Card Alliance, which is a non-profit organization whose purpose is to develop an understanding and explain the use of smart card technology, is trying to bring awareness and therefore stays connected to industry leaders through educational programs, marketing research, and open forums (Alliance 2015). They have also been trying to ease the industry's fear that revolves around the security and privacy issues that come with it. However, Smart Card Alliance fails to understand that in the US, it is going to be very difficult to implement such a system due to multiple payers (Hussey & Anderson 2003), in spite of all the
benefits that have been discussed already in this paper. There are healthcare smart cards in the US that are
currently being proposed for nationwide implementation. However , the players proposing such
smart card implementation are failing to take into account the biggest barrier to nationwide system implementation: the multi payer system. The US healthcare system is comprised of multiple payers. Medicare is the single
largest payer in the US and the rest are multiple private insurers. Therefore, US healthcare has been mostly working in silos up until the Affordable Care Act (ACA) came into play. 9. CONCLUSION The use of heterogeneous patient information systems in various health care
facilities can make
it a challenge to report and analyse patient data. The data integrity and
completeness can be challenging for the employers, clinicians, and researchers. The UMRAA card
distributed information system approach can combine and integrate the pertinent patient data
in all the healthcare facilities to support quality of health care, reporting, treatment and management. Besides the innumerable
benefits of this system, there are challenges to overcome. These challenges come in the shape of privacy, security, and costs associated with the
initiation, implementation, and maintenance of this system. Smart cards are not new to the healthcare industry. Current health smart cards are in use in many parts of Europe as well as here in the US, such as the one that was implemented by the Mount Sinai health system in New York. Most of these health smart cards are used for reimbursement feasibility. In addition to making reimbursement easy, there are a few that are more geared towards
improving the care of patients as well as improving quality and reducing cost. There
are endless possibilities and benefits
that could come along with this program, such as globalization of healthcare and
uniformity in the quality of services offered to the patients. An UMRAA system has the
potential to reduce malpractices, delayed decision making, etc. which usually occur as a result of healthcare providers
not having enough information. Other future possibilities in health care with UMRAA would be with the use of
analytics . Analytics for research purposes using the data derived from the widespread use of UMRAA card would
potentially lead to better health care process reengineering .
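As a rough illustration of the interoperability claim, here is a short, purely hypothetical Python sketch of the record-unification idea: two providers export the same patient's encounters in incompatible formats, a single canonical schema is mandated, and the normalized encounters are merged into one longitudinal record per patient. The schema, field names, and sample records are invented for this example and are not the UMRAA specification.

# Illustrative only: a toy version of the record-unification idea behind a
# system like UMRAA. Schema, field names, and sample records are hypothetical.
from datetime import datetime

# Two providers exporting the "same" patient's encounters in incompatible formats.
HOSPITAL_A = [{"pt_id": "P-1001", "dx_code": "E11.9", "visit_dt": "2017-03-02"}]
CLINIC_B   = [{"patientId": "P-1001", "diagnosis": "I10", "date": "03/05/2017"}]

# One canonical schema that every submitting system is required to emit.
def from_hospital_a(rec):
    return {"patient_id": rec["pt_id"],
            "icd10": rec["dx_code"],
            "date": datetime.strptime(rec["visit_dt"], "%Y-%m-%d").date()}

def from_clinic_b(rec):
    return {"patient_id": rec["patientId"],
            "icd10": rec["diagnosis"],
            "date": datetime.strptime(rec["date"], "%m/%d/%Y").date()}

def unify(*sources):
    """Merge normalized encounters into one longitudinal record per patient."""
    records = {}
    for normalize, rows in sources:
        for row in rows:
            encounter = normalize(row)
            records.setdefault(encounter["patient_id"], []).append(encounter)
    for encounters in records.values():
        encounters.sort(key=lambda e: e["date"])    # chronological history
    return records

if __name__ == "__main__":
    unified = unify((from_hospital_a, HOSPITAL_A), (from_clinic_b, CLINIC_B))
    for pid, encounters in unified.items():
        print(pid, [(e["date"].isoformat(), e["icd10"]) for e in encounters])

The technical step is trivial once every payer-facing system is required to emit the same canonical fields; the card's argument is that only a single payer, or a federal mandate, creates that requirement and removes the competitive reason to withhold records.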

2. Depth and breadth of datasets---only single payer can collect granular data
system-wide
Stuss et al 15, Donald T. Stuss, PhD in Psychology from Ottawa University, Professor,
Graduate Dept. of Rehabilitation Science, Faculty of Medicine, University of Toronto, University
Professor, Department of Psychology, Faculty of Arts and Science; Department of Medicine
(Neurology), Faculty of Medicine; Centre for Studies of Aging, University of Toronto, Shiva
Amiri, CEO at BioSymetrics Inc., Ph.D. in Computational Biochemistry from Oxford, Martin
Rossor, NIHR National Director for Dementia Research, Professor of Clinical Neurology,
University College London, Richard Johnson, CEO, Global Helix LLC, Juris Doctor degree from
the Yale Law School where he was Editor of the Yale Law Journal, he received his M.S. from
MIT where he was a National Science Foundation National Fellow, Zaven Khachaturian,
President of the Campaign to Prevent Alzheimer’s Disease by 2020, PhD Case-Western Reserve,
“How we can work together on research and health big data: Strategies to ensure value and
success,” Chapter 5 in Dementia Research and Care Can Big Data Help?: Can Big Data Help?,
edited by Anderson Geoff, Oderkirk Jillian, pgs. 61-74, google books
The value lies in the creation of a system, integrating all partners from the very beginning of
research to catalyse, facilitate, and maximise scientific, health care and policy, and
commercialisation efforts. The standardisation of assessments and the sharing of data
have many positive outcomes. Assessment standardisation in all of the clinical centres involved in the
research means consistency of clinical evaluation at the research level across regional and national
boundaries and an increase in the number of individuals involved in research activities . The sample
size increase has obvious benefits for research power , and the study of mechanisms of
disorders across diseases. With a greater number of individuals involved, and careful
standardised characterisation, the potential exists for good data and high quality clinical trial
platforms . There is an increased opportunity to observe the variability and heterogeneity of
disease expression (Georgiades et al., 2013; Stuss and Binns, 2008), and develop well-characterised sub-groups. A direct
and completely linked corollary is improved diagnosis and treatment . This should be
attractive to improve clinical trials and commercialisation of neuroscience research, in both
neurotechnology and neurotherapeutics, i.e., the potential benefit of targeted pharmacological and behavioural treatments. In
essence, there is a real opportunity for product development based on a “personalised medicine” approach. And this will only be
enhanced if the full data sets from all past clinical trials are shared (Eichler et al., 2013). Equally important is the need to
link both research "deep data” and an individual’s and population “broad data” (defined as the
data in the health system - often the greatest breadth of data is in single payer arrangements — of the
patient’s medications, usage of the health system, changes in personal health over time, the existence of co-morbidities, and the
associated cost of this usage) about AD and dementia with the vast amounts of data generated during clinical trials. It is important to
take advantage of the new policies adopted by many biopharmaceutical companies, social philanthropists, and government funders
to increasingly share clinical data. This
provides a unique opportunity for health policy and health
service delivery research . The OECD should identify and catalogue these new policies and trends across different
regulatory jurisdictions. For example, the US National Academies recently released a new report proposing guiding principles for
responsible sharing of clinical trial data (National Research Council, 2014). As indicated in the 2014 OECD report on harnessing big
data, “big data is, however, not just a quantitative change, it is a conceptual and methodological change” (OECD, 2014). The goal
then, to truly maximise the value of big data, is to establish a system where basic science flourishes because
of patient characterisation and where boundaries around diseases are removed to facilitate
studies of mechanisms of disorders ; where the informatics platform and data sharing
within and across diseases provide an opportunity not only for hypothesis driven research, but for chance finding,
data mining, and the creation of new hypotheses; where discovery and treatment are more
closely linked; where industry works closely with researchers to implement their discoveries into
new products; where the new products for improved patient health has an economic benefit through the creation of new
companies and jobs; where the creativity of the researchers and the needs of individuals with disorders fuel new research questions
and ideas; where the network of patient advocacy groups and health charities, as well as knowledge exchange with primary health
care givers push early and rapid uptake of new diagnoses and new treatments; and where collaborative linkages and partnerships are
created to harness the value of these approaches. There are technical challenges of bringing together datasets even within
jurisdictions, let alone beyond national borders, and harmonising these linkages internationally is the biggest challenge
(Khachaturian, 2013). But they are not insurmountable, and the outcomes appear to be well worth the effort (Cukier and Mayer-Schoenberger, 2013). There is a value to the international scientific community, with economic impacts of shared R&D resource/structure, to facilitate international collaborative research.
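The "deep data"/"broad data" linkage Stuss et al. describe can be pictured with the hypothetical Python sketch below. All identifiers, fields, and values are invented: detailed research assessments for a small cohort (deep data) are joined against population-wide claims of the kind a single payer holds (broad data), attaching a co-morbidity count from routine utilization to each research participant.

# Hypothetical illustration only: every identifier, field, and value below is
# invented, and no real data model or dataset is implied.

# "Deep" data: detailed research assessments for a small study cohort.
DEEP = {
    "P-1001": {"cognitive_score": 22, "apoe4_carrier": True},
    "P-2002": {"cognitive_score": 28, "apoe4_carrier": False},
}

# "Broad" data: routine claims held system-wide by the payer.
BROAD = [
    {"patient_id": "P-1001", "icd10": "E11.9"},   # diabetes
    {"patient_id": "P-1001", "icd10": "I10"},     # hypertension
    {"patient_id": "P-2002", "icd10": "I10"},
    {"patient_id": "P-9999", "icd10": "J45.909"}, # not in the research cohort
]

def link(deep, broad):
    """Attach a co-morbidity count from claims to each research participant."""
    linked = {}
    for patient_id, assessments in deep.items():
        codes = {c["icd10"] for c in broad if c["patient_id"] == patient_id}
        linked[patient_id] = {**assessments, "comorbidity_count": len(codes)}
    return linked

if __name__ == "__main__":
    for patient_id, row in link(DEEP, BROAD).items():
        print(patient_id, row)

In a multi-payer setting the "broad" table would be scattered across insurers; the join itself is simple, but assembling the system-wide claims input is what a single-payer arrangement makes possible.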
Multi-payer can’t accomplish this combination
David May 13, M.D., Ph.D., F.A.C.C., began as the chair of the Board of Governors of the
American College of Cardiology in March 2013. Dr. May currently works as a managing partner
at his private practice, Cardiovascular Specialists, PA (CVS) in Lewisville, Texas., 4-25-2013, "I
Am A Republican … Can We Talk About A Single Payer System?," PNHP,
http://www.pnhp.org/news/2013/april/i-am-a-republican-%E2%80%A6-can-we-talk-about-a-
single-payer-system
Firstly, Medicare and the Center for Medicare and Medicaid Services (CMS) are de facto setting all of the rules now.
They are a single payer system. When we go to lobby the Hill, we lobby Congress and CMS. Talking to Blue Cross, Aetna, Cigna and
United Health care is essentially a waste of time. All the third party payers do is play off the Medicare rules to their advantage and profit. They have
higher premiums, pay a somewhat higher benefit and have a significantly higher level of regulation which impedes the care of their customers. This is
no longer consumer choice but effectively extortion, a less than hidden shake down in which the “choice” for a family of four is company A at $900 per
month or company B at $1100 per month. The payers are simply taking advantage of the system, playing both ends against the middle. Secondly, in
order to move forward with true health care finance we need complete transparency in cost and expense … and we need it now. As was noted in a recent
Time magazine piece on the hidden cost of health care, our current system is a vulgar, less than honorable construct more akin to used car sales than
medical care, cloaked under the guise of generally accepted accounting principles and hospital cost shifting. Thirdly, with
a single payer
system would potentially come real utilization data, real quality metrics and real
accountability. The promise of ICD-10, with all of its difficulties, is that of much more granular claims-made data. We could use some granularity in health care data and
we will never achieve it in big data quantities without a single payer system.

3. Funding diversion---multipayer siphons resources to maintain billing infrastructures.
Goldsmith et al. 03. 07/01/2003. “Federal Health Information Policy: A Case Of Arrested
Development.” Health Affairs, vol. 22, no. 4, pp. 44–55.
Despite more than $20 billion in information technology (IT) expenditures in 2001 by U.S. providers
(less than one-third of which, $6.5 billion, was spent for hospital clinical systems), less than 10 percent have
adopted computerized patient records (CPRs), and less than 5 percent have adopted
computerized physician order entry (CPOE).1 Much of the large amount spent on IT goes
toward upgrading and maintaining financial systems (such as billing), which are
unnecessary in countries with a single payer . The United States lags well behind other
countries, notably Great Britain, Australia, and New Zealand, in the adoption of computerized
clinical systems, especially in the outpatient area.2 All of these countries have a greater ability to
standardize clinical data systems, because they have national health services or a single
payer, and they have used their financial and administrative muscle to facilitate
widespread use of some information technologies.3
The plan is sufficient to overcome barriers because it combines carrots and sticks.
Taiwan proves.
Li et al. 15. Graduate Institute of Biomedical Informatics, College of Medical Science and
Technology, Taipei Medical University. 08/2015. “Building a National Electronic Medical
Record Exchange System – Experiences in Taiwan.” Computer Methods and Programs in
Biomedicine, vol. 121, no. 1, pp. 14–20.
Digitization of hospitals is an important foundation for promoting EMR exchange. The implementation of inter-institution EMR exchange is only
possible when good EMR systems are available at all hospitals. EMR systems offer good incentives to hospitals by eliminating the need to print paper
records, thus saving the costs of purchasing paper and printing. They can also help to reduce other administrative costs. In Asia, there are many large
hospitals with more than 1000 beds which also serve more than 10,000 outpatients each day. Using paper medical records, the task of accurately
retrieving 10,000 medical records, distributing them to each doctor, collecting them afterwards, and putting them back in the storage room, takes a
huge amount of manpower. The implementation of EMR systems can save a lot of labor. According to the experience in Taiwan, hospitals have a strong
incentive to introduce an EMR system because it is a highly cost effective investment .
It is more difficult to encourage
hospitals to participate in an EMR exchange than to implement an EMR system ,
because sharing medical records with other hospitals or clinics does not produce a
financial incentive [27,28]. Since Taiwan uses a single-payer system , 90% of the revenue
of most hospitals comes from health insurance payments . Hospital accreditation
classifies hospitals into three classes, and the amount of health insurance payments varies
between the classes. Through the accreditation and health insurance payments, the
government is able to demand that hospitals take steps to improve the quality of
medical care and patient safety. In order to encourage hospitals to share medical records through the EMR exchange system, at the early stage, the government used the subsidy approach by providing direct subsidies to participating hospitals. However, in order to ensure the ongoing operation of inter-institution EMR exchange, the government must have regulatory powers over the hospitals backed by laws and institutions. The implementation of inter-institution EMR exchange can only be successfully accomplished by properly exercising the 'carrot and stick approach'. 6. Availability of software. The 'EMR Exchange Gateway (EEC GW)' software can be downloaded free from the EEC's website (http://eec.mohw.gov.tw) and installed on computers owned by medical institutions. Most regular personal desktop computers (PCs) are sufficient to meet the hardware requirements. After installing the EEC GW software, the hospital can go to the EEC's website to apply for the interfacing process. The system only operates on an NHI VPN. 7. Future plans. After the implementation of the inter-institution EMR exchange is complete, the Ministry of Health and Welfare will start to promote Personal Health Records (PHRs) and various other value-added applications such as developing a self-managed health management app based on the PHR [29]. A PHR will be integrated with an EMR and data from tele-health services in order to satisfy the 4P characteristics (preventive, predictive, participatory, and personalized) of the next generation of medical care service [30,31].

Scenario one is telomere erosion:

Single payer solves


E. Soura et al. 16. Dept. of Dermatology/University clinic, “Andreas Sygros” Hospital, Athens.
03/2016. “Hereditary Melanoma: Update on Syndromes and Management - Emerging
Melanoma Cancer Complexes and Genetic Counseling.” Journal of the American Academy of
Dermatology, vol. 74, no. 3, pp. 411–420.
Telomeres are DNA protein structures comprised of tandem repeats of the six-nucleotide unit sequence TTAGGG that extend for
thousands of bases at chromosome ends. As DNA replication mechanisms are unable to fully
copy end DNA, a progressive shortening of telomeres is observed with subsequent cycles ,
which eventually leads to cell senescence . However, various mechanisms exist in order to
counter this gradual telomere erosion .21 The shelterin complex physically protects telomeres and also regulates the
function of TERT. The shelterin complex contains 6 proteins: TRF1, TRF2, and TPP1, which can specifically recognize and bind to double-stranded
TTAGGG repeats; POT1, which binds to the single-stranded telomeric overhang; and TIN2 and RAP1 (Figure 3).22 POT1 mutations lead to insufficient
capping of telomeres by shelterin. In addition, overexpression of POT1 leads to inhibition of telomerase function. Interestingly, POT1 and TPP1 may
have the ability to serve as enhancers of telomerase processivity.23 TERT, a reverse transcriptase, and TERC, an RNA fragment which acts as a
template for telomere addition, are the main components of telomerase, a large multi-subunit ribonucleic protein (Figure 2b). Telomerase protects
telomeres by elongating them via a strict, cell cycle-regulated process. In addition, telomerase can preferentially elongate the shortest telomeres, which
leads to only a subset of telomeres being elongated in any given cell cycle. Telomerase levels and the balance between its components play an important
role in appropriate telomere length maintenance.24 Mutations
in the promoter region of the telomerase reverse transcriptase (TERT) gene
have been described both at the germline level and at the somatic level in sporadic cases of melanoma.25, 26 These mutations have been
found in the promoter region of the TERT isoform that encodes a catalytic reverse transcriptase subunit of telomerase responsible for telomere length
maintenance (Figure 2b). Indeed, an association between longer telomere length and CMM risk has been demonstrated;27 telomerase over-expression
may be responsible for the cellular immortality associated with cancer.25, 26 Horn et al. have described TERT mutations in a kindred with 14 CMM
patients, yet they also described somatic mutations of TERT, which bore a clear UV-signature, in sporadic CMMs.25 A paucity of data exists on the
phenotypes of patients carrying TERT mutations, although co-existence of numerous nevi has been reported.25 Several studies have demonstrated UV-
signature mutations at various positions in the TERT promoter.25, 26 These alterations may increase the transcription of TERT by creating novel Ets
transcription factor binding sites. The GA binding protein (GABP) transcription factor, an Ets family member, has now been shown to be recruited to
the sites of TERT promoter mutations.28 As reported in a recent study, melanomas with TERT promoter variants were more likely to be nodular and
superficial spreading in subtype and had increased thickness, ulceration, high mitotic rate, and frequent BRAFV600E mutations.29 It is uncertain
whether these characteristics also apply to familial cases, but they do imply a more aggressive course for these CMMs.29 In a recent study by Gibbs et
al. a significant association between multiple primary melanomas and mutations in TERT has been demonstrated (TERT/CLPTM1L rs401681, P = 0.004).30 Besides CMM, a number of other malignancies have been attributed to TERT mutations (Table 1). Interestingly, similar mutations were seen
in 16% of various established cancer cell lines, suggesting this might be a common activating mutation in multiple cancer types.26 In addition, Vinagre
et al. observed recurrent TERT promoter somatic mutations in 43% of central nervous system cancers, 59% of bladder cancers and 10% of thyroid
(follicular cell-derived) cancers, in a total of 741 tumors screened.31 Other studies have also connected the TERT locus or telomere biology with
melanoma risk.32 Nan et al. researched the role of 39 single-nucleotide polymorphisms (SNPs) associated with telomere length in 218 patients with
CMM and found a positive association with telomere length and CMM risk. In addition, two SNPs in the TRF2 gene, rs153045 and rs251796, showed
significant associations with both total number of moles and the number of raised moles on upper extremities.33 This finding is consistent with those
of previous studies.34 Iles et al. studied 7 SNPs, previously associated with telomere length,35 in 11,108 CMM patients and 13,933 control patients from
various areas in the world. A strong association between increased telomere score and increased risk of melanoma (P = 8.92 × 10−9) was consistent
across geographic regions, and 4 SNPs with a p value <0.05 were reported (rs10936599 (TERC), p=0.0003; rs2736100 (TERT), p=0.02; rs7675998
(NAF1), p=0.03, rs9420907 (OBFC1), p=0.001).35 The frequency of these mutations in melanoma kindreds has not yet been investigated. Beyond
telomerase, other components of the telomeric apparatus have also been shown to harbor
mutations in melanoma-prone families. POT1 is a critical member of the shelterin complex, which resides at the telomeres and protects the
ends of chromosomes. Mutations in the POT1 gene have been described in a small number of unrelated Italian, French, and U.S. melanoma families.36,
37 In addition to POT1 mutations, other mutations affecting the function of the shelterin complex have also been described. For instance, mutations in
the adrenocortical dysplasia (ACD) protein gene have been observed in a small number of melanoma families. This gene not only increases the affinity
of POT1 for telomeric single-stranded DNA, but, together with POT1, mediates the interaction between shelterin and TERT as well. Other mutations of
the shelterin complex observed in CMM families include mutations in the TERF2IP gene. Families harboring such mutations may present with early
onset melanomas (appearing in patients as young as 15 years), as well as multiple primary melanomas. Other types of cancer were also observed,
including breast, prostate, and lung, among others.38 (Table 1) 2.4. Other melanoma cancer syndromes KEY POINTS Cowden syndrome belongs
to the family of PTEN hamartoma tumor syndromes and is characterized by the appearance of trichilemmomas, papillomatous papules, mucosal
lesions (papules) and palmar-plantar keratosis within the 3 first decades of life Newer data suggest that Cowden syndrome patients have a higher risk
of presenting with melanoma compared to healthy controls Patients that harbor MITF mutations may exhibit a high atypical nevus count and have a
tendency to develop melanomas at young age Patients with pancreatic or renal cancer who harbor MITF mutations have a higher risk of developing
CMM PTEN hamartoma tumor syndrome (PHTS) is a rare condition that encompasses four major, clinically distinct entities associated with germline
mutations in the tumor suppressor gene PTEN. These include Cowden syndrome, Bannayan-Riley-Ruvalcaba syndrome, Proteus syndrome, and
Proteus-like syndrome. Phenotypically, all of them are characterized by formation of multiple hamartomas due to unregulated cellular proliferation,
but only Cowden syndrome seems to be associated with an increased risk for malignancy.39, 40 Cowden syndrome is rare, with an estimated
prevalence of approximately 1 case per 200,000 population.40 Cowden syndrome (CS) exhibits a very distinct phenotype which includes the
appearance of trichilemmomas, papillomatous papules, mucosal lesions (papules) and palmar-plantar keratosis. These features are pathognomonic,
since 99% of the patients develop them before the 3rd decade of life (diagnostic criteria can be found in refs. 39, 40). Although CS has been associated in the
past with various types of malignancies, recent data also show an association with melanoma. Tan et al. investigated 368 individuals carrying PTEN
mutations and demonstrated an elevated standardized incidence ratio (SIR) for CMM of 8.5 (95% CI, 4.1–15.6), with an estimated lifetime risk of 6%.41
Similarly, Bubien et al. reported a SIR for melanoma of 28.3 for women (95% CI, 7.6–35.4), and 39.4 for men (95% CI, 10.6–100.9) (P < .001) in 154
investigated patients.42 At this point the exact incidence of melanoma in Cowden syndrome patients has not been clearly defined. Thus, an annual
dermatologic examination should be considered for all CS patients. (Level of evidence IV) For those with documented germline PTEN mutations,
referral to specialized centers for coordinated cancer care (e.g. annual thyroid gland ultrasound, mammography, or endometrial biopsy) is
recommended.39 (Level of evidence IV) Given the fact that Cowden syndrome is very rare, most recommendations are supported by anecdotal data or
small studies. A single codon 318 Glu-to-Lys (E318K) mutation in the Microphthalmia-associated transcription factor (MITF) was recently described
and shown to increase CMM risk.43–45 MITF has been demonstrated to act both as a master transcription factor, involved in cell cycle regulation, and
a transcriptional repressor.46 The E318K mutation affects MITF sumoylation (Figure 2c), therefore altering MITF’s transcriptional properties.43
Recently, Bartolotto et al. reported that MITF(E318K) mutations are associated with a 5-fold increase in risk for developing CMM, renal cancer, or
both.43 Ghiorzo et al. similarly reported that MITF(E318K) mutation carriers have a 3-fold increase in CMM risk; the authors also determined there
was a positive association with the development of other types of cancer besides CMM (Table 1). Interestingly, carriers with personal and/or family
history of PC or renal cancer had a 31- and 8-fold increase in risk, respectively, for developing CMMs.44 Yokoyama et al. studied the phenotypic traits
of MITF(E318K)-mutation carriers and reported an association with a higher nevus count, CMM onset before 40 years of age, and non-blue eye color;
no association was found with freckling, skin color, or hair color.45 These findings were corroborated by Sturm et al., in addition to a reportedly higher
incidence of amelanotic melanoma and an association with fair skin. These patients exhibited not only a higher nevus count, but also more nevi larger
than 5 mm in diameter, compared to controls. The most common location of CMM development was found to be the back, followed by the limbs, which
as the authors suggested, may indicate a propensity for UVR-exposed sites. The predominant dermoscopic pattern for the nevi in MITF(E318K)-
mutation carriers was reticular.47 The exact incidence of the MITF(E318K) mutation is not defined in the overall population. However, this variant has
been found to be augmented in cases displaying multiple primary melanomas and in those with a family history of melanoma.48 Although most studies
largely corroborate one another, additional studies are needed in order to fully understand the way that these mutations alter the risk for cancer
development and the clinical utility of testing for this single variant. A summary of the basic known clinical characteristics for emerging melanoma
syndromes can be seen in table 2. Table 2: Summary of known data on the phenotypic appearance of patients with possible melanoma tumor syndromes5,7,15,30,33,38–40,45,47,48 2.5. Conclusion During recent years, a number of genes associated with melanoma have been described. However, the pathogenesis of melanoma is complicated and multifactorial, and it is likely that many contributory genes remain to be elucidated. With high-throughput next-generation sequencing and big data analysis, the science of melanoma syndromes is fluid and changing. Thus, dermatologists need to be constantly aware of new cancer complexes involving CMM in order to provide early diagnosis for
cutaneous malignancies and referral to specialized centers for preventive interventions to screen for possible internal malignancies. Recognizing
and treating such patients requires a multidisciplinary approach that includes, among others,
dermatologists, pathologists, and oncologists. More epidemiological, clinical, and genetic studies are needed in order
to fully understand the way that these syndromes are inherited, the role of environmental
factors in gene penetrance, the influence of gene-gene interactions, and the importance of these
genes in patient prognosis. Furthermore, such studies may also assist in identifying new, druggable
targets for the treatment of melanoma.
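A note on reading the risk statistics quoted above: a standardized incidence ratio (SIR) is simply the number of cases observed in the study cohort divided by the number expected from general-population rates, so the SIR of 8.5 reported by Tan et al. means roughly 8.5 times the expected melanoma incidence, and the confidence intervals quoted alongside each SIR reflect how precisely that ratio is estimated from the cohort size. A minimal illustrative sketch (the observed and expected counts below are hypothetical placeholders chosen only to produce an SIR near 8.5; they are not the actual cohort numbers):

```python
# Illustrative only: how a standardized incidence ratio (SIR) is read.
# The counts below are hypothetical, chosen to mimic an SIR near 8.5;
# they are not the actual Tan et al. cohort numbers.

def standardized_incidence_ratio(observed_cases: int, expected_cases: float) -> float:
    """SIR = observed cases / cases expected from reference-population rates."""
    return observed_cases / expected_cases

observed = 17       # hypothetical melanoma cases seen in a PTEN-mutation cohort
expected = 2.0      # hypothetical cases expected from general-population rates

sir = standardized_incidence_ratio(observed, expected)
print(f"SIR = {sir:.1f} (cohort incidence is {sir:.1f}x the expected rate)")
```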

Erosion’s the most probable extinction risk


Reinhard Stindl 14. Professor at the Institute of Medical Biology. 2014. “The Telomeric Sync
Model of Speciation: Species-Wide Telomere Erosion Triggers Cycles of Transposon-Mediated
Genomic Rearrangements, Which Underlie the Saltatory Appearance of Nonadaptive
Characters.” Die Naturwissenschaften, vol. 101, no. 3, pp. 163–186.
The last heresy — intrinsic causes of species extinction Of all the species that have existed
at one time or another on earth, only about 1 in 1,000 is still alive; hence 99.9 % of species
died out (Raup 1991, pp. 3–4). Clearly, according to Darwin’s theory, the causes of extinction must usually
lie outside of the organism, because prospering species have been adapting for many thousands of years and must be very fit. As a consequence, all kinds of threats to species survival have been proposed — mostly humans, climate change, asteroids and limited food resources. Yet, in the Pleistocene, the average American did not have access to automatic fire weapons, craters of asteroid impacts have not been found and the climate (if ever) changed in a rather smooth way, which leaves us with the idea of limited food supply. Consequently, some authors have suggested that limited prey resources forced saber-toothed cats and American lions to utilize more of the remaining carcasses
leading to a greater incidence of tooth breakage. The story goes that times were difficult and therefore both predominant carnivores became extinct 12,000 years ago. The
Rancho La Brea tar seep deposits in California, representing the past 50,000 years, provide an abundance of remarkably well-preserved specimens. Based on the fossil record,
rates of tooth breakage increased in both examined species until extinction. Especially in the American lions, shortly before extinction, a stunning 36 % were affected by this
dental handicap. However, Desantis et al. (2012) clearly showed that the diet did not change, and extensive bone crushing cannot be the cause of dental degradation in these
large carnivores. Schindewolf preferred an alternative explanation in his 1950 book and cited the Austrian paleontologist Othenio Abel, who described the very abundant
evidence of Ice Age cave bears, which, shortly before becoming extinct, exhibited extremely wide variability and all kinds of manifestations of degeneration, including severe
bone and tooth disease and injuries — even young animals were affected by these conditions (Schindewolf 1993 p. 322). Abel was convinced that degeneration of a species was a
consequence of optimum existence (Abel 1980). Yet, Schindewolf notes: “The symptoms of the cave bears are strikingly similar to the disease-altered bones that Hansen
described from Norman graves in Greenland" (Schindewolf 1993 p. 322). In contrast to the cave bears extinction, in Greenland living conditions seemed to be worsening during that time. Consequently, Schindewolf states: "We arrive at the same conclusion, that the actual causes of degeneration and extinction lie deeper and manifest themselves earlier than any environmental influences whatsoever" (Schindewolf 1993, p. 323). "Thus, the reasons for extinction or continued existence are essentially internal — they lie within the lineages themselves. As P. Jensen, C. Zimmer, Karl Beurlen and other authors believe, the reasons may perhaps be sought in an aging of the germ substance, a gradual loss of function in the sex glands resulting in reduced fertility" (Schindewolf 1993, p. 322). According to Otto H. Schindewolf, geological catastrophes would be only the last hit, putting an end to a process that had been underway for ages for internal reasons. "It is the same as when the wind finally topples an old and rotten tree" (Schindewolf 1993, p. 319). Cope's Rule, the observed tendency for organisms in a lineage to increase in body size over time, is still poorly understood in terms of
selection for fitness, especially in the many cases of gigantism. Alternatively, Schindewolf claimed that orthogenesis, the primary trend of evolution, is to blame. First, it yields a
normal, beneficial size increase and later inevitably exceeds it and leads to a serious disadvantage and even to extinction of a species (Schindewolf 1993, p. 309). Since a larger
body size requires more cell doublings, especially during lifelong regeneration of somatic tissues (Stindl 2004b), it is easy to imagine how increasing height in a lineage can have
a negative effect on telomere reserve. Conflicting literature data on the mean telomere length of somatic tissues and its consequences for aging and age-associated diseases have
been reported over the years. In my view, this has two main reasons: Almost all researchers investigate the telomere length of blood samples, despite the fact that mammalian
red blood cells lack a cell nucleus and only telomeres of the small fraction of white blood cells can be measured. These immune cells show complex patterns of migration and
replenishment, which are influenced by various factors (e.g., stress) and might, therefore, not provide a reliable picture of the telomere reserve of an individual. The other
shortcoming is the widespread ignorance of the mechanisms of somatic tissue regeneration by adult stem cells (Stindl 2008). Consequently, I suggest that telomere length and
the available number of adult tissue stem cells in a given species might be the determining factors of lifespan, regeneration capacity of tissues and aging. An anecdotal case of an
endangered pack of wolves with unusual signs of aging and degeneration on Isle Royale in Michigan was recently reported in Science (Mlot 2013). The population was
established six decades ago and remained stable until the 1980s when a viral disease reduced their numbers to a mere dozen. In 1997, a large male wolf from Ontario crossed the
ice bridge. This wolf became whiter as he aged, something not seen before in Isle Royale wolves. He sired 34 offspring and genetically took over the population. Over the last
years, physical abnormalities have increased to abnormal levels. In 2009, the majority of the wolves had some kind of spinal deformities. Another mystery is the occurrence of
several wolves with one opaque eye, not seen before. Finally, in summer 2012, no pups were born and the remaining population of four female and four male wolves now faces
extinction. Some researchers at Isle Royale blame inbreeding for the signs of degeneration (Räikkönen et al. 2009), although an opaque eye is usually a result of aging, not
inbreeding. Telomere length measurements might bring new aspects into the discussion. A high percentage of ancient Egyptians were considerably crippled, by changes in the
vertebral column and by lesions of the peripheral articulations (Ruffer 1919; Moodie 1923). Similarly, 38 % of ancient Egyptians and 25 % of ancient Peruvians with a mean age
at death of around 40 years showed signs of atherosclerosis (Thompson et al. 2013). Unfortunately, an accurate chronological survey of cases of degeneration and atherosclerosis
is not possible based on the findings of these studies. Yet, it was shown that the health of pre-Columbian populations significantly deteriorated long before Columbus arrived and
climatic distinctions were completely irrelevant. Surprisingly, hunter–gatherers, who lived several millennia ago, were the healthiest Native Americans in stark contrast to the
later people, who lived in times of agriculture, government and urbanization. About 90 % of the aboriginals may have died within the following two centuries after the arrival of
the Spanish; however, the authors found the long-term trend towards a poor health status of aboriginal populations to be the causal factor of the speed and ease of the conquest
(Steckel and Rose 2002). Clearly, a thorough reexamination of all signs of degeneration in troubled species or populations is needed, to put the new theoretical model of an
intrinsic extinction mechanism on solid scientific grounds. In accordance with the aging of the germ substance idea cited by Schindewolf, the telomeric sync model of speciation
is based on transgenerational telomere erosion, which can lead to decreased fertility (Baird et al. 2006) and an increase of age-associated (Sharpless and DePinho 2007) and all sorts of degenerative diseases (Chang et al. 2004) at the end of a species lifespan. Age-associated diseases, like cancer, cardiovascular disease, immunosenescence and dementia, and degenerative diseases of teeth, bones and joints, are proposed to culminate even in middle-aged individuals and fertility decreases. Critically short telomeres in somatic tissues and in germ cells of individuals have been shown to be capable of causing all these kinds of health issues. During such a transformation phase, the species either transforms into a new species, or stabilizes its telomeres, or becomes extinct. However, since the phenotypic change, triggered by short telomeres and mediated by
transposons, can be enormous, the close relationship of two species might be invisible in the fossil record. Consequently, the extinction of some species might be an artifact of
the fossil record caused by the gradualistic genetic model of evolutionary theory. It is my conviction that some extinctions, like the ones of the Neanderthals, will one day turn
out to be transformations. Hominin evolution: extinction and complete replacement of archaic humans worldwide … really? Some years ago, the evolution of hominins
was every neo-Darwinist’s darling. It was all about an African progressing series of archaic hominins resulting in the superbright Homo sapiens spreading out of Africa and
replacing all other dumb relatives worldwide. Nowadays, according to Kimbel the story reads differently: “The evolutionary events that led to the origin of the Homo lineage are
an enduring puzzle in palaeoanthropology.” (Kimbel 2013) What happened? Well, it turned out that instead of a gradual phenotypic change towards perfection, nature seems to
have played around with different combinations of “archaic” and “modern” body parts that make no sense under the light of genetic gradualism. A series of reports published
this year in Science focused on fossilized skeletons of Australopithecus (Kimbel 2013). One sample of Australopithecus afarensis has an upper thorax more similar to modern
humans, although it is 1.6 million years older than Australopithecus sediba, which has an ape-like pectoral girdle (Kimbel 2013). Furthermore, the fossilized skeletons of A.
sediba had a surprisingly ape-like calcaneus, in contrast to its Homo-like mandibles. To further complicate matters, although Australopithecus is usually characterized by six
lumbar and four sacral vertebrae, in A. sediba the modern human pattern is seen, which is five lumbar and five sacral vertebrae (Kimbel 2013). If the phenotypic confusion in the
hominin lineage still leaves some unconvinced, let us turn to comparative sequencing data. Based on the fact that mitochondrial DNA of all Neanderthal specimens falls outside
the variation of present-day humans, interbreeding between archaic and modern humans was a no-go for many years (Ward and Stringer 1997). However, the draft sequence of
the Neanderthal genome clearly confirmed archaic genes in our genome and the authors suggested a unidirectional gene flow from Neanderthals into the non-African ancestors
of present-day humans before the Eurasian split (Green et al. 2010). In the same year, the sequencing of the DNA extracted from a finger bone led to the birth of a new archaic
cousin in southern Siberia, the Denisovan (Reich et al. 2010). Again, it was found that the Denisovan man, similar to the Neanderthal, contributed 4–6 % of its genetic sequence
to modern humans, although to Melanesians only (Reich et al. 2010). A 100-year-old lock of hair from an Aboriginal man in southern Western Australia revealed similar
admixture rates with archaic humans (Rasmussen et al. 2011). And so, it was concluded that Homo sapiens interbred with now-extinct forms of humans all over the world
(Gibbons 2011). According to a 2011 Science study, more than half the HLA alleles of modern Eurasians must have introgressed due to multiple and widespread admixtures with
archaic humans. The authors suggested that the surprisingly high numbers were the consequence of some sort of selection (Abi-Rached et al. 2011). Yet, if one considers that the
reproductive barriers of different chromosome complements effectively prevent the vertical spread of foreign genes in a population, it remains an eternal mystery how the
proposed sexual activities of our immediate ancestors with all kinds of archaic hominins could result in significant numbers of fertile offspring. Of course, we do not know the
karyotype of Neanderthals or Denisovans, but differing chromosome complements are the hallmarks of closely related species (White 1978; Cho et al. 2013) and extinct
hominins were successful and independent species for many thousands of years. Besides the surprising sequencing data, the paleoanthropologists have always pointed to certain
bone and dental features of Neanderthals that apparently survived in all modern Europeans (Trinkaus 2007). Already in 1943, Franz Weidenreich, one of the early proponents of
the multiregional model, wrote: “Two years ago I published an article (…) dealing with the obvious incongruities of the morphological and chronological sequences of the various
evolutionary stages of Man as they appear on the basis of steadily increasing discoveries of recent years. At the very appearance of true hominids there must have already existed
several different branches, morphologically well distinguishable from one another, which all proceeded in the same general direction with mankind of today as their goal”
(Weidenreich 1943). The telomeric sync model of speciation predicts successive series of defined chromosome rearrangements and genomic repatternings in all individuals of a
species within similar time intervals, worldwide. Accordingly, the remains of archaic genes and phenotypic traits found in modern humans, typical for local archaic hominins,
might be a consequence of directly developing from these local ancestors through a defined genomic repatterning. The telomeric clock that triggers programmed rearrangements
and transposon-mediated repatternings in combination with worldwide gene flow within a species might be responsible for the proposed 99.9 % sequence identity in human
populations around the world, despite the separate development of local lineages for many thousands of years. Clearly, the unexpected finding of a unidirectional gene flow from
Neanderthals into modern humans only, but not in the other direction (Green et al. 2010; Wills 2011), inevitably supports the multiregional concept of local archaic humans
directly transforming into modern humans. In other words, there were no other local and healthy archaic humans left to interbreed with, once the new generation of modern
humans evolved from them. It is an indisputable fact that the observed one-way genetic exchange from archaic to modern humans shakes the foundations of the currently
favored admixture and interbreeding model, which is thought to result in some sort of bidirectional gene flow. For all these years, modern humans have been regarded as being
superior to their archaic counterparts, and now, even if the standard population genetic model predicts a bidirectional gene flow with a dominating modern-to-archaic direction,
the neo-Darwinists suddenly discover the exclusive superiority of archaic genes from a dying human lineage. The Danish embryologist Søren Løvtrup once commented: “And
today the modern synthesis (…) is not a theory, but a range of opinions which, each in its own way, tries to overcome the difficulties presented by the world of facts” (Lovtrup
1987, p. 144). The transformation or bifurcation phase as exemplified in Finnish blue foxes and humans In a Finnish farm, several hundred blue foxes, parents and
offspring, were analyzed over 4 years. About half of them had a Robertsonian translocation in a heterozygous form (2n = 49), whereas a quarter were homozygous carriers
(2n = 48) and a quarter had the original karyotype with two acrocentrics (2n = 50). As expected and predicted by genetics, litter size tended to be smaller in mating groups of
chromosomal heterozygotes in this study (Makinen and Lohi 1987), in contrast to a previous report (Moller et al. 1985) but in line with an older study (Christensen and Petersen
1982). Surprisingly and contrary to the predictions, animals with the Robertsonian translocation in a homozygous form (2n = 48) increased over the 4-year span (Makinen and
Lohi 1987). Accordingly, it was observed that matings of two heterozygotes seemed to favor the 2n = 48 offspring production (Makinen and Lohi 1987) and the spread of a new
chromosomal race. It was therefore shown that the blue fox displayed an evolutionary tendency towards a lower chromosome number and that the Robertsonian translocation in
its homozygous form had a positive effect on fertility. If, after 30 years, this farm still exists and breeders have not intervened based on karyotypes, a re-examination of the
descendants of these animals would be an interesting project. In humans too, Robertsonian translocation (ROB) is the most common recurring chromosomal rearrangement. De
novo formation of fusions between chromosome 13 and 14, rob(13q14q), accounts for the largest proportion of ROBs (Page and Shaffer 1997). Jacobs states: “The reason for the
high mutation rate of human Robertsonian translocations in general, and for the 13/14 translocation in particular, is obscure” (Jacobs 1981). Several scenarios have been put
forward to explain this phenomenon. Bandyopadhyay and colleagues proposed illegitimate recombination between paralogous satellite III DNA on acrocentric chromosomes.
They classified ROBs into two groups: Class I, mainly rob(13q14q) and rarely rob(14q21q), account for 85 % of ROBs, and class II includes all other sporadic ROBs. Breakpoints
of the common class I ROBs are almost always in the same region, whereas sporadic class II ROBs are characterized by varying breakpoints. Regarding these class II ROBs, the
authors write: “The variable breakpoint could result from breakage and exchange in repetitive DNA, such as satellite III DNA sequences, that are common to all acrocentric short
arms and the pericentromeric regions of these chromosomes” (Bandyopadhyay et al. 2002). Since a sporadic breakpoint within repetitive DNA would always vary, and this is not
seen in the majority of common ROBs (Bandyopadhyay et al. 2002), I conclude that illegitimate recombination between paralogous satellite III DNA cannot be the source of
common human ROBs. Another mechanism, which has been put forward, is based on the fact that acrocentric chromosomes come physically near to form the nucleolus, because
of rDNA genes. However, human rDNA genes are located on the short arms of all acrocentric chromosomes (13, 14, 15, 21, and 22; Henderson et al. 1972) and cannot explain
why just two combinations, which are rob(13q14q) and rob(14q21q), are constituting 85 % of all ROBs, why the breakpoints are almost always in the same region and why 95 %
of de novo cases originate during maternal meiosis (Page and Shaffer 1997; Bandyopadhyay et al. 2002). Rescue comes from the observation of nonrandom telomere patterns in
humans (Graakjaer et al. 2006) and the indirect evidence of telomere erosion in the female germline (see above). In a small study on 20 aged individuals, the telomere on the
proximal end of chromosome 13 has been found to be the shortest and on the p-arms of chromosome 14, 15 and 21 one of the shortest (Graakjaer et al. 2006). Except for the
short telomere on 15p, the telomere data fit the observed pattern of fusion products. Evidence for a prezygotic selection for ROBs in male humans has been described, similar to
the meiotic drive of ROBs in the common shrew we discussed earlier (Hamerton 1968). Again, I propose the negative effect of short telomeres on fitness to be the underlying
cause. Sperm cells containing a rearranged metacentric chromosome instead of two acrocentrics with eroded and unstable telomeres may simply be preferred. Based on the
suggested telomere erosion in the human species and the nonrandom telomere profile, we would expect to see the appearance of a new chromosomal race, with 44 chromosomes
and two rob(13q14q). Is there any evidence for such a transformation or bifurcation phase? During a cytogenetic study of an aged population, a heterozygous carrier of a
rob(13q14q) was found. He was 90 years old and in good general health. The authors mentioned that he looked younger than his chronological age and that all his close relatives
survived beyond the age of 80 (Anday et al. 1974). In 1984, Martinez-Castro and colleagues were the first to report on a Spanish family with heterozygous and homozygous
carriers for a rob(13q14q) without any impairments of phenotype. They also observed an excess of homozygous carriers among the progeny of heterozygotes, in accordance with
prezygotic selection for ROBs (Martinez-Castro et al. 1984). The stunning discovery was confirmed by a Finnish study based on three families with a female again being
homozygous for rob(13q14q) and a karyotype of 44 chromosomes. The authors described the good health and normal phenotype of these individuals and speculated that
rob(13q14q) might be the next step in the chromosomal evolution of man (Eklund et al. 1988). Clearly, a re-examination of these families is highly recommended. Furthermore, I
suggest to undertake a cytogenetic survey of the 100 or so isolated aboriginal human populations worldwide, to measure telomere length and to search for alternative
chromosomal races. Conclusions In this paper, I present an alternative to Darwin's gradualistic theory and provide a biological framework for the old European concept of
saltatory evolution, summarized best in Otto H. Schindewolf’s book, Basic Questions in Paleontology (Schindewolf 1993). The high quality of the fossil record in sediments of
ancient oceans guarantees that Schindewolf's extensive studies of corals and cephalopods are superior to the currently dominant genetic models of modern laboratory-based scientists. As a consequence, my telomeric sync model of speciation mainly builds on Schindewolf's typostrophic theory. In short, I propose that transgenerational telomere erosion leads to identical chromosome fusions and triggers a transposon-mediated genomic repatterning in many individuals at once. The phenotypic outcome of the telomere-triggered and transposon-mediated repatterning is the saltatory appearance of nonadaptive characters in new species, which is in perfect agreement with the fossil record (Table 1). The species clock based on transgenerational telomere erosion gives species a sense of time and is therefore the material basis of aging at the species level. According to the telomeric sync model of speciation, speciation events
can be triggered suddenly and simultaneously, eventually synchronizing the transformation of a whole interconnected biotope of many plant and animal species within a
relatively short time frame. In addition to the studies and experiments I have already put forward to test the proposed model, the currently observed immunodeficiency of
honeybees displays several signs of a telomere-driven species crisis (Stindl and Stindl 2010). A study of telomere length and chromosomal races in affected honeybee
populations is therefore highly recommended. Similarly, the white-nose syndrome of North American bats should be reinvestigated in the light of telomere-driven immunosenescence (Buchen 2010). However, I have to point out that measuring mean telomere length is not sufficient because a single critically short telomere determines the viability of a cell (Hemann et al. 2001), possibly the life expectancy of an individual and, according to the new evolutionary model, the duration of a species.
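Purely as an illustration of the "species clock" idea Stindl sketches, the toy calculation below models mean telomere length eroding slightly each generation until it crosses a critical threshold; every number in it is an arbitrary placeholder rather than data from the paper:

```python
# Toy illustration of the "species clock" idea: mean telomere length eroding
# a little each generation until it falls below a critical threshold.
# All parameter values are arbitrary placeholders, not data from Stindl (2014).
import random

def generations_until_crisis(start_kb: float = 15.0,
                             loss_per_generation_kb: float = 0.05,
                             critical_kb: float = 5.0,
                             noise_kb: float = 0.02,
                             seed: int = 0) -> int:
    """Count generations until mean telomere length drops below the threshold."""
    rng = random.Random(seed)
    length_kb = start_kb
    generations = 0
    while length_kb > critical_kb:
        length_kb -= loss_per_generation_kb + rng.gauss(0.0, noise_kb)
        generations += 1
    return generations

# With these placeholder numbers the "species lifespan" is roughly
# (15 - 5) / 0.05 = about 200 generations.
print(generations_until_crisis())
```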

Scenario two is tissue engineering:

Single payer uniquely aggregates medical data and improves research through
high sample variety
Caplan et al. 17. Skeletal Research Center, Department of Biology, Case Western Reserve University, Cleveland, Ohio, USA. 01/2017. "The 3Rs of Cell Therapy."
STEM CELLS Translational Medicine, vol. 6, no. 1, pp. 17–21.
INTRODUCTION Cell therapy involves the introduction of live cells directly from the patient or from an exogenous source into tissues or the bloodstream to effect a therapeutic outcome. The cells may be used alone (cell therapy) or in combination with a scaffold (tissue engineering). The technology has been in play since the mid-1950s, when hematopoietic bone marrow was first successfully transplanted to repopulate patients previously exposed to depopulating chemotherapy; the first recorded bone marrow implantation took place in Ulster, Ireland, in about 500 BC [1]. Further back in time, rudimentary cell therapies have been used for thousands of years if one considers aspects of animal husbandry [2]. In the 21st century, embryonic and adult cells, both fresh and culture-expanded, allogeneic and autologous, have been used in various medical circumstances [3]. The science had progressed sufficiently so that by the 1980s and 1990s many companies had started to produce tissue-engineered skin substitutes or mesenchymal stem cells (MSCs) for clinical conditions [4, 5].
Now, in the 2010s, companies are in the clinic with additional mature and progenitor cell types (such as neural cells, retinal cells, cardiac cells, and
pancreatic cells) from a variety of sources for a broad set of disease states. Not considered here are the businesses that deal with hematological diseases,
themanipulation of hematopoietic cells or their descendants, or gene therapy. Whether the cells in question are from an autologous or allogeneic
source, cell therapy as a clinical solution presents a business model challenge, especially in an environment that is dominated by large, highly successful pharmaceutical corporations that are used to selling "blockbusters": high-volume, low-cost goods and high-margin off-the-shelf products. Cell therapies are far from being in the "blockbuster" space, being low-volume and costly to manufacture, whether they are individually made autologous therapies (more akin to a service) or universal allogeneic products. Now that the science and translation of cell therapy have advanced, the business model questions have extended beyond the allogeneic versus autologous debates of the last few years to a broader set of issues that get to how such products are approved, how the health authorities see their economic value relative to that of other solutions, and how companies will deliver on that value. The historic
precedents for value propositioning have been set in the biotech era of the 1980s with the emergence of Genentech, Amgen, and Biogen, to name but a
few. These new corporations followed the business pattern established previously for small-molecule drugs that produced major health care gains. The
production of vaccines in the 1940s and 1950s had a profound effect in setting the business tone for the biotech companies of the 1980s. Indeed, the
failure to produce safe polio vaccines [5, 6] was one of the primary drivers for the formation of the U.S. Food and Drug Administration (FDA) and its
current central impact on economics and business strategies for new products for medical care delivery. THE 3RS In the past century, it was widely said
that the basis of a good education was the 3Rs of "reading, 'riting, and 'rithmetic." In today's world of high health care costs (~17% of U.S. gross
domestic product) and a plethora of new, exciting technologies, the basis for good health care solutions can also be thought of as the 3Rs: regulation,
reimbursement, and realization of value. Both public sector (especially the National Institutes of Health) and private funding have led to the invention
and development of several medically driven sectors. Products in the device sector, such as implantable devices in orthopedics and cardiology, have led
to an increase in longevity and have solved clinical problems of substantial scope and proportions. Corporations making such products have been
allowed to fast-track their products, often without the need for a clinical trial through the device-specific 510(k) route. Most new medical devices enter
the market via this route, which requires only demonstration of “substantial equivalence” to a previously marketed device. For example, in this context,
more than 1 million knees and hips will be replaced with metal devices in 2014, and that number is predicted to increase by 10%–20% per year with the
entrance of the baby boomers into the age range needing joint replacement. However, with the emergence of cell therapy potentially enabling joint
tissue regeneration, this device segment may shrink during the coming years. Given the potential of cell therapy solutions to have long-lasting, even
curative, effects and given the inherent complexity of manufacturing and delivering such solutions to patients, paying close attention to the 3Rs will be
even more important for companies trying to bring cell therapies into the health care marketplace than it is for the other three therapeutic pillars of
health care: small-molecule drugs, biologics, and medical devices [7]. EXEMPLAR: MSCS The cell therapy industry is facing many unique challenges.
MSCs will be used as an exemplar for the sector as a whole to illustrate the requirement for novel business models, as well as regulatory and
reimbursement challenges, to enable these potentially gamechanging therapies to deliver transformative or curative therapies as part of everyday
clinical practice. MSCs reside in every tissue of the body as perivascular cells (pericytes) and function naturally at sites of blood vessel breakage or
inflammation [8–10]. From the front of the newly released and activated MSCs, a curtain [11] of biofactors is secreted that inhibits the overaggressive
immune system from surveying the damaged tissue (the first line of defense against the establishment of autoimmune reactions). From the back of the
MSC, trophic factors [12] are secreted that inhibit ischemia-caused apoptosis, inhibit scar formation, stimulate angiogenesis, and stimulate the mitosis
of tissue-specific progenitors. The molecular mechanisms for these activities and functions are becoming known [13]. More than 600 clinical trials
using MSCs (as shown on http://www.clinicaltrials.gov with "mesenchymal stem cells" used as the search term) are in progress around the world for
clinical conditions such as multiple sclerosis, amyotrophic lateral sclerosis, stroke, acute and chronic heart failure, rheumatoid arthritis and
osteoarthritis, kidney or liver fibrosis, spinal cord cuts or contusions, and sepsis. Thirty to forty corporations using various formulations of MSCs or
MSC-like cells from multiple tissue sources for various clinical indications have emerged. One of the biggest companies, Mesoblast Ltd., has a market
capitalization of more than $1 billion and has recently purchased the cell-therapy products and intellectual property from Osiris Therapeutics, Inc. (the
first MSC company, founded in 1992). But in the face of challenging approval pathways and in the wake of unexpected adverse reimbursement changes,
such as those encountered by Organogenesis and Dendreon in 2015, a key question remains: What is the pathway to success for the MSC products and
the companies that are bringing them and similar cell therapy technologies forward? REGULATION Like
any small-molecule drug or
biologic, a cell therapy must satisfactorily demonstrate safety and positive therapeutic
effects in preclinical animal models, after which it transitions into human testing as a component or
product to be tested in clinical trials under the auspices of a for-profit company or, in academia,
in an investigator-initiated clinical trial. Indeed, the first-in-humans MSC therapy was conducted at Case Western
Reserve University and University Hospitals of Cleveland in an investigator-initiated study [14]. In either case, the standard pathway for the testing and
acceptance of any new therapy in humans has been established by the sequential stepwise process of phase I, II, and III clinical trials. This process has
its roots in the days of big pharma before the entry of biologics; the process then adapted to accommodate the biologics. These same procedures and
outcome measures that were established for small- and macro-molecule drugs are now used by national regulatory agencies to assess and approve cell
therapies. But unlike drugs, whose structure, potency, and purity can be routinely documented, cell therapies are not so easily characterized because
cells are complex multicomponent entities. This means that no
standard regulatory route is now in place that is
entirely appropriate, let alone favorable, for cell therapy. The current guidelines for certification of cells for
therapeutic use attempt to mimic aspects of the criteria long established for drugs and, consequently, bring with them several problems because they are not "fit for purpose." The first problem is one of scope. The standard phased clinical trials have been set up by large,
standard phased clinical trials have been set up by large,
multibillion-dollar pharmaceutical companies that have the resources to conduct such
trials, some of which can cost hundreds of millions of dollars all-in. Small companies specializing in cell therapy do
not have that wherewithal . As a consequence, many clinical studies to date have been
uncontrolled and underpowered , leading to anecdotal results, unclear benefits,
and, often, failure in subsequent phases with larger patient populations. Second, one can analyze and characterize a chemical or
biologic drug to prove its composition, purity, and consistency of manufacturing lot. Defining and certifying the purity and composition of a group of
living cells and ensuring that consistency over time is not so easy and, in many cases, is not 100% possible. Furthermore, in many cells, let alone
mixtures of cell populations, onemay not know exactly which components of the cell are critical and efficacious for a specific clinical indication. Third,
unlike a drug that is metabolized and excreted, cells may continue to live on in the body. Therefore, the regulatory authorities are right to be concerned
about understanding what the cells do and where they go in the body (i.e., issues of homing, engraftment, cell division, and tumorigenicity that are
nonissues for conventional drug products). For some cell types, such as MSCs, that may not live long in the body and for which there is sufficient
clinical history of safety in the clinic, this will be less of an issue than for others. For many cell preparations, however, clinical approval may be
dependent on other technologies, such as sophisticated in vivo tracking, which can be problematic especially for a small, resource-constrained
company. Not addressed here are the questions of how to “tune” therapeutic cells, such as MSCs, to be optimal for the disease being treated and
optimal for each patient. Currently, companies tend to use one batch of MSCs for all clinical situations and thus can be expected to have high
“nonresponder” rates because of the lack of disease-specific tuning. In short, the
current regulatory process can appear
long, expensive, and disproportionately regulated , especially given that several cell
therapies appear to be transformative and in some cases curative , but the FDA has been receptive to criteria
proposed by different companies and organizations with new proposals for judging the efficacy and therapeutic potential of cell therapies. One such
new process, recently instituted in Japan under their new Regenerative Medicine Act, enables a rapid (2- to 3-year) route to conditional time-limited
approval with reimbursement. This requires an initial study to demonstrate clear safety and, at a minimum, a suggestion of efficacy [15]. Full approval is
subject to ongoing monitoring and longer-term studies. Such innovative regulation is essential for the field to flourish. The first product has just
emerged successfully through this route: HeartSheet (autologous skeletal myoblast sheets) from Terumo (Tokyo, Japan, http://www.terumo.com),
with a reimbursement price of approximately $120,000. This and other types of new processes must be tailored to not only the new emerging
technologies but also the limited resources of small corporations or academia because that is where most new cell-based therapies are being developed
and first tested in humans. In the U.S., one provision of the Regenerative Medicine Promotion Act, introduced in March 2014, was to direct the
Department of Health and Human Services to establish a Regenerative Medicine Coordinating Council, with one of its goals being development of
“consensus standards regarding scientific issues critical to regulatory approval of regenerative medicine products.” In the meantime, in early November
2014, the FDA released new draft guidelines for human cells, tissues, and cellular and tissue-based products to clarify what constitutes “minimal
manipulation” for a cell therapy. Minimal manipulation of a cell population has been a key criterion for determining whether a given cell therapy is
deployed under the practice of medicine or has to undergo the lengthier and more complex route of a traditional biologics license application. Clarity of
definition and consistency around the world will be useful for the field because there is considerable confusion among all the stakeholders. However, in
February 2015 the FDA started to progress the debate by issuing draft guidelines [16]. Last, because regulatory bodies change relatively slowly in
response to the introduction of new therapies, it may be useful for legislative bodies to take the lead in effecting regulatory change. Certainly, the
legislation brought forth in Japan has all the world watching its progression into product approvals for cell and gene therapies. Groups such as the
Bipartisan Policy Center in Washington are exploring ways to have the U.S. Congress pass progressive legislation for cell-based therapy (http://www.
bipartisanpolicy.org; a conference titled “Advancing a New Policy Framework for Regenerative Cell Therapy” was held in April 2016). If successful, new
legislation will enhance the FDA’s regulatory capacity by settling both regulatory and societal goals. A proposal in this regard has been made previously
[17]. REIMBURSEMENT To state the obvious, for a company to produce a health care product on an ongoing basis, it must be paid for and the
company must be able to make a profit. Although in theory the health care system in the U.S. gives great leeway to producers to set price and determine
value of a given therapeutic, in practice it puts huge control capacity in the hands of insurance companies and government agencies (especially the
Centers for Medicare & Medicaid Services) to set the monetary standards for specific procedures and therapies. This is even more stringent in
countries, such as the United Kingdom, that have explicit cost-effectiveness controls in place through bodies such as the National Institute for Health
and Care Excellence, where comparator based cost-effectiveness may be hard to prove for early stage therapies. For
the foreseeable
future, cell therapies will continue to be high priced because the cost to produce the large
numbers of cells needed for a given therapy is substantial and the production runs are
relatively small, with high production costs . However, the cost-of-goods can be expected to come down for several
major reasons. First, future generations of bioprocessing tools, disposables and reagents, and acquired experience will reduce the cost of
manufacturing. Second, increasing cell potency and the development of improved targeting strategies will lower the number of cells needed for a
specific therapy, further reducing cost and variation. Third, economies of scale will begin to have a major impact in much the same way as has occurred
for other drug platform technologies in the past (e.g., penicillin). Overall, this means that in the current early stages of the cell therapy era, inefficiency
must be paid for to ensure efficacy and proof of principle for some of these treatments. Once a collection of cell therapy is approved and put into
practice, the marketplace will reward companies that can do “more for less” money. We can expect that new production and innovative cell-delivery
strategies will emerge exactly as new strategies did in the monoclonal antibody production business during the past 20 years. Companies that provide
therapeutic cells need to be paid for producing and making such therapies accessible. Large pharmaceutical companies may have the resources to wait
to obtain compensation should marketing approval come long after initial regulatory approval, but small companies do not have the same luxury.
Mechanisms must be found to provide payment or reimbursement early in the approval process, provided there are the right contingencies regarding
safety and efficacy. To date, some cell therapies have been approved by regulatory agencies, but reimbursement is still lacking. Japan's new legislation, as mentioned earlier in this article, which became effective at the end of November 2014, is an attempt to solve this conundrum. Many companies can be
expected to take advantage of that. To date, Athersys, Cytori Therapeutics Inc., and Mesoblast Ltd. and others have set up shop in Japan to do so.
Likewise, other governments and national regulatory bodies are observing the impact of the changes in Japan. Mesoblast’s graft-versus-host product,
Prochymal (marketed as Temcell by JCR Pharmaceuticals Co., Hyogo, Japan, http://www.jcrpharm.co.jp) was priced at the end of 2015 by the
Japanese regulatory agency at approximately $7,000 per bag of 72 million MSCs (about 16–24 bags are used for a complete therapeutic course).
Another issue that will come to the fore in the cell therapy field is that many cell therapy solutions have the promise of treating the underlying cause of
a disease. This is unlike many conventional drug products that manage the disease and/or its symptoms. If a therapy can effect a cure or a transformative change (e.g., a long-term halt in disease progression), how is the company compensated for that? Currently, we pay for drugs and devices on an interventional basis; the potentially "once-and-done" approach deployed by cell and gene therapies is therefore a new challenge for reimbursement compared with the pay-for-a-pill-a-day-for-life pharmaceutical practice. The recent discussion in the U.S. about the pricing of Sovaldi
(Gilead, Foster City, CA, http://www.gilead.com)—$1,000 a pill, $84,000 for a 3-month regimen—brought this issue out as the debate raged as to
whether the value of avoided liver transplants was the appropriate determinant. Likewise, in Europe the pricing of Glybera (uniQure, Amsterdam, The
Netherlands, http://www.uniqure.com/), at $1.4 million per treatment regimen, generated controversy. Now that human pancreatic progenitor cell
therapy is starting clinical trials, we can envision the time, for example, when β islet cell transplants for patients with diabetes remove the need for a
lifetime of blood tests and insulin injections, not to mention avoiding the complications of the disease and their attendant costs. The latter are often
twice the direct costs of the disease itself. In such a case, how does a company get reimbursed appropriately? Should it be based on the cost of the
therapeutic itself or on the entire stream of value it creates, or somewhere in between? Recognizing
that this value is created
and captured only over time has led to proposed reimbursement plans whereby a company
receives payment initially for the therapeutic intervention and on a periodic basis as the
therapy proves out for an individual over time [18]. This type of outcomes-based compensation is
attractive in that it aligns economic and health interests but will be difficult to
implement in practice because it involves assignment of cause and effect and requires
complex patient tracking and reporting over time, a special
challenge in environments, such as the U.S., without a single-
payer system.
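For context on the reimbursement figures in the card, a back-of-the-envelope calculation using only the numbers quoted above (roughly $7,000 per bag of Temcell and 16–24 bags per course); the resulting course-cost range is an illustration, not a figure stated in the article:

```python
# Back-of-the-envelope cost of a full Temcell course, using only the
# per-bag price (~$7,000) and the 16-24 bag range quoted in the card.
price_per_bag = 7_000          # USD, approximate Japanese reimbursement price per bag
bags_low, bags_high = 16, 24   # bags used for a complete therapeutic course

course_low = price_per_bag * bags_low
course_high = price_per_bag * bags_high
print(f"Estimated course cost: ${course_low:,} to ${course_high:,}")
# -> Estimated course cost: $112,000 to $168,000
```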
Key to global food security
Mark Post 14, MD, PhD in Pharmacology from Utrecht University, Professor of Vascular
Physiology and Chair of Physiology at Maastricht University, and Cor van der Weele, Prof and
bioethicist in the Dept of Applied Philosophy at Wageningen University, PhD in philosophy of
Biology, “Principles of Tissue Engineering for Food,” Ch 78 in Principles of Tissue Engineering
(Fourth Edition), 2014, Pages 1647–1662, Science Direct
Most techniques in tissue engineering were developed for medical applications. The potential benefits of tissue
engineering and regenerative medicine for the repair of non-regenerative organs in the human body have not really been questioned. It is generally accepted that these
technologies offer therapeutic opportunities where very limited alternatives are at hand to improve quality of life. Therefore, a tremendous amount of government funded
research and business R&D has been and continues to be devoted to tissue engineering. Still, 25 years after its introduction, regenerative medicine by tissue engineering is not
yet part of mainstream medical therapy [1]. This suggests that the technical challenges to generate tissues that are fully functional and can immediately replace damaged tissue are substantial.¶ As a spin off from this research activity, techniques in tissue engineering and regenerative medicine may be used to produce organs to produce food. This idea is not new and had in fact been proposed by Winston Churchill in his 1932 book 'Thoughts and adventures' [2] and by Alexis Carrel [3]. Although the biological principles of tissue engineering of food are very similar to the medical application, there are also differences in goals, scale of
production, cost-benefit ratio, ethical-psychological considerations and regulatory requirements.¶ In this chapter the distinctions between the challenges of tissue engineering
for food production are highlighted and discussed. The focus will be mainly on tissue engineering of meat as a particularly attractive and suitable example.¶ Why Tissue
Engineering of Food?¶ Growing food through domestication of grasses, followed by other crops and livestock has a 13,000 years head start. The success of economical food
production likely determined the growth and sophistication of our civilization [4]. Why would we try to replace the relatively low-tech, cheap and easy natural production of food
by a high-tech complicated engineering technology that is likely to be more expensive? There are two main reasons why current ways of food production need to be reconsidered.¶ First, with growth of the world population to 9.5 billion and an even faster growth in global economy, traditional ways of producing food, and in particular meat, may no longer suffice to feed the world [5]. Food security is already an issue for some populations, but absence of this security may spread across all civilizations due to generalized scarcity of food. Meat production through livestock for example already seems maximized by the occupation of 70% of current arable land surface, yet the demand for meat will double over the next four decades [6]. Without change, this will lead to scarcity and high prices. Likely, the high prices will be an incentive for intensification of meat production, which will increase the pressure of using crops for feed for livestock instead of feeding people. The arable land surface could be increased but this would occur at the expense of forests with predictable unfavorable climate consequences. Lifestyle changes that include the reduction of meat consumption per capita would also solve the problem, but historically this seems unlikely to happen. A technological alternative such as tissue engineering of meat might offer a solution. In fact, the production of meat is a good target for tissue engineering. Pigs and cows are the major sources of the meat we consume, and these animals are very inefficient in transforming vegetable proteins into edible animal proteins, with an average bioconversion rate of 15% [7]. If this
efficiency can be improved through tissue engineering, this will predictably lead to less
land, water and energy use for the production of meat [8], which introduces the second major reason why
alternatives and more efficient meat production should be considered.
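The 15% bioconversion rate quoted above implies that roughly 1/0.15 ≈ 6.7 kg of vegetable protein must be fed to livestock for each kilogram of edible animal protein. A short sketch of that arithmetic (the higher efficiencies used for comparison are hypothetical placeholders for cultured meat, not figures from the chapter):

```python
# Feed protein required per kg of edible animal protein at a given
# bioconversion efficiency (edible protein out / vegetable protein in).
def feed_protein_per_kg(efficiency: float) -> float:
    return 1.0 / efficiency

livestock_efficiency = 0.15            # average bioconversion rate cited in the card
hypothetical_cultured = [0.30, 0.50]   # illustrative placeholder efficiencies, not from the source

print(f"Livestock at 15%: {feed_protein_per_kg(livestock_efficiency):.1f} kg feed protein per kg of meat protein")
for eff in hypothetical_cultured:
    print(f"Hypothetical cultured meat at {eff:.0%}: {feed_protein_per_kg(eff):.1f} kg feed protein per kg of meat protein")
```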

Goes nuclear
FDI 12 – Future Directions International ’12 (“International Conflict Triggers and Potential
Conflict Points Resulting from Food and Water Insecurity Global Food and Water Crises
Research Programme”, May 25, http://www.futuredirections.org.au/files/Workshop_Report_-
_Intl_Conflict_Triggers_-_May_25.pdf)
There is a growing appreciation that the conflicts in the next century will most likely be fought over a lack of resources. Yet, in a sense, this is not new. Researchers point to the French and Russian revolutions as
conflicts induced by a lack of food. More recently, Germany’s World War Two efforts are said to have been
inspired, at least in part, by its perceived need to gain access to more food. Yet the general sense among those that attended
FDI’s recent workshops, was that the scale of the problem in the future could be significantly greater as a result of population pressures, changing weather, urbanisation,
migration, loss of arable land and other farm inputs, and increased affluence in the developing world.¶ In his book, Small Farmers Secure Food, Lindsay Falvey, a participant in
FDI’s March 2012 workshop on the issue of food and conflict, clearly expresses the problem and why countries across the globe are starting to take note. .¶ He writes (p.36),
“… if people are hungry, especially in cities, the state is not stable – riots, violence, breakdown of law and order and migration
result.” “Hunger feeds anarchy.” This view is also shared by Julian Cribb, who in his book, The Coming Famine, writes that if “large regions of the world run
short of food, land or water in the decades that lie ahead, then wholesale, bloody wars are liable to follow." He continues: "An increasingly credible scenario for World War 3 is not so much a confrontation of super powers and their allies, as a festering, self-perpetuating chain of resource conflicts." He also says: "The wars of the
21st Century are less likely to be global conflicts with sharply defined sides and huge armies,
than a scrappy mass of failed states, rebellions, civil strife, insurgencies, terrorism and genocides, sparked by
bloody competition over dwindling resources.” As another workshop participant put it, people do not go to war to
kill; they go to war over resources, either to protect or to gain the resources for themselves. Another observed that hunger results in passivity not conflict. Conflict is over resources, not because people are going hungry. A study by the International Peace Research Institute indicates that where food security is an issue, it is more likely to result in some form of conflict. Darfur, Rwanda, Eritrea
and the Balkans experienced such wars. Governments, especially in developed countries, are increasingly aware of this phenomenon.¶ The
UK Ministry of Defence, the CIA, the US Center for Strategic and International Studies and the Oslo Peace Research Institute, all identify famine as a potential trigger for conflicts and possibly even nuclear war.
2AC
Econ
AT: Taxes Impact
Single payer boosts national disposable income through efficiency gains
Michele Swenson 7-22, A former nurse, Michele Swenson has researched and written about
the history of women’s health care, as well as religious fundamentalist and gun-centered
ideologies. Her book Democracy Under Assault: TheoPolitics, Incivility and Violence on the
Right is an in-depth examination of the fractured church-state divide, assaults on the
independent judiciary, as well as resurgent 19th century science, socioeconomic Darwinism,
corporatism, and Christian nativism. She is a member of the working committee of Health Care for All Colorado Foundation that created the proposal, 7-22-2017, "Health
Care Reform: Commercial Multi-Payer vs. Public Single-Payer Health Insurance," Common
Dreams, https://www.commondreams.org/views/2017/04/30/health-care-reform-commercial-
multi-payer-vs-public-single-payer-health-insurance
Washington health care “reform” has become one more vehicle for the continued 40-year wealth transfer upward. Recent Republican proposals offer
$600 B in tax cuts for the wealthiest, paid for by $840 B in Medicaid cuts for the low-income. To Republicans, "freedom" means freedom from health
care access. Washington
health reform proposals, including the Affordable Care Act, are built around
the most costly, inefficient model – that is, multiple commercial insurances that
drive wasteful complexity and high administrative costs . Commercial multi-payer
health insurances rely on public subsidies to preserve private insurance profits. The
private health insurance and pharmaceutical industries together siphon off tens of billions
of public dollars annually , to boost their profits. Commercial health insurers further protect their
bottom line by increasing premiums, copays and deductibles , while limiting benefits
and shrinking provider networks - thus shifting costs and risks to the insured . Health
insurance middlemen practicing “Denial Management” deny and delay claims in order to cut costs and increase their profits, while greatly adding to billing costs for providers, who often are required to submit a single claim multiple times. The uncertainty leaves too many Americans
one illness or accident away from financial disaster. Public Single-Payer Insurance: A Boost for the Entire Economy
Dozens of studies over the past 30 years have demonstrated that a single national
insurance – modeled on traditional Medicare - provides the most sustainable , comprehensive,
universal health coverage. By covering everyone in one large risk pool, single-payer
insurance can best leverage economies of scale to cut costs by negotiation of global
budgets and bulk medicine rates. Furthermore, single-payer insurance provides first-dollar
coverage , eliminating copays and deductibles while reducing administrative costs ,
saving up to $500 billion annually – enough to cover the uninsured and fully cover the under-
insured. Analysts estimate that another $150 billion would be saved by negotiation of bulk
medicine rates by Medicare, as the VA now does. At 18 percent of GDP in 2015 and growing, U.S. health costs
average almost twice as much as other countries that all report better health outcomes. U.S. health spending is crowding out most other segments of the
economy – including education, housing, infrastructure and pensions - and reducing consumer purchasing power and wages. Properly done, single-
insurance health reform could boost all segments of the U.S. economy, saving as much
as $1 trillion annually in overall health spending, based on the experience of other countries
whose health expenditures are almost half as much as that of the U.S. Businesses, state and local governments, and
families would all realize savings. U.S. businesses would be more globally competitive ,
eliminating the high health costs that now inflate the price of U.S. goods , including
thousands of dollars added to every U.S.-made car. A traditional Medicare model insurance
would relieve businesses of the time and cost of managing employee health plans . Jobs
will be retained in the U.S. when high health costs no longer induce insurance companies and
self-insured firms to use medical tourism to send patients abroad for medical procedures. We have
seen the benefits of increased Medicaid coverage in states like Colorado, where reduction of uncompensated care has stabilized rural economies,
contributing to job growth and permitting hospitals and clinics to remain viable. The more sustainable, cost-efficient traditional Medicare model,
improved and extended to all, would benefit everyone and be a boon to the entire economy. Everybody does better when everyone is covered. Contrary
to political right narrative, Medicare is not "socialized medicine" – insurance by nature is "socialized." Only a Medicare model assures full choice of
private or public providers; whereas, commercial insurers shrink their networks in order to cut costs, thus limiting provider access. Some assert that
Medicare is “free,” nevertheless all working people invest in Medicare through payroll deductions. Like the Fire Department, all contribute according to
their means in order to ensure health care is available when each of us needs it. The
government already underwrites 60
percent of all health costs , much of it to subsidize the insurance and pharmaceutical
industries , while also partially or completely funding congressional, VA and public employee
health coverage. An innovative proposal for consolidation of the health insurance
industry would permit the federal government to “buy out” commercial health
insurances , with a projected payback period of two years, a much shorter time than banks took to pay back their TARP loans during the great
recession. Read U.S. Healthcare Financing Reform: The Consolidation of the Health Insurance Industry.
AT: Small Biz
Aff boosts business confidence by taking healthcare off the agenda
Toby Scammell 7-7, Mr. Scammell is founder of Womply, a Software-as-a-Service company
serving small and medium-sized businesses., 7-7-2017, "Beyond Politics, Trump's Handling Of
Tax, Health Care Reform Looms Large For Small Businesses," Forbes,
https://www.forbes.com/sites/realspin/2017/07/07/beyond-politics-trumps-handling-of-tax-
health-care-reform-looms-large-for-small-businesses/2/#498ec7ff7479
With controversy being the only constant in the early presidency of Donald Trump , it’s easy
to get distracted by politics and lose sight of what matters most for the economy
right now . In particular, Americans preoccupied with 24-7 political analysis should pay more
attention to how the country’s 28 million small business owners are responding to
Trump’s presidency because he has an outsized impact on their confidence , and their
confidence has an outsized impact on the economy and jobs. Small business owners strongly
preferred Trump over his Democratic rival Hillary Clinton as their choice for president, but they weren’t overly
impressed with either candidate. Still, the local business community initially responded to Trump’s election with a spike in confidence
before whiffs on the Obamacare repeal and tax reform started eroding local merchants’ optimism .

This highly elastic response, and its potential to ripple through the broader economy , is
precisely why a magnified focus on Trump’s political turmoil without sufficient discussion
of real economic issues is concerning. My company recently polled thousands of small business owners in all 50 states to see what's driving their optimism or anxiety, and how their sentiment translates into actions like hiring or expansion. We also looked for correlations
between the policy environment and its potential impact on America’s economic engine on Main Street. According to our data, Trump’s election is the
No. 3 reason for confidence among optimistic owners and the No. 1 reason for concern among pessimists. This matters because optimists are 3.5 times
more likely to hire and give raises to employees, while pessimists are 6.5 times more likely to reduce staff and employee pay. In short, Trump’s
ability to build and maintain small business confidence could have enormous economic
consequences , with room for wide swings in either direction. Beyond the political gallows, Trump
faces the ripple effects of how small business owners react to key policy issues that
affect local commerce . Specifically, American merchants will evaluate their new president — and adjust their confidence levels —
based on how he prioritizes and approaches tax reform, Obamacare repeal and immigration. Taxes Local merchants are fundamentally pragmatic, and
making enough money is one of their most pressing worries, according to our study. The average small business pays $1 out of every $5 it earns in
effective tax rate, with higher rates for partnerships and S corporations. It should come as no surprise, then, that tax reform is the policy change small
businesses want most of all. In fact, for all the fuss about health care reform, local merchants are twice as likely to say they want tax reform compared to
a new national health law, according to our data. Trump’s tax plan would reduce the corporate tax rate to 15 percent, which would obviously make life
and business easier for businesses everywhere. Small businesses are watching anxiously to see where tax reform goes and will no doubt make some
early judgments about their new president based on his ability to cut a deal that makes sense to them. No room for additional hiccups here —
leadership priority No. 1. Health Care While tax reform is pretty straightforward, health
care is a complex issue , even for
Main Street’s practical bunch . We asked what impact repeal of the Affordable Care Act
(Obamacare) would have on small businesses. Here's what we heard: 16% very positive; 9% somewhat positive; 29% no impact; 5% somewhat negative; 10% very negative; 23% depends on what replaced it; 8% don't know. The majority of business owners either don't expect much impact from health care reform or aren't going to be impressed by change for change's sake, which should give the president pause as he and Congress rush to
repeal and replace. Trump can score significant leadership points — and ratchet up Main Street
confidence — by taking the necessary time to articulate a clear, compelling health care vision
that resonates with the 52% of small business owners who are ambivalent about national health care or eager to evaluate the details of the replacement
plan. Immigration Trump has spoken early and often about his vision for a more provincial U.S., promising a wall along the U.S.-Mexico border,
deportation of undocumented immigrants, and changes to the H-1B visa program that admits some 85,000 working immigrants each year.
Immigration policy plays well in political circles, but it's a low priority for small business owners. Our data revealed that only 2% of small business owners
want immigration policy changed as their top priority, ranking below taxes, regulations like inspections and licenses, health insurance, and minimum
wage. In fact, small business owners are six times more likely to say they want no policies changed than to say immigration. Good leaders know how to
pick battles and prioritize the issues that drive real results. For Trump, immigration might be better suited for the back burner, especially given what’s
at stake for small business confidence with tax and health care reform. Trump’s
influence on small business sentiment is a
unique opportunity and challenge for our new commander-in-chief. If he provides focus and results
that drive sustained optimism on Main Street, the effects will ripple through the economic and
political arenas . If he can’t, he could set off an economic freeze along the front lines of
U.S. commerce . Either way, local businesses will have a considerable say in determining the leadership legacy of our 45th president.
Disease
AT: Rationing Turn
Causes delays in seeking care, which is a de facto wait period. And the link is wrong: studies prove the differential is fake.
Colleen Flood and Bryan Thomas 17, Colleen M. Flood, Faculty of Law, University of
Toronto; Bryan Thomas is a Law Fellow and Adjunct Professor at the O'Neill Institute, 01/2017,
“A View from a Friend and Neighbor: A Canadian Perspective on U.S. Healthcare and the
Affordable Care Act,” published in the Oxford Handbook of U.S. Health Law, DOI:
10.1093/oxfordhb/9780199366521.013.5
iii. Wait Times Much has been made of the problem of wait times in the Canadian healthcare
system —corroborated by 2014 Commonwealth Fund survey data, finding that wait times for specialists, elective surgery, and
emergency room treatment are worse in Canada than in any of the other eleven developed nations under study. The surveys found
the United States to be comparatively average.50 Similarly, a 2010 study ranked Canada last among eleven countries in terms of wait
times, finding that 33% of Canadian patients reported waiting six or more days for an appointment with a doctor or nurse, 41%
reported waiting two months or more to see a specialist, and 25% reported waiting four months or more for elective surgery.51 It is
not clear, however, that long wait times are endemic to single-payer financing. Certainly,
jurisdictions with single-payer finance such as England have largely eliminated wait
times . Within Canada, there has been some success in tackling wait times in priority areas
of care : For example, a recent study found that all provinces were able to provide radiation
therapy to at least nine out of ten patients within a benchmark timeframe of twenty-eight days.52 The province of
Quebec, facing pressure from the courts, established maximum wait times for certain services, whereupon
the province will pay for patients to obtain care in private clinics or abroad if necessary.53
Wait times in the United States are comparable to those in Canada for patients looking to get
same- or next-day appointments, as well as those requiring specialized tests (e.g., CT, MRI).54 The
cost of medical treatment also hinders timeliness of care in the United States, where more
people chose not to seek recommended medical care or not to fill prescriptions than
in any other country .55 In short, while Canada could do a better job managing wait lists—
most provinces having no system for prioritizing patients outside of emergency and urgent-care
settings—the American approach reduces wait times in part through gaps in coverage .56
AT: Primary Care
We increase primary care
Dr. Jerald Winakur 16, Dr. Winakur practiced internal and geriatric medicine for 36 years and
is an associate faculty member at the Center for Medical Humanities and Ethics at the
University of Texas Health Science Center at San Antonio., June 2016, “A Single-Payer System
Can Save Primary Care,” Caring For The Ages, Volume 17, Number 7,
http://www.caringfortheages.com/pb/assets/raw/Health%20Advance/journals/carage/JULY_
2016.pdf
The problem with Medicare, of course, is the fee schedule itself. It vastly undervalues the work that doctors like me do. At
the same time, it overvalues the work of my sub-specialty colleagues . The fee schedule favors technology over
touch, performing procedures over spending time with patients. Of course, Medicare didn’t pick numbers out of the air; the fee schedule has
been unduly influenced by the richer and thus more powerful doctor groups that sit on the American Medical
Association’s resource update committee. This secretive and specialist-stacked assemblage advises Medicare on setting procedural fees. Ninety percent
of such advice finds its way into the schedule, which is why an otolaryngologist makes more money to clean wax out of an ear than a geriatrician gets for
evaluating an 85-year-old woman who comes into the office after having had a “little spell.” I believe that a vibrant system of primary
care is essential for patient wellbeing . Every one of us needs and deserves enough space for an unrushed visit and a
thorough physical examination by someone who knows us and our unique circumstance and is available across time and sites of care to minister and to
advocate. Patients
who are under the watchful eye of a primary care physician receive less
expensive medical care without sacrificing the quality of that care. Medicare may track every test I order,
every consultant to whom I make a referral, every hospital admission I initiate. But Medicare has never known how many patients I saved from
inappropriate testing and consultations, ED referrals, hospital stays, and LTC placements. And aside from the dollars saved, my patients have been
spared many needless procedures and the associated morbidity these entail. Single-Payer Solution There
is one simple way to
accomplish this. A single-payer plan can tinker with the fee schedule and finally make it financially remunerative for young doctors to once again pursue primary care. Increase the routine doctor visit reimbursement codes significantly — enough to allow primary care physicians to actually spend
face-to-face time with patients. The 7-minute visit is unsatisfactory to patients and demoralizing to
doctors. Undoubtedly, this is contributing to the greater than 50% burnout rate for
primary physicians. Changing to a single-payer system will not require an infusion of more
dollars into the system. It will require a rebalancing of what a sane single-payer system pays for the thousands of over-
compensated procedure codes that currently exist. Yes, my specialty colleagues will be unhappy with such a proposal. But
unless they want to take over the 24/7 responsibility of caring for the soon-to-be 75 million seniors with their complex medical histories, polypharmacy,
and fraught social needs, my
specialist friends should yield to the reality that our current system of
crumbling primary care cannot accomplish this task without major change.
OFF
T All People---2AC
NHI covers all or ALMOST ALL
The Law Dictionary 17, "What is National Health Insurance?," powered by Black's Law Dictionary Free 2nd Ed. and The Law Dictionary, http://thelawdictionary.org/national-health-insurance/
What is NATIONAL HEALTH INSURANCE? A federal government's insurance
benefits system established to cover all or almost all national citizens. The United
States is developing such a program. Tax money funds these systems entirely or
partially.
Most real-world definition
Gail Henderson 97, Department of Medicine, UNC School of Medicine, The Social Medicine Reader, p. 418
Glossary: National Health Insurance—A national program of social insurance that
finances medical care for all or part of the population. Usually the term "national health
insurance” refers to programs that cover most of a country's population (for example,
64% in the Netherlands, 85% in Belgium. 89% in Germany, 97% in Canada).

It is not any longer, as it was in Lloyd George’s day, used to describe compulsory social
insurance programs that cover only industrial workers or some other minority of the
citizens of the country (for example, Americans aged 65 and older who are covered by
Medicare). Under national health insurance, the government, either directly or through private
sector agents, makes payments to physicians, hospitals, pharmacies, laboratories, etc., which themselves operate privately. However, in order to participate (be paid by the program) these
private entities must meet the standards that have been set by the national insurance plan.
Smokers PIC---2AC
It’s 15% of the population---a massive gap
CDC 15 (Centers for Disease Control. “Current Cigarette Smoking Among Adults in the United
States”
https://www.cdc.gov/tobacco/data_statistics/fact_sheets/adult_data/cig_smoking/index.htm)
Cigarette smoking is the leading cause of preventable disease and death in the United
States, accounting for more than 480,000 deaths every year , or 1 of every 5 deaths.1 In 2015,
about 15 of every 100 U.S. adults aged 18 years or older (15.1%) currently* smoked cigarettes. This means an estimated 36.5
million adults in the United States currently smoke cigarettes.2 More than 16 million Americans live with a
smoking-related disease.2

AND doesn’t change smoker behaviors


Justin Giovannelli 15, 1-13-2015, "Insurance Premium Surcharges for Smokers May
Jeopardize Access to Coverage," Commonwealth Fund,
http://www.commonwealthfund.org/publications/blog/2015/jan/insurance-premium-
surcharges-for-tobacco-use
Insurers’ flexibility to charge higher rates for tobacco use raises the risk that
smokers will be unable to afford coverage and, therefore, will go without it. This
danger is acute for lower-income Americans because of the way the health law’s premium tax
credits are calculated. In short, they don’t ease the impact of the surcharge. For a nonsmoker who earns
around $17,000 a year and receives federal premium assistance, for example, annual premiums equal 4 percent of
income (about $700); for a similarly situated smoker, the tax credit stays the same, but the price
tag for coverage nearly quadruples. Given this calculus, those who might be especially well-served by coverage—and
the access to cessation services it provides—may be unable to afford it. Anecdotal reports
presented at a recent national meeting of state insurance regulators indicate that, in some areas, the tobacco surcharge
poses as big an obstacle to coverage access as the states that have not yet expanded
eligibility for Medicaid. Tobacco rating is also problematic because insurers may treat
tobacco use as a proxy for poor health, raising premiums to account for an enrollee’s (presumed)
medical needs in a way that skirts the ACA’s prohibition on health status rating. Finally, though there
are public policies and medical and behavioral interventions with track records of promoting cessation, there is little
evidence to suggest that hiking a consumer’s insurance premium is an effective way to end
tobacco addiction.
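To make the quadrupling arithmetic in the Giovannelli evidence concrete, here is a rough worked example. The card supplies the roughly $17,000 income, the 4-percent-of-income net premium (about $700), and the fact that the tax credit does not change for smokers; the Andrews evidence below supplies the up-to-50-percent surcharge. The $4,200 gross premium P is an assumed figure used only for illustration:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Illustrative sketch only. Assumed figure: gross (pre-credit) premium P = $4,200.
% From the evidence: income of about $17,000; nonsmoker's net premium capped at
% roughly 4% of income (about $700); tobacco surcharge of up to 50% of the gross
% premium; tax credit C is unchanged for smokers.
\begin{align*}
\text{Nonsmoker pays } P - C &\approx 0.04 \times \$17{,}000 \approx \$700,
  \quad\text{so } C \approx \$4{,}200 - \$700 = \$3{,}500.\\
\text{Smoker pays } 1.5P - C &= (P - C) + 0.5P \approx \$700 + \$2{,}100 = \$2{,}800
  \approx 4 \times \$700.
\end{align*}
\end{document}

On these assumptions the smoker's net cost rises from about $700 to about $2,800, which is the "nearly quadruples" figure the card describes.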

BUT insurance does


Michelle Andrews 12, 12-17-2012, "Study Finds Coverage To Help Kick Smoking Can Be
Tricky," Kaiser Health News, http://khn.org/news/121812-michelle-andrews-smoking-
cessation-treatments/
The Affordable Care Act requires plans that are new or those whose coverage has changed enough to lose their grandfathered status
to provide preventive benefits recommended by the U.S. Preventive Services Task Force without any cost-sharing by members. The
task force, a group of experts that evaluates medical evidence to guide consumers’ and doctors’ decisions about screening and
preventive services, strongly recommends tobacco-cessation treatment for adult smokers. It describes several effective treatments,
such as counseling (including brief behavioral counseling sessions and telephone quit lines) and medication (including nicotine-
replacement gum, lozenges and the patch, as well as prescription non-nicotine drugs such as Zyban and Chantix). It says research
shows that a combination of counseling and medication together is more effective at helping
people quit than either type of treatment alone. But when researchers at Georgetown University’s Health Policy
Institute examined 39 health plans in six states, they found that coverage for smoking cessation was often confusing. Many contracts
didn’t clearly state that the coverage was available, didn’t cover recommended treatments and/or didn’t provide it without cost-
sharing. “The study points out the need for the Department of Health and Human Services to provide much more specific
guidelines,” Myers says. Insurers offer a different perspective. “The final rules [for preventive services] recognized that there wasn’t
necessarily a one-size-fits-all approach,” says Susan Pisano, a spokeswoman for America’s Health Insurance Plans, a trade group.
“So we would expect to see variation around the methods that plans are using.” In addition, she said AHIP’s own survey of plans
found that nearly all offer some type of intervention for tobacco users. Tobacco use kills an estimated 443,000 people in
the United States every year, accounting for about one in five deaths annually. It remains the
No. 1 cause of preventable death. If insurance coverage of smoking-cessation treatment isn’t enough to encourage
people to quit, the health law provides an added incentive: Smokers’ premiums can be up to 50 percent higher than non-smokers’ in
some plans. The Obama administration recently proposed softening that rule by allowing smokers to avoid the higher rate if they
participate in a stop-smoking program. In the 34 years that she smoked cigarettes, Caroline Randa tried to quit several times, but
she never lasted more than a few weeks before starting up again. Randa, now 60, finally managed to kick her 2 1/2-pack-a-day habit
seven years ago using Chantix. Randa and her husband live in Foley, Ala. Both of her parents died of smoking-related illnesses. (Her
sister, Julia Cartwright, works for Legacy, an anti-smoking advocacy group.) Randa spent roughly $120 per month for the two
months she used the drug, which was not covered by insurance. “If something like Chantix had been on the
market and my insurance had covered it, I would definitely have quit more than seven years
ago,” she says. The cost of quitting smoking varies widely depending on cessation products used. Although some people quit
without the aid of stop-smoking products or support, many people need assistance. At the low-cost end,
someone who gets only telephone counseling and buys over-the-counter nicotine replacement
products, such as Nicorette gum and NicoDerm patches, might spend up to $400 on a quit attempt, says Cheryl
Healton, president and chief executive of Legacy. At the high end, someone who gets more-intensive
counseling, uses a prescription stop-smoking drug such Chantix plus an over-the-counter
nicotine replacement method could spend up to $1,000 per try, she estimates. A study by researchers at
Pennsylvania State University reached a similar conclusion. It takes the typical person up to 11 tries to succeed,
according to Healton. Most studies have found that health insurance coverage increases both the
use of stop-smoking treatments and the rates of smoking cessation, according to the
2008 clinical practice guideline on treating tobacco use and dependence published by the
Department of Health and Human Services.
Discourse PIK---2AC
The concept of medicalization poorly explains society because it flattens diffuse
processes
Jonathan Sholl 17. Department of Philosophy and History of Ideas, Aarhus University, Jens
Chr. Skous. 08/2017. “The Muddle of Medicalization: Pathologizing or Medicalizing?”
Theoretical Medicine and Bioethics, vol. 38, no. 4, pp. 265–278.
Introduction: what’s in a name? Medicalization has become one of those issues which appear both ubiquitous and unquestionably problematic. Nearly any reference to it seems to signal at once a social and existential threat, such as when Allen Frances, a former chair of the DSM task force, writes that the latest edition of the DSM-5 and the ever-growing power
of Big Pharma are contributing to the ‘‘medicalization of ordinary life’’ [2]. This perception of medicalization as a growing threat, however, is nothing
new. It dates back at least to the 1960s and 70s with the ideas of various sociologists, physicians, and theorists, such as Ivan Illich, Irving Zola, Thomas
Szasz, Michel Foucault, and later Peter Conrad and Joseph Schneider. Each has contributed in their own way to the development of the so-called
medicalization thesis [3], which aimed to capture how various conditions, behaviors, or experiences which were previously under legal, political, or
religious surveillance, and as such were not ‘‘inherently medical,’’ were becoming increasingly defined as medical entities to be labelled and treated [4].
This has led to the common view, which can be found in the medical sociology and anthropology literature (e.g., [5]), that ‘‘Medicine used to claim
authority over the cracks and interruptions in life; now it claims authority over all of life’’ [6, p. 2]. Through subsequent analyses, this thesis has been
expanded to cover not only the development and application of medical categories, but also to capture how ‘‘the populace has internalized medical and
therapeutic perspectives as a taken-for-granted subjectivity’’ [1, p. 14], with medicalization riding on the waves of consumer and market culture.
Whether this expansion is understood in terms of different ‘‘engines of medicalization’’ [7], or by analyses charting the expanding nature of biomedicine
since roughly 1980 with the concept of ‘‘biomedicalization’’1 [8], it seems uncontroversial to claim that this concept was and is mainly understood as a
critique. In other words, ‘‘‘medicalization’ is usually used in the sense of inappropriate medicalization’’ [6, p. 35] or simply as ‘‘overmedicalization’’ [1, p.
146]. Most broadly, ‘‘The concept of medicalization has been put forward in order to name, analyse and criticise the changing role of medicine in
modern society’’ [9, pp. 90–91]. This concept has thus been used to denounce various trends in twentieth-century (Western) medicine, such as the
usage of medical categories and treatments to control deviant behavior, the widening of diagnostic categories (‘‘disease mongering’’), the
commodification of health, the problem of iatrogenesis, the tendency to obscure the social or political context of illness, the privatization of medical
practice, and the role of pharmaceutical companies in shaping diagnoses and treatments, to name a few (e.g., [1, 4, 6, 8, 10–18]). This standard
approach to the study of medicalization has problematic consequences . On the one hand,
subsequent theoretical or conceptual analyses have centered more on these various problems
and less on attempting to clarify what constitutes medical reality and practice. This is
explicitly acknowledged by many, such as when Conrad claims that his interest is not with adjudicating what is
‘‘really’’ medical but simply with describing how a definition becomes ‘‘viable’’ in a given society
[1]. While this suggests an attempt to avoid making normative claims, many have pointed out that
it takes for granted what it seeks to explain [19–22]. In other words, while this approach claims
that medicalization is a growing problem, it assumes that there is simply one ‘‘medical
model’’ and that the realm of ‘‘the medical’’ which is expanding can be more or less clearly
delineated . Even the few philosophical attempts to clarify the concept of medicalization also
carry these presuppositions [23] or set them aside and proceed ‘‘as if’’ the medical had some clarity [24]. On the other hand,
while the aim of these various researchers has been to establish the reality of this growing threat, doing so often
requires either not making justified distinctions between different practices or making
arbitrary ones. It is this muddling that I seek to address.
Wake Market CP---2AC
High deductible + HSAs fail---no bargaining, psychology, principal-agent problem
YELLOW = what the card is answering
BLUE = read
Erin C. Fuse Brown 15. Assistant Professor of Law, Georgia State University College of Law.
2015. Resurrecting Health Care Rate Regulation. Google Scholar,
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2584625.
2. Consumer-Directed Health Care Consumer-directed health care (“CDHC”) is another market-based
approach that builds off of price transparency . The typical mechanism to encourage CDHC
is through a high deductible health plan coupled with a tax-advantaged health
savings account (“HSA”).113 By giving patients some “skin in the game,” CDHC sensitizes
patients to health care costs, which leads the patient to exert market pressure on providers to
move toward more uniform prices.114 CDHC attempts to address the moral hazard problem of health insurance by
forcing the insured individual to bear the initial cost of her health care expenditure, which will cause her to ration her utilization of
services.115 In 2015, a plan is considered a high deductible plan if the deductible is at least $1300 for an individual and $2600 for a
family.116 Deductibles vary widely, and over a third of family deductibles exceed $5000. 117 The use of high deductible plans is
widespread and steadily increasing. In a 2014 survey of large employers, eighty-one percent reported offering a CHDC plan to
employees as an option, and thirty-two percent reported offering a CDHC plan as the only option, up from twenty-five percent the
previous year.118 As of 2014, twenty percent of covered employees were enrolled in a CDHC plan, compared with just four percent in
2006. 119 On the health insurance exchanges, high deductible health plans comprise approximately sixty percent to eighty percent of
plans.120 In the form of high deductible health plans, CDHC has a limited impact on hospital prices because most hospital services
will be so expensive that a patient will “blow through” her deductible, and thus be insensitive to price variations above the
deductible.121 Carl Schneider and Mark Hall identified several barriers to the goals of CDHC that
prevent patients from engaging in consumer behaviors.122 Patients may lack choices among
plans and providers .123 Moreover, despite efforts to promote transparency, patients still
lack necessary information about price and quality .124 If they lack quality data, patients
are likely to opt for the higher-priced hospitals because of the mistaken perception that
price is a proxy for quality .125 Even with price information, patients are often unable to
bargain with the hospital, either because they will not or cannot. Patients place themselves in their doctor’s hands, following
whatever advice the doctor prescribes, including at which hospital to have a procedure performed.126 Acutely sick patients are in a
particularly vulnerable position, unable to negotiate on prices for urgently needed care on the way to the emergency room or at the
bedside of a gravely ill family member.127 Empirical research has cast doubt on patients’ financial literacy and ability to process the
complex information necessary to make health care choices.128 Studies have demonstrated that higher cost-
sharing has a disproportionate, negative impact on the poor and those with chronic
illness , highlighting questions of distributive justice.129 Greater costsharing causes people to cut back not
just on unnecessary care, but needed care as well.130 Privately insured individuals with
incomes below 200 percent of poverty are significantly more likely to have deductibles
that exceed five percent of their incomes and are more likely to delay needed care as a
result.131 When individuals defer cost-saving preventive and outpatient care, they may later
consume more expensive ER and hospital services for poorly controlled illness .132
CDHC may contribute to adverse selection , with healthier (and, evidence shows, wealthier and more
educated) individuals selecting a CDHC plan and sicker people opting for plans with lower
deductibles .133 Meredith Rosenthal and Norman Daniels explain that among employer-sponsored high-deductible health
plans with HSAs, the employer contribution to the HSA tends to redistribute wealth from the unhealthy to the healthy.134 3.
Reference Pricing Like high deductibles, reference pricing also puts the individual’s own dollars at stake, but reverses who pays the
first dollar of coverage.135 Instead of making the patient pay for the first few thousand dollars of care, health plans agree to pay the
price for a given service charged by a low priced provider, and the individual is free to seek care from a range of other providers but
is responsible for the difference between that provider’s higher price and the reference price.136 Health economist Austin Frakt
illustrates the difference between deductibles and reference pricing by analogizing deductibles to being told that insurance will pay
for any Toyota you want if you pay the first $500. 137 You would likely pick the most expensive car, such as an $80,000 Land
Cruiser. With reference pricing, you are told that insurance will cover the first $15,000 of any Toyota, but you have to pay the excess
price.138 In this example, reference pricing will clearly lead to more value and price-sensitive shopping by the consumer who may
opt for a Toyota Yaris instead of a Land Cruiser. Proponents say reference pricing makes patients more sensitive to the differences in
price between hospitals than CDHC, where most hospital visits will exceed the patient’s deductible.139 The increased price
sensitivity from reference pricing creates market pressure for high-priced providers to lower their prices closer to the reference price
or else lose business. One advantage of reference pricing over CDHC is that it relies on the health plan to gather and report the
providers’ price information rather than the individual, who may not have sufficient data or wherewithal to evaluate the different
options.140 Health insurers favor reference pricing because it caps their financial responsibility for a particular service.141 There is
early evidence that reference pricing can nudge patients toward more cost-effective choices and cause high-priced providers to lower
prices closer to reference price levels.142 Payers are starting to use reference pricing for hospital or outpatient services, but generally
only for certain standardized procedures where there is wide price variation but little variation in quality, such as colonoscopy or hip
replacement.143 In addition, the service ought to be “shoppable,” that is, a nonurgent service that allows the patient time to shop
around, with readily available information regarding price and quality, and for which there are several choices of provider.144
Reference pricing also has limitations. Most health care spending is for services that are not shoppable and are therefore ill-suited
for reference pricing.145 To make up profits that it loses on reference-priced services, hospitals may simply raise prices for non-
reference priced services.146 One study suggests reference pricing may have a limited impact on total spending because it tends to
affect prices only at the highest end of the price distribution.147 Some of the barriers to CDHC, such as lack of choices, lack of
available data, reliance on physician recommendations, and impaired ability to make choices based on information given, could
similarly afflict reference pricing initiatives. Reference pricing comes with a thorny technical problem of how to set the reference
price. Set too high, and the cost-savings will be lost, with lower-priced providers raising their prices up to the reference price.148 Set
too low, and the providers may not be able to cover the cost of providing the service, leading providers to drop the service, to cost-
shift to more remunerative services, or to seek market power as a method of resisting reference pricing.149 4. Tiering and Narrow
Networks Another market approach relies upon health insurance plans to engage in active purchasing in the form of narrow or
tiered networks to pressure hospitals and other providers to restrain prices.150 In a narrow network, payers selectively contract with
a limited group of providers who will agree to lower prices in exchange for patient volume.151 Under a tiering strategy, the health
plan sorts contracted providers or service lines into tiers based on price and steers patients to the lower priced providers (the
preferred tier) using lower cost-sharing incentives.152 In their roles curating the narrow or tiered network, the health plan is the one
wielding the consumer power on behalf of the patient. Neither approach is new. Narrow networks and tiering were both strategies
widespread during the rise of managed care and HMOs in the 1980s and 1990s.153 Consumers and employers vociferously resisted
choice-limiting networks then, and it is unclear whether they will accept similarly narrowed choices today.154 Nevertheless, tiering
and narrow networks are gaining renewed attention as solutions to discipline health care prices.155 The ACA has accelerated the
revival of narrow networks because of its limits on health plans’ ability to engage in underwriting or to narrow benefits to keep
premiums down.156 Thus, one of the remaining strategies for health plans to keep their prices in check is to offer narrow networks
of providers.157 Narrow networks and tiering strategies rely upon the existence of sufficient competition among hospitals, which is
lacking in many markets.158 Without competition, powerful providers use their market power to require anti-tiering provisions in
their contracts with health plans, or else require that the plan always include the high-priced hospital in the most preferred tier.159
To address this issue, Massachusetts passed a law in 2010 prohibiting providers from using anti-tiering provisions in their plan
contracts.160 Even with such a law, health plans may have no choice but to include high priced hospitals in their network or in the
best tier because they have unique services, such as a Level I trauma facility or a Neonatal Intensive Care Unit (“NICU”), that lower
priced hospitals lack.161 In most places, health plans will be unable to exclude “musthave” providers from the highest tier due to
their market power.162 5. Market Approaches Measured Against Health Care Market Failures Market solutions to the
hospital pricing problem are intuitively pleasing: they attempt to restore market forces to a failed market. Price
transparency and reference pricing take aim at correcting information asymmetry and helping patients become informed
consumers. It may be technically difficult to implement price transparency or reference pricing, but it is plausible that well-designed
programs can prompt effective comparison shopping by patients.163 The cost-sharing imposed on individuals by
CDHC, especially in the form of reference pricing, attempts to address moral hazard and
sensitizes patients to the prices of their care. The biggest problem with market approaches to
discipline hospital prices is that they fundamentally will not work in concentrated
markets where there is little choice or competition between providers .164 And as
discussed above, concentrated hospital markets are the norm, not the exception.165 Where there is no
choice, patients cannot shop around or substitute the lower cost or higher value
provider. Where hospital-sellers have disproportionate market power, purchasers (health plans or
patients) can exert little discipline on prices through transparency, CDHC, reference
pricing, or active purchasing. Moreover, the stressful nature of most hospital
encounters makes it unlikely for transparency plus CDHC to overcome patients’ substantial
cognitive and behavioral barriers to rational consumer behavior .166 Even when
equipped with sufficient price and quality information, when it comes to serious medical
decisions, patients generally defer to their physicians .167 Buying health care is thus unlike shopping for a
car, unless one imagines buying a car while being chased by a gunman, when there are only a couple unfamiliar models to choose
from, relying upon the guidance of a trusted car salesman who tells you which car is best for your situation and also serves as your
driver as you try to get away. Although market approaches attempt to address information asymmetry
and moral hazard, the principal-agent problems persist . Health plans and/or providers are
in the best position to gather, report, and translate price and quality information for patients,
and thus market approaches build upon the existing web of principal-agent
relationships .168 Finally, market solutions like price transparency or reference pricing are
inherently limited because much of health care is not “shoppable.” 169 Acute or urgent health
care does not lend itself to comparison or price shopping , and patients end up seeking
care at the nearest hospital, the one to which the ambulance delivers them , or the one to
which they are referred to by their physician. Market solutions are premised on improving
informational deficits and sensitizing consumers to their health care costs to bring competitive forces to bear on health
care prices. As an overall strategy to discipline health care prices, however, market solutions are
fundamentally limited because they fail to address the underlying lack of
competition in hospital markets and barriers to patient consumer behavior.17
Innovation DA---2AC
AND centralizes data---that’s key
Andrew Torrance 17, Staff Writer, 3-24-2017, "Life, Liberty, and Minor Complaints: Single-
payer health care," Knight Errant, http://bsmknighterrant.org/2017/03/24/life-liberty-and-
minor-complaints-single-payer-health-care/
Furthermore, a single-payer health care system would create a cooperative, comprehensive
medical database for all patient and medical records . In the current system, Health
Maintenance Organizations and insurance companies privately possess their own
patient records , and these are accessible solely to people within the company ; this
hinders medical research on what causes certain diseases. If doctors and researchers could examine
the entirety of a population affected by a certain medical condition and the procedures that aid
in the treatment of said condition, remarkable steps could be taken towards curing a
multitude of diseases.
DA is wrong: research is federal, squo’s not innovative, profit margins shield, and
cost sharing turns access
Kesselheim et al 16 – Aaron S. Kesselheim, Associate Professor of Medicine at Harvard
Medical School and a faculty member in the Division of Pharmacoepidemiology and
Pharmacoeconomics in the Department of Medicine at Brigham and Women’s Hospital, M.D.
and J.D. from University of Pennsylvania School of Medicine and Law School, MPH from
Harvard School of Public Health, primary care physician at the Phyllis Jen Center for Primary
Care at Brigham & Women’s Hospital, Jerry Avorn, Professor of Medicine at Harvard Medical
School and Chief of the Division of Pharmacoepidemiology and Pharmacoeconomics in the
Department of Medicine at Brigham and Women’s Hospital, M.D. from Harvard Medical School
in 1974, and completed a residency in internal medicine at the Beth Israel Hospital in Boston,
Ameet Sarpatwari, PhD in epidemiology at the University of Cambridge, Instructor in Medicine
at Harvard Medical School, an Associate Epidemiologist at Brigham and Women’s Hospital, and
Assistant Director of the Program On Regulation, Therapeutics, And Law (PORTAL) within the
Division of Pharmacoepidemiology and Pharmacoeconomics, JD at the University of Maryland
as a John L. Thomas Leadership Scholar, Principal Investigator on a Greenwall Foundation
Making a Difference in Real-World Bioethics Dilemmas grant and a Faculty Affiliate with the
Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School
and the Behavioral Insights Group at the Harvard Kennedy School (“The High Cost of
Prescription Drugs in the United States: Origins and Prospects for Reform,” Journal of the
American Medical Association, Vol. 316, No. 8, pgs. 858–871, August 23rd, Available to
Subscribing Institutions)
Justifications for High Drug Prices The pharmaceutical industry has maintained that high drug prices
reflect the research and development costs a company incurred to develop the drug, are
necessary to pay for future research costs to develop new drugs, or both. It is true that industry often makes
expensive investments in drug development and commercialization, particularly through late-stage clinical trials, which can be
costly.84 These assertions have been used to justify high prices on the grounds that if drug prices
are constrained, the pipeline of new medications will be adversely affected. Some economic analyses
favored by the pharmaceutical industry contend that it costs $2.6 billion to develop a new drug that makes it to market.85 However, the rigor of this widely cited number has been disputed.86,87 A number of factors weigh against these rationales for high drug prices. First, important innovation that leads to new drug products is
often performed in academic institutions and supported by investment from public
sources such as the National Institutes of Health. A recent analysis of the most transformative drugs of the last
25 years found that more than half of the 26 products or product classes identified had their
origins in publicly funded research in such nonprofit centers .88 Other analyses have
highlighted the importance of small companies, many funded by venture capital .89,90 These
biotech startups frequently take early-stage drug development research that may have its origins
in academic laboratories and continue it until the product and the company can be acquired by a
large manufacturer, as occurred with sofosbuvir. Arguments in defense of maintaining high drug prices
to protect the strength of the drug industry misstate its vulnerability . The biotechnology and
pharmaceutical sectors have for years been among the very best-performing sectors in the US
economy. The proportion of revenue of large pharmaceutical companies that is invested in
research and development is just 10% to 20% (Table 4); if only innovative product
development is considered, that proportion is considerably lower .91 The contention that
high prescription drug spending in the United States is required to spur domestic innovation has not
been borne out in several analyses .92 A more relevant policy opportunity would be to address the stringency of
congressional funding for the National Institutes of Health, such that its budget has barely kept up with inflation for most of the last
decade. Given the evidence of the central role played by publicly funded research in generating discoveries that lead to new
therapeutic approaches, this is one obvious area of potential intervention to address concerns about threats to innovation in drug discovery. Thus, there is little evidence of an association between research and development costs and drug prices93; rather, prescription drugs are priced in the United States primarily on the
basis of what the market will bear. This explanation also helps to account for several high-profile
case studies, including high-priced new branded products94 and exorbitantly priced generic drugs described above.95 In
preparation for recent hearings on this topic, the US House Committee on Oversight and Government Reform subpoenaed internal
correspondence from Turing and Valeant Pharmaceuticals, which had sharply increased the prices of older drugs the companies had
acquired. The investigation revealed, for example, that Turing received “no pushback from payors” when it increased “Chenodal
price 5x... [Thiola] price 21x... [and Daraprim] price 43x.”96 Similarly, Gilead spent $11 billion to purchase sofosbuvir from
Pharmasset, a small biotechnology firm that developed the drug, based in part on federally funded research led by an investigator at
Emory University.97 Gilead recouped almost all of this cost in the first year that sofosbuvir was on the market, recording sales of
$10.3 billion in 2014.98 In December 2015, the
US Senate Committee on Finance released a detailed
report based on its access to internal company documents on Gilead’s strategies to maximize
the prices it could charge for both that drug and its planned successor, which the company also owned.99
In the current system for drug payment in the United States, few options exist to counter this approach. Companies should of course
be rewarded fairly for the research innovations they make that help generate new drug products and for their costly trial work that
facilitates the assessment and availability of new medications. But providing them with large incentives to do the opposite is
counterproductive. Clinical Consequences of High Drug Prices The high cost of prescription drugs in the United States
has clinical as well as economic consequences .100,101 Even though more Americans have drug coverage
as a result of the Medicare drug benefit plan and the Patient Protection and Affordable Care Act, cost-containment
strategies in recent years have shifted an increasing share of drug expenses to patients .102
Private insurers have increased deductibles103 and most co-payments, and added a new payment tier for
certain specialty drugs in which patients must pay coinsurance—often between 20% and 33% of the total drug price—rather
than a simple co-payment.104 Although such cost-shifting measures have helped “bend the cost curve” for employers and payers,
they can reduce use of effective medications .105,106 Almost a quarter of 648 respondents
to a 2015 poll reported that they or another family member did not fill a prescription in the last year
because of cost.107 In other studies, patients who were prescribed a costly branded product rather than a more affordable
generic alternative were found to adhere to their regimen less well than those receiving a similar generic drug12 and to have worse
health outcomes.108 Nonadherence due to all causes has been estimated to contribute to $105 billion in
avoidable health care costs annually.109 In some cases, manufacturers have attempted to circumvent higher co-payments
by providing patients with coupons that reimburse their out-of-pocket expenses.110 Coupons can be useful for patients with no other
option, but they leave the insurer obliged to pay the much larger amount of each prescription’s costs, thereby increasing health care
spending. This approach has become common for branded drugs that have comparable but much less expensive alternatives.111
Faced with fixed health care budgets, states with higher drug costs for their Medicaid programs have had to
reduce other services or increase health care eligibility requirements.112 Several state
Medicaid programs, for example, have imposed nonevidence-based policies to restrict sofosbuvir, including
denying coverage to users of alcohol or other drugs.113,114
Federalism---2AC
No link---aff prescribes a blueprint, states have latitude for implementation which
retains flexibility
Sandro Galea 7-18, Sandro Galea, MD, is the Robert A. Knox professor and dean of the Boston
University School of Public Health. He is the author of the book Healthier: Fifty Thoughts
on the Foundations of Population Health, 7-18-2017, "Is the U.S. Ready for a Single-Payer
Health Care System?," Harvard Business Review, https://hbr.org/2017/07/is-the-u-s-ready-for-
a-single-payer-health-care-system
But are these concerns warranted? Doctors who fear losing their autonomy need only look north to see how a single-payer
system can work without encroaching on the independence of physicians. Canada has had a
single-payer model for decades, and there’s no government takeover of its health care system in sight.
Most services are still provided by the private sector, and most physicians are still self-employed.
While health expenditures remain high, Canadians nevertheless enjoy better health outcomes at
lower cost than the United States, whose population’s health is mediocre despite ever-higher spending on medical care.
Canada’s success stems from a few basic tenets . Its system is structured around a federal
requirement to provide coverage for necessary services such as doctor and hospital visits. While the cost of this
care is covered by the taxpayer, the task of providing it is decentralized to each of the country’s 13 provinces and territories. Each
region has wide latitude to innovate — as long as it honors the basic guarantee of providing free point-of-care treatment to all
citizens for certain essential services, funded through a central payer. This is an important point. The single-payer
approach is often characterized as a gateway to Byzantine regulation. Yet the reality is it is a
fundamentally simple, even elegant, concept: Everybody gets the coverage that everybody
pays for. Within this framework, there is much room for maneuver .

NOR is it zero sum


Erin Ryan 12, Law professor @ Lewis & Clark Law School, “Negotiating Federalism Past the
Zero-Sum Game,” Administrative and Regulatory Law News, Vol. 38, No. 1 (Fall, 2012)
These instances of intergovernmental bargaining offer a means of understanding the relationship
between state and federal power that differs from the stylized model of “zero-sum”
federalism that has come to dominate political discourse. The zero-sum model sees winner-takes-
all jurisdictional competition between the federal and state governments for power, emphasizing sovereign
antagonism within the federal system. Yet countless real-world examples of interjurisdictional governance show that the boundary between state and federal authority is really an ongoing project of
negotiation , taking place on levels both large and small. Working in a dizzying array of regulatory contexts, state and federal
actors negotiate over both the allocation of policymaking authority and the substantive terms of the
mandates that policymaking will impose. Bargaining takes place both in policy realms plagued by legal
uncertainty about which side has the final say, and in realms unsettled by uncertainty over whose decision
should trump, regardless of legal supremacy. Reconceptualizing the relationship between state and federal
power as one heavily mediated by negotiation reveals just how far federalism practice has
departed from the zero-sum rhetoric . Better still, it offers hope for moving beyond the more
paralyzing features of the federalism discourse, and toward the kinds of good governance that Americans of all political
stripes hope for.
AT: Warming Impact
Adaptation and resilience solve warming
Hart 15 (Michael, he’s the Simon Reisman chair at the Norman Paterson School of
International Affairs at Carleton University in Ottawa, former Fulbright-Woodrow Wilson
Center Visiting Research, he was also a Scholar-in-Residence in the School of International
Service and a Senior Fellow in the Center for North American Studies at American University in
Washington, a former official in Canada’s Department of Foreign Affairs and International
Trade, where he specialized in trade policy and trade negotiations, MA from the University of
Toronto and is the author, editor, or co-editor of more than a dozen books, “Hubris: The
Troubling Science, Economics, and Politics of Climate Change”, google books)
As already noted, the IPCC scenarios themselves are wildly alarmist, not only on the basic science but also on the underlying economic assumptions, which in turn drive the alarmist impacts. The result cannot withstand critical analysis. Economists Ian Castles and David Henderson, for example, show the extent to which the analysis is driven by the desire to reach predetermined outcomes.50 Other economists have similarly wondered what purpose was served by pursuing such unrealistic scenarios. It is hard to credit the defense put forward by Mike Hulme, one of the creators of the scenarios, that the IPCC is not engaged in forecasting the future but in creating "plausible" story lines of what might happen under various scenarios.51 Each scare scenario is based on linear projections without any reference to technological developments or adaptation. If, on a similar linear basis, our Victorian ancestors in the UK, worried about rapid urbanization and population growth in London, had made similar projections, they would have pointed to the looming crisis arising from reliance on horse-drawn carriages and omnibuses; they would have concluded that by the middle of the 20th century, London would be knee-deep in horse manure, and all of the southern counties would be required to grow the oats and hay to feed and bed the required number of horses. Technology progressed and London adapted. Why should the rest of humanity not be able to do likewise in the face of a trivial rise in temperature over the course of more than a century?
The work on physical impacts is equally over the top. All the scenarios assume only negative impacts, ignore the reality of adaptation, and attribute any and all things bad to global warming. Assuming the GHG theory to be correct means that its impact would be most evident at night and during the winter in reducing atmospheric heat loss to outer space.52 It would have greater impact in increasing minimum temperatures than in increasing maximum temperatures. Secondary studies, however, generally ignore this facet of the hypothesis. The IPCC believes that a warmer world will harm human health due, for example, to increased disease, malnutrition, heat-waves, floods, storms, and cardiovascular incidents. As already noted, there is no basis for the claim about severe-weather-related threats or malnutrition. The claim about heat-related deaths gained a boost during the summer of 2003 because of the tragedy of some 15,000 alleged heat-related deaths in France as elderly people stayed behind in city apartments without air conditioning while their children enjoyed the heat at the sea shore during the August vacation. Epidemiological studies of so-called "excess" deaths resulting from heat waves are abused to get the desired results. Similar studies of the impact of cold spells show that they are far more lethal than heat waves and that it is much easier to adapt to heat than to cold.53 More fundamentally, this, like most of the alarmist literature, ignores the basics of the AGW hypothesis: the world will not see an exponential increase in summer, daytime heat (and thus more heat waves), but a decrease in night-time and winter cooling, particularly at higher latitudes and altitudes. Based on the AGW hypothesis, Canada, China, Korea, Northern Europe, Australia, New Zealand, South Africa, Chile, and Argentina will see warmer winters and warmer nights. There are clear benefits to such a development, even if there may also be problems, but the AGW industry tends to ignore the positive aspects of their alarmist scenarios. The feared spread of malaria, a much repeated claim, is largely unrelated to climate. Malaria's worst recorded outbreak was in Siberia long before there was any discussion of AGW. Similarly, the building of the Rideau Canal in Ottawa in the 1820s was severely hampered by outbreaks of malaria due to the proximity of mosquito-infested wetlands in the area. Malaria remains widespread in tropical countries today in part because of the UN's lengthy embargo on the use of DDT, the legacy of an earlier alarmist disaster. Temperature is but one factor, and a minor one at that, in the multiple factors that affect the rise or decline in the presence of disease-spreading mosquitoes. Wealthier western countries have pursued public health strategies that have reduced the incidence of the disease in their countries. Entomologist Paul Reiter, widely recognized as the leading specialist on malaria vectors and a contributor to some of the early work of the IPCC, was aghast to learn how his careful and systematic analysis of the potential impacts had been twisted in ways that he could not endorse. In a recent paper, he concludes: "Simplistic reasoning on the future prevalence of malaria is ill-founded; malaria is not limited by climate in most temperate regions, nor in the tropics, and in nearly all cases, 'new' malaria at high altitudes is well below the maximum altitudinal limits for transmission. Future changes in climate may alter the prevalence and incidence of the disease, but obsessive emphasis on 'global warming' as a dominant parameter is indefensible; the principal determinants are linked to ecological and societal change, politics and economics."54
Catastrophic species loss similarly has little foundation in past experience.55 Even if the GHG hypothesis were to be correct, its impact would be slow, providing significant scope and opportunity for adaptation, including by flora and fauna. One of the more irresponsible claims was made by a group of UK modelers who fed wildly improbable scenarios and data into their computers and produced the much-touted claim of massive species loss by the end of the century. There are literally thousands of websites devoted to spreading alarm about species loss and biodiversity. Global warming is but one of many claimed human threats to the planet's biodiversity. The claims, fortunately, are largely hype, based on computer models and the estimate by Harvard naturalist Edward O. Wilson that 27,000 to 100,000 species are lost annually - a figure he advanced purely hypothetically but which has become one of the most persistent of environmental urban myths. The fact is that scientists have no idea of the extent of the world's flora and fauna, with estimates ranging from five million to 100 million species, and that there are no reliable data about the rate of loss. By some estimates, 95 per cent of the species that ever existed have been lost over the eons, most before humans became major players in altering their environment. A much more credible estimate of recent species loss comes from a surprising source, the UN Environmental Program. It reports that known species loss is slowing, reaching its lowest level in 500 years in the last three decades of the 20th century, with some 20 reported extinctions despite increasing pressure on the biosphere from growing human population and industrialization.57 The alarmist community has also introduced the scientifically unknown concept of "locally extinct," often meaning little more than that a species of plant or animal has responded to adverse conditions by moving to more hospitable circumstances, e.g., birds or butterflies becoming more numerous north of their range and disappearing at its extreme southern extent. Idso et al. conclude: "Many species have shown the ability to adapt rapidly to changes in climate. Claims that global warming threatens large numbers of species with extinction typically rest on a false definition of extinction (the loss of a particular population rather than entire species) and speculation rather than real-world evidence. The world's species have proven very resilient, having survived past natural climate cycles that involved much greater warming and higher CO2 concentrations than exist today or are likely to exist in the coming centuries."
Military DA---2AC
No one joins the military for healthcare. AND, those that do make readiness worse.
Igor Volsky 8, health care writer @ ThinkProgress, “NYT Blogger: No Health Care For You! Because It
Would Undermine Military Recruitment.” ThinkProgress. May 30th, 2008.
https://thinkprogress.org/nyt-blogger-no-health-care-for-you-because-it-would-undermine-
military-recruitment-102bc4cf8511/
In today’s New York Times, blogger Floyd Norris suggests that universal health care reform
would reduce military recruitment rates by undermining the military’s generous health benefit incentive: A significant
factor for many recruits, it turns out, is the military’s generous health benefits for dependants…It seems a bit perverse that the
incentives for a young person with children to join are greater than the incentives for his childless friend. But that is the way it is. All
that could change if the push for some kind of national health insurance program were to be successful. The notion that Americans
should be deprived of health insurance for national security purposes is both perverse and illogical. In fact, Norris’
implication, which suggests that the government must maintain a disparity between civilian and
military entitlements, overstates the financial benefit of enlisting and contradicts the
needs of the military. Few Americans cash-in from their military service. “The Department
of Defense estimates that its employees take a $20,000-per-year pay-and-benefits hit
relative to civilians the same age throughout their careers.” Moreover, according to Christopher
Jehn, former U.S. assistant secretary of defense for force management and personnel, soldiers
who are forced into service weaken the military’s capabilities . Second, because service
members are all volunteers, the military has far fewer discipline problems, greater experience (because of less turnover) and thus,
more capability. Based on this experience, U.S. military leaders today are thoroughly convinced that a return to the draft could only
weaken the armed forces. This is why, when students at the Naval Postgraduate School (mainly U.S. military officers), are asked
whether they would like to return to the draft, there are few takers. As one put it, “Why would I want to be in charge of people who
don’t want to be there?”

Compensation would be adjusted to maintain incentives to join
Thoma 8 (Mark Thoma, Professor of Economics at the Department of Economics of the
University of Oregon. “Universal Health Care and Military Retention”. May 30, 2008.
http://economistsview.typepad.com/economistsview/2008/05/universal-healt.html)
Continuing a discussion of this topic from not too long ago, the right way to do this is to state the goals we
are trying to reach, then build incentives into the policies that direct people toward those goals
with as few negative consequences as possible. One possible goal is retention. If you want people to stay longer,
deferred compensation schemes are a way to accomplish that goal. We need to decide how many
people we want to stay for additional terms, and then set the compensation incentives accordingly (these can be tweaked as needed,
e.g. you can have incentives for reenlistment at each decision point, or you can discourage reenlistment after some number of terms
if there is some reason to do so). Yes, it
may require that the government pay people serving in the
military more , at least those who stay longer, but that is simply what it will cost to reach the goal, that's
the price to command these resources. People who applaud the ability of markets to value resources should understand that. If it
costs too much to induce sufficient reenlistment, i.e. if the costs of producing higher retention rates are greater
than the benefits, then it's not a very good policy anyway. But if the goals are different, e.g. if the goal is to provide
educational benefits to make up for lost opportunities in the private sector due to service in the military, the policy will, of
course, be different as well. When evaluating a proposed policy to, for example, increase educational benefits, all of the
consequences, including the effects on retention, should be examined. But this is part of a cost benefit calculation. If the educational
benefit - the goal of the policy - exceeds the retention cost, then it's still worthwhile. And it may not be necessary to give up on the
retention goal just because you offer educational benefits, one does not have to be traded against the other. It's possible - if you are
willing to pay the cost - to offer both higher education benefits and higher deferred compensation so that both goals are attained.
More help for education is available for those who choose to leave when their term ends, but since deferred compensation is higher
for those who reenlist, just as many stay as before. Whether it's worth it to do this is a matter of comparing the costs and benefits, but increasing education benefits does not have to lower retention rates. If national health care is enacted and that lowers the incentive to enlist, or to reenlist, then the compensation levels will have to be adjusted to compensate, but it doesn't have to change retention rates or the ability to provide education benefits after people leave the military if we are willing to pay what's needed to induce the desired behavior.
Tax Reform---2AC
Turn: the public loves the plan. That’s key to save Trump.
Branko Marcetic 3-29, editorial assistant at Jacobin, 3-29-2017, "Democrats Against Single
Payer," Jacobin, https://www.jacobinmag.com/2017/03/single-payer-health-care-medicare-
obamacare-sanders-clinton-democrats
What makes this steadfast opposition even more puzzling is the fact that the moment is ripe for making the
push for single payer . It’s not just that the GOP has spectacularly failed to gut Obamacare .
Polling suggests Americans are more amenable to the idea than ever (even if not all polls are as rosy as
Gallup’s). Meanwhile, the last few months have seen a spate of editorials in local newspapers extolling the virtues of single payer and
the need to pass it. The long list includes the: Redding Record Searchlight, Berkshire Eagle, Reno Gazette-Journal,
Fort Wayne Journal-Gazette, Grass Valley and Nevada County’s Union, Winston-Salem Journal, Eugene, Oregon’s Register-Guard,
Napa Valley Register, and the Florida Times-Union. Similar editorials have also appeared in major papers like USA Today, the LA
Times, and the Baltimore Sun. Even Mark Cuban has come out in favor of the policy. Do Democrats really want to be outflanked
on the left by Mark Cuban? The party should thank its lucky stars President Trump remains tethered to a
radically anti-government GOP which hates the thought of the government stepping in to help people in need. Were Trump
allowed to run free — and were his commitment to economic populism authentic and not just a cynical appropriation of a
few slogans — he might actually adopt some form of single-payer proposal himself (no doubt
with some pointed, Roosevelt-style “tactical” exclusions of certain marginalized groups), shoring up his standing as the
self-proclaimed champion of the “forgotten man.” After all, such policies tend to be quite
popular once ordinary Americans start receiving their benefits. The Trump administration
has been unwilling to launch the challenge to corporate interests that would be needed to make such an effort a
success, perhaps because the Ayn Rand–worshiping GOP he has thrown his lot in with won’t entertain the
thought. But the Democrats should be wary of leaving a space on their left open for a
cynical right-wing populist to fill , whether now or down the line. After all, that’s a big part of what
got them into their current predicament.

Tax reform won’t pass – assumes Trump’s push
Faler 9/20/17 (Brian, Politico, "Senate Republicans Declare Independence on Tax Reform")
Senate Republicans are making it clear they’re not going to fall in line behind President Donald
Trump’s tax reform plan, despite a months-long effort with House leaders and top
administration officials to have a smooth rollout for a major overhaul.¶ The proposal by the
"Big Six" will only be advisory , warned Senate Finance Chairman Orrin Hatch of Utah, himself a member of the
group of top GOP lawmakers and administration officials. His Senate colleagues are following that up with plans to seek a $1.5
trillion tax cut, putting them at odds with the House, where Speaker Paul Ryan, another Big Six member, wants permanent changes
that don't add to the deficit.¶ It means the coming outline for a tax-code revamp will provide less of a guide than Republicans had
once hoped. That also threatens a repeat of the sort of struggles Republicans faced uprooting the
Affordable Care Act. Though many Republicans have argued they'll have an easier time with
taxes — the party's top priority in Congress ahead of next year's midterm elections — Hatch says the opposite may be
true.¶ "Tax reform is always the hardest," he said. "It's more complex, more difficult, more
chances that you'll have people who differ on matters — a lot more ideas."¶ The issue for Hatch is that
while the Big Six — which also includes Senate Majority Leader Mitch McConnell, House Ways and Means Chairman Kevin
Brady of Texas, National Economic Council Director Gary Cohn, and Treasury Secretary Steven Mnuchin — would like to
hand down a pre-cooked tax plan, and then persuade rank-and-file lawmakers to support it, the
Senate is not a top-down organization .¶ Senators have more power to shape legislation, and
Republicans can afford to lose only two votes , which means every senator will be a king
— much as they have been during the Obamacare repeal battles.¶ That dictates a more bottom-
up approach, with Hatch looking for areas of agreement among his colleagues, rather than imposing his views upon them.
Football thumps
Jackson 9/26/17 (David, USA Today, "Analysis: This is a major political week for Trump. But
his NFL feud is overshadowing his agenda")
This was supposed to be a big week for President Trump.¶ His hopes of repealing and replacing
Obamacare hang in the balance. He's set to push a major tax reform plan. He's campaigned hard for a
Republican candidate whose Alabama primary race is on Tuesday. And he's trying to project strength and efficiency as his
administration grapples with the aftermath of hurricanes in Texas, Florida, and Puerto Rico – and the nuclear threat from North
Korea. ¶ So
what's Trump talking about at the start of this critical week?¶ Football players .¶ So far,
Trump appears to be forgoing a closing argument on his policy proposals – and instead
escalating his feud with the National Football League and superstar athletes. Ever since he called
football players who sit or kneel during the national anthem "sons of b-----" at a rally on Friday and encouraged the NFL to fire them
for their political protest, Trump has perpetuated the controversy – for four straight days.
Trump has tweeted or retweeted comments about the NFL flap at least 18 times since Saturday morning. Star football and basketball
athletes have in turn accused Trump of racially charged threats to free speech – and the squabble playing out over Twitter and the
news media is dwarfing his message on the rest of his agenda.
1AR
Disease
AT: Primary Care
Markets correct physician flight
Caroline Sommers 9, Caroline Sommers is a 2010 JD/MBA Candidate at Pepperdine
University, 2009, ARTICLE: THERE IS NO PERFECT SOLUTION TO HEALTH CARE IN
AMERICA, 2 J. Bus. Entrepreneurship & L. 424, Lexis
One possible solution to the United States' health care spending problem is to significantly reduce "payments to providers of
health services." 78 Such a reduction in the income of physicians, nurses, and other skilled health sector labor would "eliminate a
substantial portion of the alleged excess in U.S. health care expenditures." 79 However, "the return on
investment in medical education is pretty much in line with the . . . return . . . for other
professional occupations, such as an attorney or business executive." 80 Thus, any reduction in income would
result in a decrease in the number of individuals going into the medical profession
since those individuals would get more return on their educational investment in other
professions. 81 In addition, any reduction could be short lived since a low supply of physicians
and nurses coupled with the high demand for their services would likely drive costs back up .
82 Finally, given the current shortage of nurses, any significant reduction in wages would only exacerbate the situation. 83
Framing
1% deficit is 100% deficit---there’s an invisible threshold and it’s impossible to
predict the outbreak
Conniff 13 (Richard Conniff, science journalist, writes for Time, Smithsonian, Atlantic
Monthly, The New York Times Magazine, National Geographic, Audubon Magazine, included in
The Best American Science and Nature Writing in 2000, 2002, and 2006, Guggenheim
Fellowship, Loeb Journalism Award, “Guardians Against a Global Pandemic: Inside the battle to
protect all of us from the next Superbug,” Men’s Health, 4-8-2013,
http://www.menshealth.com/health/guardians-against-global-pandemic?fullpage=true)
Last September, a 49-year-old Qatari man who'd recently traveled to Saudi Arabia was hospitalized in Doha with a nasty respiratory illness. He
deteriorated rapidly, and doctors promptly airlifted him to a London hospital, where he wound up on life support with kidney and lung failure. From
respiratory tract samples, investigators soon teased out an unknown coronavirus—the same one that had just killed an otherwise healthy 60-year-old in
Saudi Arabia.¶ For one tense moment, epidemiologists thought they might be witnessing a replay of the devastating 2003 SARS epidemic, also brought
on by a coronavirus. But the threat this time looked worse: Three million people were about to descend on Saudi Arabia for the hajj, a Muslim
pilgrimage to Mecca already well known for the overnight global redistribution of illnesses via passenger jet.¶ Disease detectives of all specialties caught
the next available flights into the heart of the potential outbreak. Epidemiologists tracked down anyone who had been even remotely associated with
the victims. Veterinarians wearing protective gear went to a farm that one of the victims had visited; they took samples from hundreds of domestic and
wild animals in order to identify the species from which the virus had jumped to humans. This effort, unseen by the public but involving hundreds of
experts around the world, soon established that the disease did not, in fact, spread easily from one person to another. The hajj wasn't a hot zone after
all.¶ It was a lucky break. As of early March, the new virus had sickened only 14 people and killed eight. But the episode was also a reminder that the
supply of emerging diseases in the modern world is almost eye-bleedingly endless, and that they
can turn up anywhere. One such pathogen, West Nile virus, killed 243 people in the United States last year. And a Denver hospital last
summer experienced an alarming outbreak of a notorious New Delhi "superbug," a bacteria with broad resistance to almost all
antibiotics. Health officials will tell you that the Big One, a disease outbreak on the order of the influenza pandemic
of 1918, could happen any day—and that sooner or later it almost certainly will.¶ They'll also tell you that men
in particular need to pay attention to the potential hazards: We travel more than women, particularly for business. Our trips tend to take us to more-
remote destinations. So maybe it shouldn't come as a surprise that we also have a much higher incidence of malaria, dengue fever, hepatitis, and
Legionnaires' disease (which last year killed 13 people in Quebec City, and three at a downtown Chicago Marriott hotel)—and perhaps other diseases yet
unknown. (Nervous about germs? Pick up a copy of Don't Get Sick, a panic-free pocket guide to living in a germ-filled world.)¶ The good news?
Science has become remarkably adept at identifying and containing potential outbreaks right at
the start, even in the most remote locations, and often when only a handful of people—rather than
hundreds—have become sick. In other words, they generally halt the outbreak before it can turn up on a 747 bound for New York City. ¶ Some
of the credit goes to rapidly advancing technologies, from Internet data mining to DNA fingerprinting. In the early 1980s, for instance, it took 3
devastating years to identify the virus that causes AIDS. With modern gene sequencing, says Columbia University virus hunter W. Ian Lipkin, M.D., it
would take just 48 hours today. And part of the credit belongs to governments, which have learned painful lessons about the consequences of allowing a
new disease to get out of hand: Since 1981, AIDS has killed more than 30 million people worldwide, with no end in sight. (How dangerous is AIDS in
2013? Here's What You Need to Know About HIV Today.) ¶ But if
we are lucky enough to see another year pass without
some pandemic lurching up out of nowhere to kill vast swaths of humanity, it's mainly because
of the people who now constantly watch for early signs of trouble—as well as the ones who
parachute in when things go wrong to save lives and stop epidemics. They tend to be unusual characters, people
who can chat casually about "flavors" of Ebola and about the addictive thrill of their work on the front lines of possible outbreaks. But they also know
firsthand what it takes to keep the world safe—and how to stay healthy themselves, even as people all around them die. ¶ At CDC headquarters in
Atlanta one day recently, as the coronavirus investigation was wrapping up, a daily map of trouble spots included an Ebola outbreak in the Democratic
Republic of the Congo, Marburg fever in Uganda, cholera in Haiti, polio in Pakistan, and dengue fever in Portugal. Hantavirus, which is transmitted
through urine, droppings, or saliva mainly from deer mice (and which also disproportionately affects men), had recently killed three vacationers at
Yosemite National Park, and a case of Crimean-Congo hemorrhagic fever had just turned up in, of all places, Glasgow, Scotland.¶ It is a dangerous
world out there, especially because of the kinds of travel we now consider normal. In his office in the division of global migration and quarantine at the
CDC, director Martin Cetron, M.D., plays a computerized display tracking a single day's passenger flights, streams of yellow lights gently flowing in
from the farthest corners of the earth, coalescing in bright megalopolitan splotches of light, then radiating outward again. "This is what makes me
nervous," he says.¶ Nearly a billion people a year cross international borders, some of them inevitably
carrying infections. Each international flight landing on U.S. runways also carries, on average, 1.6 live
mosquitoes. In 1999, one theory holds, some of these jet-setting mosquitoes may have delivered West Nile encephalitis to New York. West Nile
has since spread to 48 states and killed about 1,500 in the United States. As bad as that outbreak was, afflictions that are far more
widespread may yet come if what Dr. Cetron calls the "invisible infrastructure" of disease prevention
ever falters.
AT: Defense
Best studies disprove burnout
Karl-Heinz Kerscher 14 – professor and management consultant, “Space Education”,
Wissenschaftliche Studie [scientific study], 2014, 92 pages
The death toll for a pandemic is equal to the virulence, the deadliness of the pathogen or pathogens,
multiplied by the number of people eventually infected. It has been hypothesized that there is an
upper limit to the virulence of naturally evolved pathogens. This is because a pathogen that quickly
kills its hosts might not have enough time to spread to new ones, while one that kills its hosts
more slowly or not at all will allow carriers more time to spread the infection, and thus likely out-compete a
more lethal species or strain. This simple model predicts that if virulence and transmission are not
linked in any way, pathogens will evolve towards low virulence and rapid transmission. However, this
assumption is not always valid and in more complex models, where the level of virulence
and the rate of transmission are related, high levels of virulence can evolve. The level of
virulence that is possible is instead limited by the existence of complex populations of hosts, with
different susceptibilities to infection, or by some hosts being geographically isolated. The size of the host population and competition
between different strains of pathogens can also alter virulence. There
are numerous historical examples of
pandemics that have had a devastating effect on a large number of people, which makes
the possibility of global pandemic a realistic threat to human civilization.
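A quick analytic gloss on the card's verbal model (our own sketch and notation, not Kerscher's): the toll claim and the virulence-transmission tradeoff it describes can be written as

% hypothetical notation for the card's verbal claim, not the author's own formula
% D = expected death toll, v = case-fatality rate ("virulence"), N = number eventually infected
\[ D = v \times N \]

The card's point is that \(v\) and \(N\) are usually negatively linked: a pathogen that kills quickly leaves less time to transmit, so \(N\) falls and selection favors lower \(v\). The exception it flags is when virulence and transmission are positively linked, in which case high-virulence strains can evolve and persist.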

Defer aff—we cognitively underestimate disease.
Patrick S. ROBERTS 8. Fellow with the Program on Constitutional Government at Harvard
and assistant professor with the Center for Public Administration and Policy at Virginia Tech.
Review of Richard Posner’s “Catastrophe: Risk and Response.” Homeland Security Affairs 4(1).
Emory Libraries.
Most people have difficulty thinking about abstract probabilities as opposed to events they have
observed. Human mental capacity is limited, and startling events such as the attacks of September 11 trigger our
attention. But evaluating risk requires paying attention to what we do not see. There has been
surprisingly little attention in the popular media given to pandemic flu, even though influenza killed
approximately twenty million people in 1918-1919. The disease has no cure, and vaccines are difficult
to produce because of the mutability of the virus. People from all walks of life pay greater attention to issues in
recent memory and tend to give greater weight to confirmatory evidence; the cumulative effect is to underprepare
for catastrophe.
AT: Dispersal
Dispersal doesn’t matter because of zoonosis, mutation, and globalization
CNA 17, California Nurses Association, January 2017, “SARS, EBOLA, AND ZIKA: What
Registered Nurses Need to Know About Emerging Infectious Diseases,” accessed via Google
Cache
[ INTRODUCTION ] Infectious diseases are a part of life, from the bubonic plague of the 15th century that decimated populations in Europe to the Ebola outbreak of 2015 that has killed over 10,000 people in West Africa. Science and technology have allowed us to escape the effects of many diseases through vaccines like yellow fever and rubella. Since 1975, however, over 30 new diseases have appeared, including AIDS, Ebola, Lyme disease, Legionnaires' disease, and antibiotic-resistant organisms. Most of these new infections are caused by pathogens present in the environment but infecting a new host or different population. Rarely, new pathogens may evolve to cause a new disease. New or newly noticed diseases are not the only concern. Old diseases, like malaria and cholera, have made comebacks. Underfunded, declining public health programs and crowded, poor urban environments foster the transmission of diseases that spread through social contact between people, like tuberculosis and diphtheria. Vector-borne infections have also reappeared due to climate change and human disruption of ecosystems. Arboviruses, which are viruses spread by mosquitoes and ticks, are responsible for more than 130 human diseases, and the ranges of the vectors are rapidly expanding. Nurses are at the forefront of healthcare and are in a position to recognize new and re-emerging infectious diseases. Nurses
are often the first to be exposed to infectious diseases. During an ongoing epidemic, little may be known about the disease, how it is transmitted, or what kinds of protections healthcare workers need. In these situations, it is vital — literally — that hospitals and other
healthcare employers adhere to the precautionary principle — even in the face of scientific uncertainty, protective measures should be taken. In this home study, you will read about three recently emerged or re-emerged infectious dis- eases. Primary and secondary
sources are used to demonstrate the kinds of literature that emerge surrounding infectious disease outbreaks. The conditions that led to the rise and/or spread of the outbreak into an epidemic are discussed. I. SEVERE ACUTE RESPIRATORY SYNDROME (SARS)
The first major epidemic of the 21st century, the SARS epidemic of 2003 began in China and spread globally. The progression of the epidemic is described and the forces of urbanization and globalization on the emergence of th e novel disease are discussed. II. ZIKA
VIRUS DISEASE The current Zika epidemic began in Brazil in 2015 and has rapidly expanded to other Latin American and Caribbean countries in late 2015 and early 2016. The status of the epidemic is described. The impact of climate change and fragmented public
health infrastructure on the emergence of the epidemic are discussed. III. EBOLA VIRUS DISEASE The origin and progression of the 2014 Ebola epidemic originating in West Africa is described. The spread of Ebola to the United States is dis- cussed in detail. The
contributions of inadequate protections for healthcare workers and the frag- mented public health infrastructure of the United States are discussed. Page 3 3 [ SECTION I ] SEVERE ACUTE RESPIRATORY SYNDROME (SARS): FROM CHINA TO TORONTO A
previously unknown respiratory disease began ailing people in the southern Chinese province Guangdong in late 2002. It spread rapidly across Asia and around the world, causing severe acute respiratory syndrome (SARS). This epidemic was the first major
infectious disease epidemic of the 21st century and forced the need to reshape understanding of public health as global instead of national. The story of SARS clearly demonstrates the impact of urbanization and globalization on emerging infectious diseases. It also
demonstrates how unprepared public health infrastructure can prolong an epidemic. By the end of 2003, all cases worldwide had been treated and the epidemic was over. Probable SARS cases were identified in 8,096 people worldwide and infection resulted in 774
deaths. On May 20, 2003, the World Health Organization (WHO) published a status report on the epidemic. This excerpt describes how the epidemic was started, which was an important discovery for breaking the transmission cycle. Excerpt below from: Severe
acute respiratory syndrome (SARS): Status of the outbreak and lessons for the immediate future SARS: a puzzling and difficult new disease SARS is the first severe and readily trans- missible new disease to emerge in the 21st century. Though much about the disease
remains poorly understood and frankly puzzling, SARS has shown a clear capacity for spread along the routes of internation- al air travel. At present, the outbreaks of greatest concern are concentrated in trans- portation hubs or spreading in densely populated areas.
WHO regards every coun- try with an international airport, or border- ing an area having recent local transmis- sion, as at potential risk of an outbreak. The first cases of SARS are now known to have emerged in mid-November 2002 in Guangdong Province, China.
The first official report of an outbreak of atypical pneumonia in the province, said to have affected 305 persons and caused 5 deaths, was received by WHO on 11 February. Around 30% of cases were reported to occur in health care workers. Confirmation that cases
were consistent with the defi- nition of SARS was made after permission was granted, on 2 April, for a WHO team to visit the province. In the meantime, SARS was carried out of Guangdong Province on 21 February by an infected medical doctor who had treated
patients in his home town. He brought the virus to the ninth floor of a four-star hotel in Hong Kong. Days later, guests and visi- tors to the hotel’s ninth floor had seeded outbreaks of cases in the hospital systems of Hong Kong, Viet Nam, and Singapore.
Simultaneously, the disease began spread- ing around the world along international air travel routes as guests at the hotel flew home to Toronto and elsewhere, and as other medical doctors who had treated the earliest cases in Viet Nam and Singapore travelled
internationally for medical or other reasons. When the disease moved out of southern China, the outbreaks it seeded — in Hanoi, Hong Kong, Singapore, and Toronto — became the initial “hot zones” of SARS, characterized by rapid increases in the number of cases,
especially in health care workers and their close contacts. In these areas, SARS first took root in hospital settings, where staff, unaware that a new disease had surfaced and fighti ng to save the lives of patients, exposed themselves to the infectious agent without
barrier pro- tection. All of these initial outbreaks were subsequently characterized by chains of secondary transmission outside the health care environment. By 15 March, WHO had received reports of more than 150 cases of a new disease, which it named severe
acute respiratory syndrome. Epidemiological analysis indi- cated that the new disease was spreading along the routes of international air travel. WHO immediately issued emergency travel recommendations to alert health authori- ties, physicians, and the travelling
public to what was now perceived to be a worldwide threat to health. The global alert achieved its purpose. After the recommendations, all countries with imported cases, with the exception of provinces in China, were able, through prompt detection of cases, imme-
diate isolation, strict infection control, and vigorous contact tracing, to either prevent Page 4 4 further transmission or keep the number of additiona l cases very low. During the last week of April, the out- breaks in Hanoi, Hong Kong, Singapore, and Toronto showed
some signs of peak- ing. On 28 April, Viet Nam became the first country to stop local transmission of SARS. However, new probable cases, including cases in hospital staff, additional deaths, and first cases imported to new areas continued to be reported from several
countries. The cumulative total number of cases surpassed 5,000 on 28 April, 6,000 on 2 May, and 7,000 on 8 May, when cases were reported from 30 countries on six continents. At present, most new cases are being reported from Beijing and, increas- ingly, other
parts of mainland China. Of the cumulative global total of 7761 probable cases and 623 deaths reported on 17 May, 5209 cases and 282 deaths had occurred in mainland China. Also of concern is a rapidly growing outbreak in Taiwan, China, with a cumulative total,
on 18 May, of 344 cases, including many in hospital staff, and 40 deaths. [End excerpt] Later, it was discovered that the virus arose from exposure to and between wild animals in wet markets in Guangdong. China has experienced rapid urbanization and
industrialization in recent decades, leading to the formation of a young, wealthy class. The new class seeks to eat exotic wild animals, which has encouraged the growth of wet markets where live animals are kept and sold. China has less wild expanse close to the city
in which to hunt so many people hunt animals in Thailand and other countries. Many different animals who would stay far away from each other in the wild are kept in very close contact in transit and at these markets. It is hypothesized that SARS originated in
horseshoe bats and jumped to other animals nearby, particularly palm civets, a type of wild cat. Because people who sell their catches at the wet markets also live there, the virus has the opportunity to jump from animal to animal to humans. SARS spread to Toronto
in late February 2003, prompting the WHO to issue a global alert on March 12 and elevate the alert on March 15. The events in Toronto clearly demonstrate issues regarding protections for healthcare workers and isolation precautions in an emerging disease event.
Excerpt below from: Learning from SARS: Preparing for the Next Disease Outbreak Phase I of the Toronto SARS Outbreak The index case and her husband had vacationed in Hong Kong and had stayed at a hotel in Kowloon from February 18 to 21, 2003. The index
case began to experience symptoms after her return on February 23 and died at home on March 5. During her illness, family members, including her son (case A), provided care at home. Case A became ill on February 27 and presented to the index hospital on March
7 (Varia et al., 2003). Nosocomial transmission in the hospital began when case A presented to the emer- gency department on March 7 with severe respiratory symptoms. He was placed in a general observation area of the emergency department and received
nebulized salbu- tamol. During this time, SARS was trans- mitted to two other patients in the emer- gency department (cases B and C). Case B, who had presented with rapid atrial fibrillation, was in the bed adjacent to case A, about 1.5 meters away and separated by
a curtain, and was discharged home after 9 hours in the emergency department. Case C, who had presented with shortness of breath secondary to a pleural effusion, was three beds (about 5 meters) away from case A and was transferred to a hos- pital ward and later
discharged home on March 10. The three patients were cared for by the same nurse. Case A was transferred briefly to a medical unit, then to the intensive care unit (ICU) 18 hours after his presentation to the emer- gency department. Three hours later, he was placed
in airborne isolation because tuberculosis was included in his differential diagnosis. Contact and droplet precau- tions were implemented on March 10 by ICU staff caring for case A, and the patient remained in isolation until his death, on March 13. Case A’s family
visited him in the ICU on March 8, 9, and 10. During this time, some family members were febrile, and two were experiencing respiratory symp- toms. Chest radiographs were taken of the family members on March 9 and again on March 11. Four members had
abnormal radiographs and were instructed to wear masks at all times, wash their hands upon entering and leaving the ICU, and limit their visits to the ICU. Page 5 5 On March 12, the WHO alerted the global community to a severe respiratory syn- drome that was
spreading among HCWs in Hanoi, Vietnam, and Hong Kong. The alert was forwarded to infectious disease and emergency department physicians in Toronto. The following day, case A died and it became clear that several other family members had worsening
illness. The clinicians involved and the local public health unit suspected the family’s illness- es might be linked to cases of atypical pneumonia reported in Hong Kong. Four family members were admitted to three different hospitals on March 13, and anoth- er
family member was admitted to hospi- tal on March 14. All were managed using airborne, droplet, and contact precautions. No further transmission from these cases occurred after admission to hospital. Case B became febrile on March 10, 3 days after exposure to
case A in the emergency department and discharge home. Respi- ratory symptoms evolved over the next 5 days. He was brought to the index hospital on March 16 by two Emergency Medical Services paramedics, who did not immedi- ately use contact and droplet
precautions. After 9 hours in the emergency depart- ment, where airborne, contact and droplet precautions were used, case B was trans- ferred to an isolation room in the ICU. His wife became ill on March 16. She was in the emergency department with case B on
March 16 (no precautions used) and visited him in the ICU on March 21 (precautions used); he died later that day. The infection also spread to three other members of case B’s family. SARS developed in a num- ber of people who were in contact with case B and his
wife on March 16, including the 2 paramedics who brought him to the hospital, a firefighter, 5 emergency department staff, 1 other hospital staff, 2 patients in the emergency department, 1 housekeeper who worked in the emergency department while case B was
there, and 7 visitors who were also in the emergency department at the same time as case B (symptom onset March 19 to 26). The 16 hospital staff, visitors, and patients trans- mitted the infection to 8 household mem- bers and 8 other family contacts. In the ICU,
intubation for mechanical ventilation of case B was performed by a physician wearing a surgical mask, gown and gloves. He subsequently acquired SARS and transmitted the infection to a member of his family. Three ICU nurses who were present at the intubation
and who used droplet and contact precautions had onset of early symptoms between March 18 and 20. One transmitted the infection to a household member. Case C became ill on March 13 with symptoms of a myocardial infarction and was brought to the index
hospital by paramedics. It was unknown that he had been in contact with case A on March 7, and thus it was not recognized that he had SARS. As a result, he was not isolated, and other precautions were not used. He was admitted to the coronary care unit (CCU) for
3 days and then trans- ferred to another hospital for renal dialysis. He remained in the other hospital until his death, on March 29. Subsequent transmis- sion of SARS occurred within that hospital (Dwosh et al., 2003). Case C’s wife became ill on March 26. At the
index hospital, case C transmitted SARS to 1 patient in the emergency department, 3 emergency department staff, 1 housekeeper who worked in the emergency department while case C was there, 1 physician, 2 hos- pital technologists, 2 CCU, patients, and 7 CCU
staff. One of the paramedics who transported case C to the index hospital also became ill. Further transmission then occurred from ill staff at the index hospital to 6 of their family members, 1 patient, 1 medical clinic staff, and 1 other nurse in the emergency
department. On March 23, 2003, officials recognized that the number of available negative pres- sure rooms in Toronto was being exhaust- ed. In a 4-hour period on the afternoon of March 23, staff at West Park Hospital, a chronic care facility in the city, recom-
missioned 25 beds in an unused building formerly used to house patients with tuberculosis. Despite the efforts of West Park physicians and nurses, and assistance from staff at the Scarborough Grace and Mount Sinai Hospitals, qualified staff could be found to care
for only 14 patients. Faced with increasing transmission, the Ontario government designated SARS as Page 6 a reportable, communicable, and virulent disease under the Health Protection and Promotion Act on March 25, 2003. This move gave public health officials
the authority to track infected people, and issue orders preventing them from engag- ing in activities that might transmit the new disease. Provincial public health activated its emergency operations center. By the evening of March 26, 2003, the West Park unit and
all available negative pressure rooms in Toronto hospitals were full; however, 10 ill Scarborough Hospital staff needing admissions were waiting in the emergency department, and others who were ill were waiting at home to be seen. Overnight, with the declaration
of a provincial emergency, the Ontario govern- ment required all hospitals to create units to care for SARS patients. By March 25, 2003, Health Canada was reporting 19 cases of SARS in Canada — 18 in Ontario and the single case in Van- couver. But 48 patients with
a presumptive diagnosis of SARS had in fact been admitted to hospital by the end of that day. Many more individuals were starting to feel symptoms, and would subsequently be identified as SARS patients. Epidem- ic curves later showed that this period was the
peak of the outbreak. On March 19, nine Canadians developed “probable” SARS, the highest single-day total. Taking “suspect” and “probable” cases together, the peak was March 26, and the 3 days, March 25 to 27 are the highest 3-day period in the outbreak. The
Ontario government declared SARS a provincial emergency on March 26, 2003. Under the Emergency Management Act, the government has the power to direct and control local governments and facili- ties to ensure that necessary services are provided. All
hospitals in the Greater Toronto Area (GTA) and Simcoe County were ordered to activate their “Code Orange” emergency plans by the government. “Code Orange” meant that the involved hospitals suspend- ed nonessential services. They were also required to limit
visitors, create isolation units for potential SARS patients, and implement protective clothing for exposed staff (i.e., gowns, masks, and goggles). Four days later, provincial officials extended access restrictions to all Ontario hospitals. On May 14, 2003, WHO
removed Toron- to from the list of areas with recent local transmission. This was widely understood to mean that the outbreak had come to an end. Consistent with the notion that the disease was contained, the government of Ontario lifted the emergency on May 17.
Directives continued to reinforce the need for enhanced infection control practices in health care settings. Code Orange status for hospitals was revoked. It appeared that the total number of cases had reached a plateau — 140 probable and 178 suspect infections.
Twenty-four Cana- dians had died, all in Ontario. [End of excerpt] In mid-May of 2003, after hospitals had discontin- ued SARS precautions, five patients in a Toronto rehabilitation hospital reported with febrile illness. Two of these patients were found to have been
hospitalized at North York General Hospital, where a subsequent investigation of pneumonia cases identified eight previously unrecognized SARS cases. The first patient in this second transmis- sion apparently had no history of contact with a SARS patient or
healthcare worker with SARS. The hospital was closed to new admissions on May 23, and infection control directives increased required protections. The second transmission of SARS in Toronto came to an end in June with 79 new SARS cases. Guangdong Province,
China NOV. 2002 Toronto, Canada FEB. 2003 Page 7 7 [Excerpt continued] Transmission The SCoV has been isolated in sputum, nasal secretions, serum, feces, and bronchi- al washings (Drosten et al., 2003; Peiris et al., 2003b). Evidence suggests that SCoV is
transmitted via contact and/or droplets (Peiris et al., 2003a; Poutanen et al., 2003) and that the use of any mask (surgical or N95) significantly decreases the risk of infection (Seto et al., 2003). However, there are cases that defy explanation based on these modes of
transmission suggesting that alternative modes of transmission may also occur (Varia et al., 2003). SCoV remains viable in feces for days and the outbreak at the Amoy Gardens apartments highlights the possibility of an oral-fecal or fecal-droplet mode of
transmission (WHO, 2003m,n). A number of cases occurred in HCWs wearing protective equipment following exposure to high risk aerosol- and drop- let-generating procedures such as airway manipulation, administration of aerosolized medications, noninvasive
positive pressure ventilation, and bronchoscopy or intuba- tion (Lee et al., 2003; Ofner et al., 2003). When intubation is necessary, measures should be taken to reduce unnecessary exposure to health care workers, includ- ing reducing the number of health care
workers present and adequately sedating or paralyzing the patient to reduce cough. Updated interim infection control pre- cautions for patients who have SARS are under development and will be available from CDC at http://www.cdc.gov/ncidod/ sars/index.htm.
Currently, epidemiological evidence sug- gests that transmission does not occur prior to the onset of symptoms or after symptom resolution. Despite this, shedding of SCoV in stool has been documented by reverse-transcription polymerase chain reaction (RT-PCR)
for up to 64 days fol- lowing the resolution of symptoms (Ren et al., 2003). A small group of patients appear to be highly infectious and have been referred to as superspreaders (CDC, 2003a). Such superspreaders appear to have played an important role early in the
epidemic but the reason for their enhanced infectivity remains unclear. Possible explanations for their enhanced infectivity include the lack of early implementation of infection control precautions, higher load of SCoV, or larger amounts of respiratory secretions.
[End of excerpt] The spread of SARS in Toronto was exacerbated due to delayed public health authority action in recognizing the outbreak, declaring an emergency, and tracing and isolating contacts. The complete lack of protections provided to healthcare work- ers
and the late use of precautionary isolation of potential cases presenting with respiratory illness meant that many healthcare workers became infected and continued to infect others before they sickened. Note in the section discussing transmission, no description is
given of the type of personal protective equipment (PPE) that healthcare workers wore during the high risk aerosol-generating procedures. Clearly, it was not protective enough. The outbreak was successfully contained after measures were taken to isolate cases and
provide protection for healthcare workers. The outbreak in Toronto extended into the second phase because of the lack of integration of new information about SARS. Eight pneumonia cases were later identified as SARS cases, only after patients had had contact

with others. Page 8 8 URBANIZATION AND SARS The pace of urbanization has increased significantly in the last century . Only 20% of the
Trends towards urbanization are expected to increase
world’s pop- ulation lived in cities about 100 years ago. in all countries from 45% in 1995 to 61% in 2030.

Urban infrastructure has lagged and many cities host dense regions of people living in behind

crowd- ed slums with limited fresh water, sanitation, and healthcare access
, . The United Nations (UN) pre- dicts that

“slums will become the dominant urban form within the next 15 years People living in these .”

slums do and will continue to be dispropor- tionately affected by infectious diseases through
more exposure to pathogens and vectors and less availability of healthcare and
prophylaxis than their wealthier counterparts The destruction of environment . to create cities and even in rural areas

increases the contact between humans and animals This can accelerate the introduction .

of new zoonotic diseases , like SARS, into humans. More than 60% of the 335 emerging infectious diseases identified between 1940 and 2004 have been zoonotic. Living in close contact with wild or domesticated
animals, hunting, killing, or preparing food can be risk factors for an emerging disease to jump species to humans. Close contact between bats and primates in particular is thought to be a significant risk factor. This kind of new and close contact between different
species was seen in China’s wet markets where SARS emerged. Food handlers at the wet market in Guangdong were found to be disproportionately affected by SARS early in the epidemic. Urbanization of China forced hunters to travel to new places, bringing
different animals back to small areas in wet markets waiting for sale. A key fact to dealing with the SARS epidemic was recognizing that a significant proportion of the initial illnesses occurred in food handlers catching, selling, and killing wild animals.
Understanding how diseases are introduced into the population is critical to controlling ongoing epidemics and to preventing outbreaks from progressing to epidemics.
GLOBALIZATION AND SARS
The spread of SARS from Singapore and Hong Kong to Toronto served as a wake-up call for many about how connected the world had become. Historically, infectious disease outbreaks were geographically confined. International shipping transported some diseases like cholera and the technological developments of the Industrial Revolution like the steam engine and the railroad allowed diseases to be transported more quickly. International airplane travel significantly decreased the amount of time it takes to get from one place to another, allowing not-yet-symptomatic people incubating a disease to travel to a new place before they even know they are sick. International tourist arrivals have exploded from 25 million in 1950 to more than one billion in 2013. Author Sonia Shah chronicles the
development of several different infectious diseases over the past two centuries. In her book, Pandemic: Tracking Contagions, from Cholera to Ebola and Beyond, she describes the effect of increased global travel: [People] don't just fly in and out of a handful of prominent airports in major cities, but into and out of tens of thousands of airports in small towns and minor cities in even the most remote and far-flung nations. There are some fifteen thousand airports in the United States, but not only that: there are also more than two hundred in the Democratic Republic of Congo, one hundred in Thailand, and, as of 2013, nearly five hundred in China. New York City is no longer the center of today's global transportation network, of course. The hub has shifted. Of the ten largest and busiest airports in the world, nine are in Asia, seven in China alone. And just as the United States' gateway to the world was once New York City, China's gateway to the world is Hong Kong, where more cargo — both visible and invisible — is loaded onto airplanes than anywhere else.
Increased globalization enabled the spread of SARS to Toronto from China and later we will see how globalization contributes to other epidemics. A globalized, integrated public health system is needed to protect all people's health in our interconnected, modern world.
[SECTION II] ZIKA: AN EMERGING EPIDEMIC IN PROGRESS
Note: The info on Zika is current as of the time of writing. As it is an ongoing epidemic, new
information may emerge in the coming months. Zika virus is a positive sense, single-stranded RNA virus in the same family of mosquito-borne arboviruses as yellow fever, dengue, West Nile virus, and encephalitis. Zika virus was first isolated in 1947 during surveillance of diseases in macaques in Uganda. The first documented human infection was in 1954, and the virus spread slowly through sub-Saharan Africa to Asia by the 21st century. Outbreaks of Zika have been identified only in recent years: Yap Island in the Federated States of Micronesia in 2007, French Polynesia in 2013, and Brazil spreading to other parts of Latin America in 2015-16. Information about Zika virus is limited, including symptoms, length of viremia, transmission, and potential neurological complications. A significant amount of data has become available during the 2013 and 2015-16 epidemics. Prior to the first recorded outbreak in 2009, only 25 papers were published in peer-reviewed literature compared to 225 in the first three months of 2016 alone. However, few definitive answers have been reached. Zika virus has shown an extremely unusual propensity for multiple transmission pathways. Initially, it was thought that Zika was transmitted only by mosquitoes from human to human and possibly monkey to human. Now, there is evidence that Zika is sexually transmitted, transmitted through blood transfusions, and from a mother to her baby during pregnancy and birth. Additionally, there is one case where Zika was transmitted through bodily fluids from a patient with extremely high levels of Zika virus in his body. Zika virus has been found in various bodily fluids, including saliva, urine, breast milk, the female genital tract, and semen. Viral particles appear to remain in semen for at least 90 days and the female genital tract for at least 14 days.
Mosquito-borne transmission of Zika virus continues to be the pathway of most concern in stopping the epidemic. Aedes mosquitoes have adapted to live near humans, requiring only the smallest amount of still water to reproduce. They are active and biting during the day, unlike other mosquito species that only feed at night. In the United States, the Aedes mosquitoes were nearly eradicated in the 1970s through pesticide application, but they have made comebacks in some areas after pesticide use has declined or ceased. Urban poverty in Brazil has created "the perfect set of conditions for the transmission of such mosquito-borne viruses." The lack of infrastructure and water security in conjunction with crowding and poor housing conditions has created a situation where an abundance of breeding grounds exists in close proximity to living quarters where residents also have limited access to prevention like bug spray and air conditioning as well as limited healthcare access. These same conditions exist in many places in the United States that are vulnerable to mosquito-borne disease outbreaks, including Florida, Texas, and other Gulf Coast states. While the Aedes species is the confirmed Zika vector, some suspect other species may have adapted to carry Zika virus, which would help explain the sudden widespread nature of the Brazil outbreak as compared to previous progression of the disease. Global travel has also accelerated the spread of Zika virus.
Symptoms of Zika virus infection are typically mild and self-limiting and include fever, itchy maculopapular rash, joint pain, and conjunctivitis. The case definition of Zika virus disease has evolved during the 2015-16 epidemic from two symptoms with exposure to just one symptom with exposure. Symptoms last a few days to a week; severe illness and death are rare. The incubation period is estimated to be between three and twelve days. Up to 80% of people infected with the virus have no symptoms. When diagnostic assays are of limited availability, as they have been in the Zika epidemic, establishing a reliable and consistent case definition is crucial for treatment and prevention of further spread.
[Infographic: fever, conjunctivitis, rash, joint pain; only 1 out of 5 people develop symptoms]
ZIKA: INFORMATION EVOLVES DURING EPIDEMICS
The most recent two outbreaks in French Polynesia and Brazil have brought to light potential neurological complications geographically and temporally associated with Zika virus infections. The 2013 outbreak in French Polynesia was accompanied by a "concomitant epidemic of 73 cases of Guillain-Barré syndrome and other neurological conditions in a population of approximately 270,000." Guillain-Barré Syndrome (GBS) is a rare auto-immune disorder that results in damaged nerve cells, weakened muscles, and paralysis. Most people recover from GBS, but some suffer permanent damage or death. The most recent outbreak that began in Brazil in 2015 has been accompanied by "an apparent 20-fold increase in incidence from 2014 to 2015" in microcephaly rates.

Microcephaly (head smaller than average) has been seen in infants born to women infected with Zika virus during pregnancy and is related to developmental delay, intellectual disability, vision problems, and other effects.
CLIMATE CHANGE AND ARBOVIRUSES
The rapid spread of Zika through Latin American countries should serve as an exposition of the disastrous effects of climate change and the interactive effect with poverty on infectious disease. Climate change has been influencing weather patterns all over the globe, making them less predictable and weather more severe. The 2015 El Niño, "which is characterized by warming waters in the central and eastern Pacific Ocean," has brought "warmer temperatures and shifting precipitation patterns" to South America and "can create conditions that help mosquito populations, and the diseases they can transmit, thrive." Massive flooding in parts of Uruguay, southern Brazil, and Paraguay in recent months has displaced 150,000 people and led to standing water, providing breeding ground for mosquitoes, and disrupted living situations and access to healthcare, water, and other vital services. On the other hand, northern Brazil, Venezuela, Guyana, and Suriname have had drier than usual weather. Because these areas lack a consistent water supply, many people have begun stockpiling water, creating mosquito breeding grounds near human dwellings. Additionally, 2015 was the hottest year on record; these warmer temperatures may mean mosquitoes are more active, reproducing more, and biting more, therefore infecting more people. Diseases carried by mosquitoes are particularly sensitive to meteorological conditions — warmer temperatures increase mosquito reproduction and biting activity and the rate at which pathogens mature inside them. Temperature also limits the range of mosquito vectors. Freezing kills Aedes larvae and eggs. As the earth warms, fewer places will freeze over completely and Aedes vectors will increase their territory and spread infectious diseases to new places. Fossil evidence from the end of the last Ice Age "demonstrate[s] that rapid, poleward shifts of insects accompanied warming."
POVERTY AND INFECTIOUS DISEASES
Many of the areas where Zika has been the biggest problem are also the poorest areas of Brazil. The low quality housing, lacking screens and air conditioning that help prevent exposure to mosquitoes, in addition to no reliable water or waste disposal systems, creates situations where breeding grounds abound in urban, crowded areas. Brazil eradicated Aedes mosquitoes in 1958 through coordinated efforts and funding. However, over the years, the mosquitoes have returned and multiplied. Not only do residents in these areas have a higher risk for contracting Zika, they will also experience more challenges if they develop GBS or give birth to a baby with microcephaly. Lack of resources compounds the ramifications of disability. We see similar conditions in the United States in areas where Aedes mosquitoes are common, like Florida and Texas. These states have large impoverished populations, a warm climate, and did not expand Medicaid. Not only are the mosquitoes present, which increases the risk for transmitting the disease from a returning infected traveler, the housing stock in some areas is dilapidated, missing screens and air conditioning, and trash is abandoned to become breeding grounds for mosquitoes after rainfall. The mortgage foreclosure crisis hit Florida especially hard, where many houses remain empty, creating mosquito breeding territory. Further, if people begin to be infected locally and develop GBS or microcephaly, their access to healthcare is extremely limited due to their states' limitations on Medicaid.
T All People
C/I---1AR
Canada only covers 97 percent of the population and is the gold standard for the
topic
Carolyn Hughes Tuohy 9, University of Toronto, Single Payers, Multiple Systems: The Scope and Limits of Subnational Variation under a Federal Health Policy Framework, Journal of Health Politics, Policy and Law, Vol. 34, No. 4, August 2009, DOI 10.1215/03616878-2009-011
The history of the term is somewhat murky. The concept, if not the term, was introduced into the health policy arsenal by U.S. health care reformers explicitly advocating for a Canadian-style system in the health reform battles of the 1970s. These advocates argued that the Canadian model
represented the best route for the United States, given the great similarities in the delivery
systems of the two countries. The fact that Canada retained this delivery system while adopting
publicly funded universal health insurance suggested that the United States could take a
similar route (see, for example, Lee 1974; Andreopoulos 1975). The term “single-payer,” however, does not appear to have entered the lexicon
until the early 1990s. In his 2001 memoir, Caspar Weinberger (Weinberger and Roberts 2001) uses the term single-payer to refer to Senator Ted
Kennedy’s 1974 proposal, but Weinberger appears to have adopted the term retrospectively. A search of online databases for the period 1970 –2008
turns up nothing before 1990, when it was used very differently to describe a proposal in New York state to establish a “single payer” as a public agency
to process all claims for a common basic package to be carried by existing private insurers (Beauchamp and Rouse 1990)—in apparent contrast to the
“all-payer” systems of rate regulation that had been attempted in the past. The
term seems to have appeared first in its
current usage in a now-classic 1991 study by Stephanie Woolhandler and David Himmelstein (1991) that compared
administrative costs in the United States and Canadian health care systems, contrasting the costs incurred
by both providers and insurers in the competitive market of multiple insurers in the United States with the relatively streamlined Canadian “single-
payer” system. The term was quickly picked up and extensively used in an influential report on the Canadian system by the General Accounting Office
of the U.S. Congress (United States General Accounting Office 1991). The group Physicians for a National Health Program, of which Woolhandler and
Himmelstein were leading members, also used the term in their advocacy for a Canadian-style program, as did Senator Paul Wellstone in his proposed
American Health Security Act in 1992 (Wellstone and Shaffer 1993). The use of the term broadened in the course of this process and in the following
years. Its initial focus on streamlined administration gave way to a broader mapping onto the concept of the marriage of a U.S.-style delivery system
with universal public coverage, and often beyond that to any system funded through general taxation

NHI is distinct from universal coverage – our interpretation preserves integrity of distinct terminology in the field
Green Bae Park 14 Ewha Womans University, South Korea Changing patterns of terminology
related to universal health coverage
http://healthsystemsresearch.org/hsr2014/sites/default/files/Poster-Presentations.pdf
Background: Implementation of universal health coverage (UHC) has become a public health priority and is the focus of a United Nations resolution passed in 2012. However, terminology related to this topic is varied and often has multiple, and sometimes conflicting, meanings. Building consistency and consensus in this field first requires understanding of the terminology, definitions and patterns related to use in previous literature and case studies. Methods: A citation analysis and systematic literature review using search engines including PubMed, Google Scholar, and Scopus, were conducted to select influential publications related to UHC. Results: Our search produced a database of 363 articles on UHC. When we ranked articles based on citation frequencies by every decade since 1980, the number of articles that were related to UHC increased exponentially recently, especially from 2010. Our systematic literature review showed that recent discussion of UHC broadened the global health agenda from a disease-based approach to a health system-based approach. At the same time, both developed and developing countries are included in its application. In terms of definitions, the meaning of the terminology evolved over the past two centuries, resulting in greater distinction between terms such as 'national health insurance' and 'universal coverage.' Additionally, the frequency in which terms related to UHC were used in peer-reviewed publications greatly increased beginning in the decade of 2000 and the term 'national health insurance' was most frequently used in all decades. Conclusion: Literature over the past 60 years focuses heavily on insurance funding as opposed to equity of access. However, changes related to the way certain terms are defined and their frequency of use suggests a recent paradigm shift. To achieve improved health equity and rights, the authors advocate for use of a definition that addresses all aspects of access, such as the World Health Organization's definition.

Can exempt entire groups


Jeff Stein 8/25/17 VOX Republicans’ coming 2018 attack: single-payer would “abolish the VA”
It doesn't — at least as Bernie Sanders and Democrats imagine it now.
https://www.vox.com/policy-and-politics/2017/8/25/16200006/single-payer-republicans-2018
Rep. John Conyers's (D-MI) single-payer bill, which more than 60 percent of House Democrats have now co-sponsored, includes a provision explicitly stating that the VA and veterans' benefits will "remain independent" of the new single-payer system, at least in the first decade. At the end of those 10 years, Congress will then "reevaluate whether such programs shall remain independent or be integrated into the Medicare For All Program," the bill states. In the Senate, Sanders is expected to release a new version of his single-payer bill in early September. But the 2013 version of his single-payer proposal — the only existing single-payer bill in the Senate — says "nothing in this Act shall affect the eligibility of veterans for the medical benefits and services" provided through the VA. An aide to Sanders confirmed that the provision exempting the VA from the new single-payer insurer will remain in the upcoming bill. "We leave the VA and Indian Health Service completely intact," said Josh Miller-Lewis, Sanders's spokesperson, in an interview. "This attack is a total misrepresentation of our bill and of Rep. Conyers' bill."

National Health Insurance is a spectrum of coverage – the perm is still national health insurance – their interpretation adds a modifier – there is no 'comprehensive' or 'universal' in the resolution
Michael Jackonis, Jr 4 Judge Advocate General's Corps, U.S. Navy (B.A., University of
Virginia, 1986; J.D., Marshall-Wythe School of Law of the College of William and Mary, 1994;
LL.M., George Washington University Law School, 2003) currently serves as the Assistant Staff
Judge Advocate (Health Law and Policy) for the Surgeon General of the Navy, and Chief, Bureau
of Medicine and Surgery. 13 Ann. Health L. 179 ARTICLE: Considerations in Medicare Reform:
The Impact of Medicare Preemption on State Laws
A. The Issue of National Health Insurance and the Development of Medicare
The implementation of a national health insurance program for the elderly
resulted from a compromise in the twentieth century political movement for
comprehensive national health insurance. 21 To appreciate the current role that Medicare plays in the
American health insurance system, it is necessary to have an understanding of the development and failures of the health insurance
market leading up to Medicare's enactment. Government's increasing role in health care resulted from issues of increased costs,
advances in medicine as well as social concerns for greater equity and the general welfare in light of the challenge of allocating scarce
resources. 22 Historically, there was no insurance for hospitalization until 1929, when Baylor Hospital offered a prepaid plan to
Dallas schoolteachers. 23 Blue Cross plans followed this development by initially offering prepaid service benefits in a single facility;
this evolved into "free choice" plans with access to several local facilities. 24 These Blue Cross plans were designed to protect
hospital income by controlling the payment system and were assisted by state-enabling legislation that included tax exemptions. 25
Commercial insurers eventually recognized a market for health insurance and developed cash indemnity plans in the 1930s that
grew rapidly during the Second World War, aided by exemptions for health benefits from wage stabilization measures and the
buying power of group purchasing. 26 Commercial insurers, who could rely on experience rating to define healthier beneficiary
pools, soon had an advantage over Blue Cross and Blue Shield plans that relied on the less-healthy community rating premiums. 27
[*184] Reacting to the shortfalls of this free-market approach that left many uninsured, the first unsuccessful attempts at legislating
comprehensive health insurance were made at the state level. 28 The federal Social Security legislation of the New Deal
conspicuously lacked any health insurance provisions due to the opposition of organized medicine. 29 Subsequently, President
Truman advocated comprehensive national health insurance, but by the early 1950s the
Democratic efforts had shifted to obtaining catastrophic coverage only, 30 and
eventually focused on coverage for the aged (as the most vulnerable population) by the end of the
decade. 31 Opposition by organized medicine continued, and it was not until President Johnson's landslide victory in the 1964
election that there was enough political power to enact Medicare. 32 What resulted as the original Medicare program was a richly
complex, federally-defined benefit program with detailed entitlements, regulations, and payment methodologies. 33 In effect, the
mechanics of a health insurance plan were codified into federal statutes. 34 Although aspects of Medicare have changed over the
years, "its basic character, design, and structure have remained stable." 35 Medicare consists of three general sections: Part A, which
covers inpatient hospital services; Part B, which covers physician services; and Part C, which currently provides for the
Medicare+Choice (M+C) managed care option. 36 Medicare sets forth minimum coverage requirements for both inpatient and
outpatient treatment, as well as skilled nursing facility, hospice, and home health care. 37 The program provides for specific rights
concerning covered benefits requiring adequate notice of coverage denials; it also delineates appeal procedures to include judicial
review after final agency action on a [*185] set timeframe. 38 Moreover, it sets forth extensive regulations of safety standards,
payment procedures, fiscal administration, provider participation, and peer review. 39 The extent of these regulations reflects the
vast and complex environment of beneficiaries, providers, and administrators in which the Medicare programs operate. 40
