Surveillance is not the only process that can provide information to inform public health action. The results
of research can also inform public health action. Sometimes the distinction between surveillance,
particularly surveillance of (or for) emerging infections, and research can be difficult to define. This can
have important implications, particularly where different ethical, legal (usually data protection) and funding
rules apply to research compared to surveillance. As a general rule, the key distinction is that surveillance
should always be justified by, and seen as an integral component of, ongoing established prevention and
control programmes.
There are a number of other dimensions on which research and surveillance can be compared, as
outlined in Table 1. Although the distinction between research and surveillance is not always clear-cut
for some of the criteria proposed in the table, and sometimes research or surveillance will not fit all of
the criteria suggested, the table provides a framework for assessing whether a problem should be
addressed through surveillance or research.
Table 1. Comparison of research and surveillance

Research:
- Is based on a hypothesis
- Is based on a scientifically valid sample
- Responsibility to act on findings is not necessarily clear
- Findings may result in the application of new interventions
- Findings are publishable

Surveillance: (the surveillance column of the original table is not recoverable from this copy)
Statistics assumes that your data points (the numbers in your list) are clustered around some central
value. The "box" in the box-and-whisker plot contains, and thereby highlights, the middle half of these
data points.
To create a box-and-whisker plot, you start by ordering your data (putting the values in numerical order), if
they aren't ordered already. Then you find the median of your data. The median divides the data into two
halves. To divide the data into quarters, you then find the medians of these two halves. Note: If you have
an even number of values, so the first median was the average of the two middle values, then you include
the middle values in your sub-median computations. If you have an odd number of values, so the first
median was an actual data point, then you do not include that value in your sub-median computations.
That is, to find the sub-medians, you're only looking at the values that haven't yet been used.
You have three points: the first middle point (the median), and the middle points of the two halves (what I
call the "sub-medians"). These three points divide the entire data set into quarters, called "quartiles". The
top point of each quartile has a name, being a "Q" followed by the number of the quarter. So the top point
of the first quarter of the data points is "Q1", and so forth. Note that Q1 is also the middle number for the
first half of the list, Q2 is also the middle number for the whole list, Q3 is the middle number for the second
half of the list, and Q4 is the largest value in the list.
Once you have these three points, Q1, Q2, and Q3, you have all you need in order to draw a simple box-and-whisker plot. Here's an example of how it works.
5.1, 3.9, 4.5, 4.4, 4.9, 5.0, 4.7, 4.1, 4.6, 4.4, 4.3, 4.8, 4.4, 4.2, 4.5, 4.4, 4.3
3.9, 4.1, 4.2, 4.3, 4.3, 4.4, 4.4, 4.4, 4.4, 4.5, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1
The first number I need is the median of the entire set. Since there are seventeen values in this
list, I need the ninth value:
3.9, 4.1, 4.2, 4.3, 4.3, 4.4, 4.4, 4.4, 4.4, 4.5, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1
The median is Q2 = 4.4.
The next two numbers I need are the medians of the two halves. Since I used the "4.4" in the
middle of the list, I can't re-use it, so my two remaining data sets are:
3.9, 4.1, 4.2, 4.3, 4.3, 4.4, 4.4, 4.4 and 4.5, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1
The first half has eight values, so the median is the average of the middle two:
Q1 = (4.3 + 4.3)/2 = 4.3
The second half also has eight values, so its median is the average of its middle two:
Q3 = (4.7 + 4.8)/2 = 4.75
By the way, box-and-whisker plots don't have to be drawn horizontally as I did above; they can be vertical,
too.
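The quartile-finding procedure described above can be sketched as a short Python function. This is a minimal sketch following the exclude-the-median convention described in the Note; other quartile conventions exist and give slightly different values.

```python
def quartiles(data):
    """Q1, Q2, Q3 by the method described above: find the median,
    then take the medians of the two halves, excluding the overall
    median itself when the list has an odd number of values."""
    xs = sorted(data)
    n = len(xs)

    def median(vals):
        m = len(vals)
        if m % 2:
            return vals[m // 2]
        return (vals[m // 2 - 1] + vals[m // 2]) / 2

    q2 = median(xs)
    half = n // 2
    lower = xs[:half]
    upper = xs[half + 1:] if n % 2 else xs[half:]
    return median(lower), q2, median(upper)

# The seventeen-value data set from the worked example above:
data = [5.1, 3.9, 4.5, 4.4, 4.9, 5.0, 4.7, 4.1, 4.6,
        4.4, 4.3, 4.8, 4.4, 4.2, 4.5, 4.4, 4.3]
print(quartiles(data))  # (4.3, 4.4, 4.75)
```

The returned triple matches the Q1, Q2, and Q3 computed by hand in the example.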
Inter-rater reliability is useful because human observers will not necessarily interpret answers the same way;
raters may disagree as to how well certain responses or material demonstrate
knowledge of the construct or skill being assessed.
Example: Inter-rater reliability might be employed when different judges are
evaluating the degree to which art portfolios meet certain standards. Inter-rater
reliability is especially useful when judgments can be considered relatively
subjective. Thus, the use of this type of reliability would probably be more likely
when evaluating artwork as opposed to math problems.
Internal consistency reliability is a measure of reliability used to evaluate the degree
to which different test items that probe the same construct produce similar results.
Example: When designing a rubric for history, one could assess students' knowledge
across the discipline. If the measure can provide information that students are lacking
knowledge in a certain area, for instance the Civil Rights Movement, then that
assessment tool is providing meaningful information that can be used to improve the
course or program requirements.
5. Sampling Validity (similar to content validity) ensures that the measure covers the
broad range of areas within the concept under study. Not everything can be covered,
so items need to be sampled from all of the domains. This may need to be completed
using a panel of experts to ensure that the content area is adequately sampled.
Additionally, a panel can help limit expert bias (i.e. a test reflecting what an individual
personally feels are the most important or relevant areas).
Example: When designing an assessment of learning in the theatre department, it
would not be sufficient to only cover issues related to acting. Other areas of theatre
such as lighting, sound, functions of stage managers should all be included. The
assessment should reflect the content area in its entirety.
What are some ways to improve validity?
1. Make sure your goals and objectives are clearly defined and operationalized.
Expectations of students should be written down.
2. Match your assessment measure to your goals and objectives. Additionally, have
the test reviewed by faculty at other schools to obtain feedback from an outside
party who is less invested in the instrument.
3. Get students involved; have the students look over the assessment for
troublesome wording, or other difficulties.
4. If possible, compare your measure with other measures, or data that may be
available.
Also, a negatively skewed curve can consist entirely of positive numbers, and a positively skewed
curve can consist entirely of negative numbers. "Positive" and "negative" indicate the direction
of the curve's tail and the direction the numbers are moving on the x-axis.
Positive skew: the tail points in the positive direction. The numbers on the x-axis under the
tail are greater than the numbers under the hump; positively skewed curves do NOT necessarily
contain positive numbers.
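The sign convention above can be checked numerically. The sketch below (function and data are illustrative) uses the moment-based sample skewness, the mean of the cubed z-scores, and shows that data consisting entirely of negative numbers can still be positively skewed:

```python
def skewness(xs):
    """Moment-based sample skewness: mean of cubed z-scores.
    Positive when the tail points in the positive direction."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    return sum(((x - mean) / sd) ** 3 for x in xs) / n

# All-negative data with a tail toward the right (toward zero):
data = [-5, -5, -4, -4, -3, -1]
print(skewness(data) > 0)  # True
```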
What is a Correlation?
A baby boom is any period marked by a greatly increased birth rate. This
demographic phenomenon is usually ascribed within certain geographical bounds.
People born during such a period are often called baby boomers; however, some
experts distinguish between those born during such demographic baby booms and
those who identify with the overlapping cultural generations. Conventional wisdom
states that baby booms signify good times and periods of general economic growth
and stability; however, in circumstances where baby booms lead to a very large
number of children per family unit, as in lower-income regions of the world, the
outcome may be different. One well-known baby boom occurred immediately after
World War II.
15.
Stable momentum measures deviations between the stable age distribution implied by
the population's mortality and fertility and the stationary age distribution implied by
the population's death rates; and nonstable momentum measures deviations between the
observed population age distribution and the implied stable age distribution.
To understand the usefulness of stable and nonstable momentum, consider the case of a
population with unchanging vital rates. Over time, stable momentum remains constant as both
the stable age distribution and the stationary age distribution are unchanging. In this sense we
may consider stable momentum to be the permanent component of population momentum; it
persists as long as mortality and fertility do not change. In contrast, nonstable momentum in this
population gradually becomes weaker and eventually vanishes as the population's age
distribution conforms to the stable age distribution. In this sense we may consider nonstable
momentum to be the temporary or transitory component of population momentum. Of course,
most populations exhibit some year-to-year fluctuation in fertility and mortality, so in empirical
analyses we commonly observe concurrent changes in both the permanent and the temporary
components of momentum. Nevertheless, how overall momentum is composed and what part is
contributed by stable versus nonstable momentum have implications for future population
growth or decline.
In showing patterns over time in total population momentum, stable momentum, and nonstable
momentum, we pursue three distinct ends. First and most simply, we trace how momentum
dynamics have historically unfolded, not only across demographic transitions but also in the
midst of fertility swings and other demographic cycles. This is a straightforward task that has not
yet been undertaken. Second, we demonstrate some previously ignored empirical regularities of
the demographic transition, as it has occurred around the globe and at various times over the last
three centuries. Third, although population momentum is by definition a static measure, our
results suggest that momentum can also be considered a dynamic process. Across the
demographic transition, momentum typically increases and then decreases as survival first
improves and fertility rates later fall. This dynamic view of momentum is further supported by
trends in stable and nonstable momentum. A change in stable momentum induced by a change in
fertility will initiate a demographic chain reaction that affects nonstable momentum both
immediately and subsequently.
While death rates were declining, birth rates remained more or less stable, or at least they declined much
more slowly, so that year after year, for decades if not centuries, the number of births exceeded
the number of deaths by a substantial margin. In 1700 the population of Europe was an estimated
30 million. By 1900 it had more than quadrupled to 127 million (Livi-Bacci 2007). Europeans
also migrated to North America and Australia by the millions. The population continued to grow
despite this out-migration, since most of Europe did not experience substantial declines in the
number of children per woman until sometime in the late nineteenth or early twentieth century.
Fertility reached replacement in many parts of Europe around the mid-twentieth century, and
since then has fallen well below replacement in much of the continent.
Demographic transition has occurred much faster in the developing world than it did in Europe.
In 1950–55, for example, life expectancy at birth in India was about 38 years for both sexes
combined; 15 years later, life expectancy was nearly 47 (United Nations 2009b). Over the same
period in Kenya, life expectancy rose from 42 to 51 years, while in Mexico it rose from 51 to 60
(United Nations 2009b). This rapid mortality decline, brought about in part by technology
adopted from the West and accompanied initially by little or no decrease in fertility, led not to the
long period of steady population expansion that Europe experienced starting more than a century
earlier, but rather to rapid population growth, especially in the third quarter of the twentieth
century. Following World War II, developing countries grew at an average annual rate of more
than 2 percent, with some countries posting yearly population gains of more than 3 or even 4
percent, as in Ivory Coast, Jordan, and Libya (United Nations 2009b).
Unlike in Europe, rapid fertility decline often followed within just a few decades. Although much
of sub-Saharan Africa still has fertility well above replacement, most of the rest of the world
appears to have completed the demographic transition. Today every country in East Asia has sub-replacement fertility, and even in countries like Bangladesh and Indonesia, once the cause of
much hand-wringing among population-control advocates (Connelly 2008: 11, 305), fertility is
now barely above replacement (United Nations 2009b). The concept of a demographic transition
therefore describes developing-world experience about as well as it seems to have portrayed
earlier developed-world experience. The major differences between these two situations are the
speed of mortality decline, the speed of fertility decline, and, as has received most attention both
then and now, the rate of population growth. Today it is very unusual to see the kind of
population doubling times (in some cases less than 20 years) that were so alarming to
policymakers and scholars throughout the 1960s and 1970s (Ehrlich 1968).
16.
Study designs: which study designs are best for which studies?
17.
ROC analysis provides tools to select possibly optimal models and to discard suboptimal
ones independently from (and prior to specifying) the cost context or the class
distribution. ROC analysis is related in a direct and natural way to cost/benefit analysis of
diagnostic decision making.
The ROC curve was first developed by electrical engineers and radar engineers during
World War II for detecting enemy objects in battlefields and was soon introduced to
psychology to account for perceptual detection of stimuli. ROC analysis since then has
been used in medicine, radiology, biometrics, and other areas for many decades and is
increasingly used in machine learning and data mining research.
The ROC is also known as a relative operating characteristic curve, because it is a
comparison of two operating characteristics (TPR and FPR) as the criterion changes.[1]
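As a sketch of how an ROC curve is traced as the criterion changes, the hypothetical example below (function name, scores, and labels are all invented for illustration) sweeps a decision threshold over classifier scores and records the resulting (FPR, TPR) pairs:

```python
def roc_points(scores, labels):
    """Sweep a decision threshold over classifier scores and return
    the (FPR, TPR) pairs that make up an ROC curve. `labels` holds
    1 for actual positives and 0 for actual negatives."""
    thresholds = sorted(set(scores), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    points = [(0.0, 0.0)]  # the strictest criterion: call nothing positive
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    return points

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3]
labels = [1,   1,   0,   1,   0,    0,   0]
print(roc_points(scores, labels))
```

Loosening the criterion moves the operating point up and to the right, from (0, 0) to (1, 1); a curve hugging the top-left corner indicates a better-discriminating test.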
18.
Type I error is often referred to as a 'false positive', and is the process of incorrectly
rejecting the null hypothesis in favor of the alternative. In the case above, the null hypothesis
refers to the natural state of things, stating that the patient is not HIV positive.
The alternative hypothesis states that the patient does carry the virus. A Type I error would
indicate that the patient has the virus when they do not, a false rejection of the null.
Type II Error
A Type II error is the opposite of a Type I error and is the false acceptance of the null
hypothesis. A Type II error, also known as a false negative, would imply that the patient is
free of HIV when they are not, a dangerous diagnosis.
In most fields of science, Type II errors are not seen to be as problematic as a Type I error.
With the Type II error, a chance to reject the null hypothesis was lost, and no conclusion is
inferred from a non-rejected null. The Type I error is more serious, because you have
wrongly rejected the null hypothesis.
Medicine, however, is one exception; telling a patient that they are free of disease, when
they are not, is potentially dangerous.
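The two error types can be made concrete with a hypothetical screening test (all names, scores, and the cutoff below are invented for illustration): patients are called positive when their score meets a cutoff, and the Type I and Type II rates follow directly.

```python
def error_rates(neg_scores, pos_scores, cutoff):
    """Type I rate: fraction of truly negative (null-true) patients
    wrongly flagged positive. Type II rate: fraction of truly
    positive patients wrongly cleared."""
    type_i = sum(s >= cutoff for s in neg_scores) / len(neg_scores)
    type_ii = sum(s < cutoff for s in pos_scores) / len(pos_scores)
    return type_i, type_ii

uninfected = [1, 2, 2, 3, 4, 6]   # null hypothesis true
infected   = [4, 6, 7, 8, 9, 9]   # null hypothesis false
# One of six uninfected is flagged (Type I); one of six infected is missed (Type II).
print(error_rates(uninfected, infected, cutoff=5))
```

Raising the cutoff trades Type I errors for Type II errors, which is exactly the trade-off the medical example in the text warns about.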
19.
Types of validity: how will you increase internal and external
validity?
20.
Health index
Life expectancy at birth expressed as an index, using a minimum value of 20 years
and a maximum value of 85 years.
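Assuming the 20- and 85-year goalposts stated above, the index can be sketched as a simple rescaling of life expectancy onto a 0-to-1 scale:

```python
def health_index(life_expectancy):
    """Health index: life expectancy at birth rescaled between the
    stated minimum (20 years) and maximum (85 years) goalposts."""
    return (life_expectancy - 20) / (85 - 20)

print(round(health_index(72.5), 3))  # 0.808
```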
Health Index
is a network of physicians and researchers whose goal is to help promote world
health by providing extensive information on prevention, wellness, and therapy
to the world community by:
conducting surveillance?
What is HMIS, what are the weaknesses?
What are the criteria for good governance?
What are the criteria for evaluation?
What are policy measures to increase the utilization of services?
What is the difference between NIDs & HIV, malaria, TB in Pakistan?
How many goals do the MDGs have? How many indicators and targets? What comes after 2015?
Pearl Index
The Pearl Index, also called the Pearl rate, is the most common technique used in clinical trials for
reporting the effectiveness of a birth control method.
Methods of contraception are compared by the Pearl index. A high Pearl index stands for a high
chance of unintentionally getting pregnant; a low value for a low chance. The Pearl index is
determined as the number of unintended pregnancies per 100 woman-years. For example, if 100
women each use the method under examination for 1 year and three pregnancies occur in this
group during that period, the Pearl index is 3.0.
To convert this abstract value into a concrete one, multiply the Pearl index of a method by 0.4.
The result is the expected number of pregnancies if this particular contraception method is used
throughout the whole fertile period (from 12 to 52 years of age).
Some examples of different birth control methods' Pearl indices:

Knaus-Ogino method: 15-35
Cervical cap: 4-20**
Standard Days Method: 4.8-12
Condom: 3-12**
PERIMON: 1.5-12*
Persona computer: (value lost)
NuvaRing: 0.65-1.86
Sympto-thermal method: 0.5-2
Intrauterine device: 0.1-1.5
Plaster (contraceptive patch): 0.5-1
(method name lost): 0.1-1
Sterilization: 0.1-0.4
To avoid using a less reliable method for too long, it is good not to use a method for longer than
(80 divided by its Pearl index) years. E.g. if the Pearl index of a method is 3, you shouldn't use
this method for longer than 80/3 ≈ 26.7 years for contraception. (Please note that this is only a
statistical calculation and absolutely cannot guarantee prevention of pregnancy.)
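The three calculations described above (the index itself, the ×0.4 lifetime estimate, and the 80/PI rule of thumb) can be sketched as follows; the function names are illustrative:

```python
def pearl_index(pregnancies, woman_years):
    """Unintended pregnancies per 100 woman-years of exposure."""
    return 100 * pregnancies / woman_years

def lifetime_pregnancies(pi):
    """Rough lifetime estimate described above: Pearl index x 0.4,
    assuming use throughout the fertile years (ages 12-52)."""
    return pi * 0.4

def max_years_of_use(pi):
    """Rule of thumb above: use a method no longer than 80/PI years."""
    return 80 / pi

print(pearl_index(3, 100))                    # 3.0
print(round(lifetime_pregnancies(3.0), 2))    # 1.2
print(round(max_years_of_use(3), 1))          # 26.7
```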
Life table
In actuarial science and demography, a life table (also called a mortality
table or actuarial table) is a table which shows, for each age, what the probability is
that a person of that age will die before his or her next birthday ("probability of death").
From this starting point, a number of inferences can be derived.
Life tables are also used extensively in biology and epidemiology. The concept is also of
importance in product life cycle management.
Period or static life tables show the current probability of death (for people of
different ages, in the current year)
Cohort life tables show the probability of death of people from a given cohort
(especially birth year) over the course of their lifetime.
Static life tables sample individuals assuming a stationary population with overlapping
generations. "Static life tables" and "cohort life tables" will be identical if the population is in
equilibrium and the environment does not change. "Life table" primarily refers to period life
tables, as cohort life tables can only be constructed using data up to the current point,
and distant projections for future mortality.
Life tables can be constructed using projections of future mortality rates, but more often
they are a snapshot of age-specific mortality rates in the recent past, and do not
necessarily purport to be projections. For these reasons, the older ages represented in
a life table may have a greater chance of not being representative of what lives at these
ages may experience in future, as it is predicated on current advances in
medicine, public health, and safety standards that did not exist in the early years of this
cohort.
Life tables are usually constructed separately for men and for women because of their
substantially different mortality rates. Other characteristics can also be used to
distinguish different risks, such as smoking status, occupation, and socioeconomic class.
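As a sketch of how a period life table is built from age-specific probabilities of death, the toy example below uses single-year age groups, a radix of 100,000, and the simplifying assumption that deaths occur at mid-year; real life tables use more refined assumptions:

```python
def life_table(qx):
    """Build a minimal period life table from a list of age-specific
    probabilities of death qx. Returns the survivor column lx and
    life expectancy at birth e0."""
    lx = [100_000.0]              # survivors at each exact age (radix 100,000)
    for q in qx:
        lx.append(lx[-1] * (1 - q))
    # Person-years lived in each age interval, assuming deaths at mid-year:
    Lx = [(a + b) / 2 for a, b in zip(lx[:-1], lx[1:])]
    e0 = sum(Lx) / lx[0]          # life expectancy at birth
    return lx, e0

qx = [0.5, 0.5, 1.0]              # toy probabilities for a 3-age population
lx, e0 = life_table(qx)
print(lx)   # [100000.0, 50000.0, 25000.0, 0.0]
print(e0)   # 1.25
```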
The disability-adjusted life year (DALY) extends the concept of years of life lost due to
premature death to include equivalent years of "healthy" life lost by virtue of being in
states of poor health or disability.[2] In so doing, mortality and morbidity are combined
into a single, common metric.
Traditionally, health liabilities were expressed using one measure: (expected or average
number of) 'Years of Life Lost' (YLL). This measure does not take the impact of disability
into account, which can be expressed by: 'Years Lived with Disability' (YLD). DALYs are
calculated by taking the sum of these two components. In a formula:
DALY = YLL + YLD.[3]
The DALY relies on an acceptance that the most appropriate measure of the effects
of chronic illness is time, both time lost due to premature death and time spent
disabled by disease. One DALY, therefore, is equal to one year of healthy life
lost. Japanese life expectancy statistics are used as the standard for measuring
premature death, as the Japanese have the longest life expectancies. [4]
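Under the simplest formulation of the formula above (no discounting or age-weighting, both of which actual DALY calculations may add), the computation can be sketched as:

```python
def yll(deaths, standard_le_at_death):
    """Years of Life Lost: deaths times the standard remaining life
    expectancy at the age of death."""
    return deaths * standard_le_at_death

def yld(cases, disability_weight, avg_duration):
    """Years Lived with Disability: cases times a 0-1 disability
    weight times the average duration of the condition."""
    return cases * disability_weight * avg_duration

def daly(yll_value, yld_value):
    """DALY = YLL + YLD."""
    return yll_value + yld_value

# Illustrative numbers: 10 deaths losing 30 years each, plus
# 200 cases at disability weight 0.2 lasting 5 years each.
print(daly(yll(10, 30), yld(200, 0.2, 5)))  # 500.0
```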
Disability
Disability is the consequence of an impairment that may be physical, cognitive, mental,
sensory, emotional, developmental, or some combination of these. A disability may be
present from birth, or occur during a person's lifetime.
Disability is an umbrella term, covering impairments, activity limitations, and
participation restrictions. An impairment is a problem in body function or structure;
an activity limitation is a difficulty encountered by an individual in executing a task or
action; while a participation restriction is a problem experienced by an individual in
involvement in life situations. Thus, disability is a complex phenomenon, reflecting an
interaction between features of a person's body and features of the society in which he
or she lives.[1]
An individual may also qualify as disabled if they have had an impairment in the past or
are seen as disabled based on a personal or group standard or norm. Such impairments
may include physical, sensory, and cognitive or developmental disabilities. Mental
disorders (also known as psychiatric or psychosocial disability) and various types
of chronic disease may also qualify as disabilities.
Some advocates object to describing certain conditions (notably deafness and autism)
as "disabilities", arguing that it is more appropriate to consider them developmental
differences.
Physical disability
Any impairment which limits the physical function of limbs, fine motor ability, or gross motor
ability is a physical impairment, not necessarily a physical disability. The Social Model of
Disability defines physical disability as manifest when an impairment meets a non-universal design or program, e.g. a person who cannot climb stairs may have a physical
impairment of the knees when putting stress on them from an elevated position such as
with climbing or descending stairs. If an elevator was provided, or a building had
services on the first floor, this impairment would not become a disability. Other physical
disabilities include impairments which limit other facets of daily living, such as
severe sleep apnea.
Sensory disability
Sensory disability is impairment of one of the senses. The term is used primarily to refer
to vision and hearing impairment, but other senses can be impaired.
Vision impairment
Vision impairment (or "visual impairment") is vision loss (of a person) to such a degree
as to qualify as an additional support need through a significant limitation
of visual capability resulting from either disease, trauma, or congenital or degenerative
conditions that cannot be corrected by conventional means, such as refractive
correction, medication, or surgery.[8][9][10] This functional loss of vision is typically defined
as a best corrected visual acuity of less than 20/60, or a significant central field defect.
Hearing impairment
Hearing impairment or hard of hearing or deafness refers to conditions in which
individuals are fully or partially unable to detect or perceive at least some frequencies of
sound which can typically be heard by most people. Mild hearing loss may sometimes
not be considered a disability.
Complete loss of the sense of taste is known as ageusia, while dysgeusia is a persistent
abnormal sense of taste.
Somatosensory impairment
Insensitivity to stimuli such as touch, heat, cold, and pain is often an adjunct to a more
general physical impairment involving neural pathways, and is very commonly
associated with paralysis (in which the motor neural circuits are also affected).
Balance disorder
A balance disorder is a disturbance that causes an individual to feel unsteady, for
example when standing or walking. It may be accompanied by feelings of giddiness or
wooziness, or a sensation of movement, spinning, or floating. Balance is the
result of several body systems working together. The eyes (visual system), ears
(vestibular system) and the body's sense of where it is in space (proprioception) need to
be intact. The brain, which compiles this information, needs to be functioning effectively.
Intellectual disability
Intellectual disability is a broad concept that ranges from mental retardation to cognitive
deficits too mild or too specific (as in specific learning disability) to qualify as mental
retardation. Intellectual disabilities may appear at any age. Mental retardation is a
subtype of intellectual disability, and the term intellectual disability is now preferred by
many advocates in most English-speaking countries.
Developmental disability
Developmental disability is any disability that results in problems with growth and
development. Although the term is often used as a synonym or euphemism for
intellectual disability, the term also encompasses many congenital medical
conditions that have no mental or intellectual components, for example spina bifida.
Nonvisible disabilities
Several chronic disorders, such as diabetes, asthma, inflammatory bowel
disease, epilepsy, narcolepsy, fibromyalgia, or some sleep disorders may be counted as
nonvisible disabilities, as opposed to disabilities which are clearly visible, such as those
requiring the use of a wheelchair.
The Healthy Life Years indicator (HLY) is a European structural indicator computed
by Eurostat. It is one of the summary measures of population health, known as health
expectancies,[1] composite measures of health that combine mortality and morbidity data to
represent overall population health on a single indicator.[2] HLY measures the number of
remaining years that a person of a certain age is expected to live without disability. It is actually
a disability-free life expectancy.
The QALY model requires utility-independent, risk-neutral, and constant proportional tradeoff
behaviour.[3]
The QALY is based on the number of years of life that would be added by the
intervention. Each year in perfect health is assigned the value of 1.0 down to a value of
0.0 for being dead. If the extra years would not be lived in full health, for example if the
patient would lose a limb, or be blind, or have to use a wheelchair, then the extra life-years
are given a value between 0 and 1 to account for this. Under certain methods, such as the
EQ-5D, the QALY can be a negative number.
Uses
The QALY is often used in cost-utility analysis to calculate the ratio of cost to QALYs saved for a
particular health care intervention. This is then used to allocate healthcare resources, with an
intervention with a lower cost-to-QALY-saved ratio (incremental cost-effectiveness ratio,
"ICER") being preferred over an intervention with a higher ratio.
Calculation
The QALY is a measure of the value of health outcomes. Since health is a function of length of
life and quality of life, the QALY was developed as an attempt to combine the value of these
attributes into a single index number. The basic idea underlying the QALY is simple: it assumes
that a year of life lived in perfect health is worth 1 QALY (1 year of life × 1 utility value = 1
QALY) and that a year of life lived in a state of less than this perfect health is worth less than 1.
In order to determine the exact QALY value, it is sufficient to multiply the utility value associated
with a given state of health by the years lived in that state. QALYs are therefore expressed in
terms of "years lived in perfect health": half a year lived in perfect health is equivalent to 0.5
QALYs (0.5 years × 1 utility), the same as 1 year of life lived in a situation with utility 0.5 (e.g.
bedridden) (1 year × 0.5 utility). QALYs can then be incorporated with medical costs to arrive at
a final common denominator of cost/QALY. This parameter can be used to develop a cost-effectiveness analysis of any treatment.
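The QALY arithmetic above, and the ICER used in cost-utility analysis, can be sketched as follows (function names and all numbers are illustrative):

```python
def qalys(years_by_state):
    """Total QALYs from (years, utility) pairs, where utility 1.0 is
    perfect health and 0.0 is death."""
    return sum(years * utility for years, utility in years_by_state)

def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost-effectiveness ratio of intervention A relative
    to comparator B: extra cost per QALY gained."""
    return (cost_a - cost_b) / (qaly_a - qaly_b)

# Half a year in perfect health equals one year at utility 0.5:
print(qalys([(0.5, 1.0)]))            # 0.5
print(qalys([(1.0, 0.5)]))            # 0.5
# A treatment costing 20,000 yielding 6 QALYs, versus 5,000 for 5 QALYs:
print(icer(20_000, 6.0, 5_000, 5.0))  # 15000.0 per QALY gained
```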
Decrement tables, also called life table methods, are used to calculate the probability of
certain events.
Birth control
Life table methods are often used to study birth control effectiveness. In this role, they
are an alternative to the Pearl Index.
Survival analysis
Survival analysis is a branch of statistics which deals with the analysis of the duration of time
until one or more events happen, such as death in biological organisms and failure in
mechanical systems. This topic is called reliability theory or reliability
analysis in engineering, duration analysis or duration
modeling in economics, or event history analysis in sociology. Survival analysis
attempts to answer questions such as: what is the proportion of a population which will
survive past a certain time? Of those that survive, at what rate will they die or fail? Can
multiple causes of death or failure be taken into account? How do particular
circumstances or characteristics increase or decrease the probability of survival?
To answer such questions, it is necessary to define "lifetime". In the case of biological
survival, death is unambiguous, but for mechanical reliability, failure may not be well-defined, for there may well be mechanical systems in which failure is partial, a matter of
degree, or not otherwise localized in time. Even in biological problems, some events (for
example, heart attack or other organ failure) may have the same ambiguity.
The theory outlined below assumes well-defined events at specific times; other cases
may be better treated by models which explicitly account for ambiguous events.
More generally, survival analysis involves the modeling of time-to-event data; in this
context, death or failure is considered an "event". In the survival analysis literature,
traditionally only a single event occurs for each subject, after which the organism or
mechanism is dead or broken. Recurring event or repeated event models relax that
assumption. The study of recurring events is relevant in systems reliability, and in many
areas of social sciences and medical research.
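Questions like "what proportion of a population will survive past a certain time?" are commonly answered with the Kaplan-Meier product-limit estimator of the survivor function. The minimal sketch below (data invented for illustration) also shows how censored observations (event indicator 0) stay in the risk set without counting as events:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survivor function S(t).
    `times` are follow-up durations; `events` is 1 if the event
    (death/failure) occurred, 0 if the subject was censored."""
    event_times = sorted({t for t, e in zip(times, events) if e == 1})
    s = 1.0
    curve = []
    for t in event_times:
        at_risk = sum(1 for ti in times if ti >= t)
        died = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        s *= 1 - died / at_risk          # survive this event time
        curve.append((t, s))
    return curve

times  = [2, 3, 3, 5, 8, 8, 9]
events = [1, 1, 0, 1, 1, 0, 0]
print(kaplan_meier(times, events))
```

At each observed event time the survivor probability is multiplied by the fraction of at-risk subjects who did not fail, which is how censoring contributes information without biasing the estimate.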
A norm is a group-held belief about how members should behave in a given context.
1.Informal guideline about what is considered normal (what is correct or
incorrect) social behavior in a particular group or social unit. Norms form the
basis of collective expectations that members of a community have from
each other, and play a key part in social control and social order by exerting
a pressure on the individual to conform. In short, "The way we do things
around here."
2.Formal rule or standard laid down by legal, religious, or social authority against
which the appropriateness (what is right or wrong) of an individual's behavior is
judged.
NORM, an abbreviation for naturally occurring radioactive materials
objective
1. A specific result that a person or system aims to achieve within a time
frame and with available resources.
In general, objectives are more specific and easier to measure than goals.
Objectives are basic tools that underlie all planning and strategic activities.
They serve as the basis for creating policy and evaluating performance.
Some examples of business objectives include minimizing expenses,
expanding internationally, or making a profit.
Goals tend to change your mindset by changing your focus. And as your focus changes,
it takes your thinking with it. This is why goals are often accompanied by affirmations,
which involve projecting yourself into the desired (but as yet unattained) destination.
People set goals all the time, without ever being very specific. Organizations do it too. A
company can set a goal of returning to profitability in two years, or becoming the leader
in their industry in five years, all without ever determining how that will be accomplished.
And once again, the details are worked out later, after the big-picture changes of
direction and destination (the goals) have been defined.
Objectives: Establishing a Series of Concrete Steps
If goals are about the big picture, then objectives are all about tactics. Mechanically,
tactics are action plans to get from where you are to where you want to be. A goal
defines the direction and destination, but the road to get there is accomplished by a
series of objectives.
A good example of this is a person who owes $50,000 in credit card debt on ten
different cards and wants to become debt-free. Getting out of debt is the goal. But it is
achieved by paying off each of the ten credit cards, one at a time. The payoff of each
credit card is an objective: one of the series of smaller targets that need to be hit in
order to achieve the big-picture goal of becoming debt-free.
The methodology for paying off each credit card will be very specific: you'll need
to pay X amount of extra money to Credit Card #1 for Y number of months in order to
meet the objective of paying it off. Then you need to repeat the action for the remaining
nine credit cards. The tactics, which are the objectives, are very specific.
How Objectives Can Help You Reach Your Goals
In nearly any goal you want to reach, you can use the credit card example to help you
get there. First, you define the goal, whatever it may be. Unless the goal is a small
one and easily attained, it's usually best to break big goals down into a series of specific
action steps; it's a way of using the divide-and-conquer strategy to accomplish a goal
that's far too large to do in the near term.
The action steps have specific targets, as well as methods to reach them. Each target is
an objective. Once it's accomplished you move on to the next one, gradually moving
toward your goal as each target is completed.
Though goals generally control objectives, objectives can also control goals as they
unfold. For example, since a goal is general in nature, it may be refined and altered as
objectives are completed. The completion of an objective, or a series of them, could
cause you to either raise or lower the ultimate goal.
Goals and objectives are commonly compared along several dimensions: definition,
time frame (goals are usually long-term), magnitude, whether they are the outcome or
the purpose of immediate action, example, and hierarchy.
accounting system
Organized set of manual and computerized accounting methods, procedures, and controls established to
gather, record, classify, analyze, summarize, interpret, and present accurate and
timely financial data for management decisions.
Taylorism
Production efficiency methodology that breaks every action, job, or task into small
and simple segments which can be easily analyzed and taught. Introduced in the
early 20th century, Taylorism (1) aims to achieve maximum job fragmentation to
minimize skill requirements and job learning time, (2) separates execution of work
from work-planning, (3) separates direct labor from indirect labor, (4) replaces
rule-of-thumb productivity estimates with precise measurements, (5) introduces time
and motion study for optimum job performance, cost accounting, tool and work
station design, and (6) makes possible the payment-by-result method of wage
determination. Named after the US industrial engineer Frederick Winslow Taylor
(1856-1915), who in his 1911 book 'The Principles of Scientific Management' laid
down the fundamental principles of large-scale manufacturing through assembly-line
factories. He emphasized gaining maximum efficiency from both machine and worker,
and maximization of profit for the benefit of both workers and management. Although
rightly criticized for alienating workers by (indirectly but substantially) treating them
as mindless, emotionless, and easily replicable factors of production, Taylorism was a
critical factor in the unprecedented scale of US factory output that led to the Allied
victory in the Second World War, and the subsequent US dominance of the industrial
world.
motivation
Internal and external factors that stimulate desire and energy in people to be
continually interested and committed to a job, role or subject, or to make an effort
to attain a goal.
Motivation results from the interaction of both conscious and
unconscious factors such as the (1) intensity of desire or need,
(2) incentive or reward value of the goal, and (3) expectations of
the individual and of his or her peers. These factors are the reasons one has for
behaving a certain way. An example is a student who spends extra time studying
for a test because he or she wants a better grade in the class.
interpersonal conflict
Human resource management: A situation in which
an individual or group frustrates, or tries to frustrate, the goal attainment
efforts of the other.
performance
The accomplishment of a given task measured against preset known standards
of accuracy, completeness, cost, and speed. In a contract, performance
is deemed to be the fulfillment of an obligation, in a manner that releases the
performer from all liabilities under the contract.
quality of performance
A numerical measurement of the performance of an organization, division, or
process. Quality of performance can be assessed through measurements of
physical products, statistical sampling of the output of processes, or
through surveys of purchasers of goods or services. Also referred to as quality of
service.
agreement
quality management
Management activities and functions involved in determination of quality
policy and its implementation through means such as quality
planning and quality assurance (including quality control). See
also total quality management (TQM).
should base its decisions solely on the analysis of data and information; (8)
Mutually beneficial supplier relationships: Management should enhance the
interdependent relationship with its suppliers for mutual benefit and in creation
of value.
certificate of compliance
A document certified by a competent authority that the supplied good
or service meets the required specifications. Also called certificate of
conformance, certificate of conformity.
certificate of conformance
A document certified by a competent authority that the supplied good
or service meets the required specifications. Also called certificate of
compliance, certificate of conformity.
certificate of conformity
A document certified by a competent authority that the supplied good
or service meets the required specifications. Also called certificate of
conformance or certificate of compliance.
quality
In manufacturing, a measure of excellence or a state of
being free from defects, deficiencies and significant variations. It is brought about
by strict and consistent commitment to certain standards
that achieve uniformity of a product in order to satisfy
specific customer or user requirements. The ISO 8402-1986 standard defines quality
as "the totality of features and characteristics of a product or service that bear on
its ability to satisfy stated or implied needs." If an automobile company finds a
defect in one of its cars and makes a product recall, customer reliability and
therefore production will decrease because trust will be lost in the car's quality.
fixed cost
A periodic cost that remains more or less unchanged irrespective of the output
level or sales revenue, such as depreciation, insurance, interest, rent, salaries,
and wages.
marginal cost
The increase or decrease in the total cost of a production run for making one
additional unit of an item. It is computed in situations where the breakeven
point has been reached: the fixed costs have already been absorbed by the
already produced items and only the direct (variable) costs have to be accounted
for.
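The marginal cost definition above is easy to sketch numerically. The cost function below is invented purely for illustration (a fixed cost plus a constant variable cost per unit):

```python
def total_cost(units, fixed=10_000.0, variable_per_unit=4.50):
    # Illustrative cost model: fixed costs plus a constant variable cost per unit.
    return fixed + variable_per_unit * units

def marginal_cost(units):
    # Marginal cost: the increase in total cost from producing one additional unit.
    return total_cost(units + 1) - total_cost(units)

print(marginal_cost(1000))  # 4.5
```

With a constant variable cost, every additional unit past breakeven costs the same 4.50, which is exactly the "only the direct (variable) costs have to be accounted for" situation the definition describes.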
process
Sequence of interdependent and linked procedures which, at every stage,
consume one or more resources (employee time, energy, machines, money) to
convert inputs (data, material, parts, etc.) into outputs. These outputs then serve
as inputs for the next stage until a known goal or end result is reached.
production
The processes and methods used to transform tangible inputs (raw
materials, semi-finished goods, subassemblies) and intangible inputs
(ideas, information, knowledge) into goods or services. Resources are used in
this process to create an output that is suitable for use or has exchange value.
asset
1. Something valuable that an entity owns, benefits from, or has use of, in
generating income.
2. Accounting: Something that an entity has acquired or purchased, and that
has money value (its cost, book value, market value, or residual value). An
asset can be (1) something physical, such as cash,
machinery, inventory, land and building, (2) an enforceable claim against
others, such as accounts receivable, (3) a right, such as copyright, patent,
trademark, or (4) an assumption, such as goodwill.
revenue
The income generated from sale of goods or services, or any other use
of capital or assets, associated with the main operations of
an organization before any costs or expenses are deducted. Revenue is shown
usually as the top item in an income (profit and loss) statement from which
all charges, costs, and expenses are subtracted to arrive at net income.
Also called sales, or (in the UK) turnover.
financial operating plan
A business or financial road map that identifies revenues and expenses. This
type of plan tracks where money comes from and where it goes in a business
operation. It defines specific goals such
as budgeting, costs associated with operations, and sales projections. A
financial operating plan uses historic and recent performance to predict expected
outcomes in the near future. The plan must be updated periodically to adjust for
changing circumstances.
depreciation
1. Accounting: The gradual conversion of the cost of a tangible capital
asset or fixed asset into an operational expense (called depreciation
expense) over the asset's estimated useful life.
The objectives of computing depreciation are to (1) reflect reduction in
the book value of the asset due to obsolescence or wear and tear,
(2) spread a large expenditure (purchase price of the asset) proportionately
over a fixed period to match revenue received from it, and (3) reduce
the taxable income by charging the amount of depreciation against
the company's total income. In effect, charging of
depreciation means the recovery of invested capital, by gradual sale of the
asset over the years during which output or services are received from it.
Depreciation is computed at the end of an accounting period (usually a
year), using a method best suited to the particular asset.
When applied to intangible assets, the preferred term is amortization.
2. Commerce: The decline in the market value of an asset.
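Objective (2) above, spreading a large expenditure proportionately over a fixed period, can be sketched with the common straight-line method. The method choice and the figures are assumptions for illustration only:

```python
def straight_line_depreciation(cost, salvage_value, useful_life_years):
    # Straight-line method: spread (cost - salvage value) evenly
    # over the asset's estimated useful life.
    return (cost - salvage_value) / useful_life_years

# A machine bought for 50,000 with a 5,000 residual value and a 9-year life:
annual_expense = straight_line_depreciation(50_000, 5_000, 9)
print(annual_expense)  # 5000.0 charged as depreciation expense each year
```

Each year the same 5,000 is matched against the revenue the machine helps earn, which is precisely the "spread proportionately over a fixed period" objective.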
personnel management
Administrative discipline of hiring and developing employees so that they become
more valuable to the organization. It includes (1) conducting job analyses,
(2) planning personnel needs, and recruitment, (3) selecting the right people for
the job, (4) orienting and training, (5) determining
and managing wages and salaries, (6) providing benefits and incentives,
(7) appraising performance, (8) resolving disputes, and (9) communicating with all
employees at all levels.
organization
A social unit of people that is structured and managed to meet a need or to
pursue collective goals. All organizations have a management structure that
determines relationships between the different activities and the members, and
subdivides and assigns roles, responsibilities, and authority to carry out
different tasks. Organizations are open systems: they affect and are affected by
their environment.
chain of command
The order in which authority and power in an organization is wielded and
delegated from top management to every employee at every level of the
organization. Instructions flow downward along the chain of command
and accountability flows upward.
According to its proponent Henri Fayol (1841-1925), the more clear-cut the chain
of command, the more effective the decision-making process and the greater
the efficiency. Military forces are an example of a straight chain of command that
extends in an unbroken line from the top brass to the ranks. Also called line of
command.
scalar principle
Classical-management rule that subordinates at every level should follow
the chain of command, and communicate with their seniors only through the
immediate or intermediate senior. According to its proponent, the
French management pioneer Henri Fayol (1841-1925), a clear understanding of
this principle is necessary for the proper management of any organization.
stakeholder
A person, group or organization that has interest or concern in an organization.
Stakeholders can affect or be affected by
the organization's actions, objectives and policies. Some examples of key
1.
A health system, also sometimes referred to
as health care system or healthcare system, is the organization of people,
institutions, and resources that deliver health care services to meet
the health needs of target populations.
A good health system delivers quality services to all people, when and where they need them.
The exact configuration of services varies from country to country, but in all cases requires a
robust financing mechanism; a well-trained and adequately paid workforce; reliable information
on which to base decisions and policies; well maintained facilities and logistics to deliver quality
medicines and technologies.
An ROC curve demonstrates several things:
1.
It shows the tradeoff between sensitivity and specificity (any increase in sensitivity
will be accompanied by a decrease in specificity).
2.
The closer the curve follows the left-hand border and then the top border of the ROC
space, the more accurate the test.
3.
The closer the curve comes to the 45-degree diagonal of the ROC space, the less accurate the test.
4.
The slope of the tangent line at a cutpoint gives the likelihood ratio (LR) for that value of the test. You can
check this out on the graph above. Recall that the LR for T4 < 5 is 52. This corresponds to the far left, steep
portion of the curve. The LR for T4 > 9 is 0.2. This corresponds to the far right, nearly horizontal portion of
the curve.
5.
The area under the curve is a measure of test accuracy.
To illustrate, consider the following set of data (source). Of a total of 125 subjects, 32 are known to
be hypothyroid and 93 are known to have normal thyroid function. All subjects are
assessed with respect to T4 (thyroxine) levels, and then sorted among the four
ordinal categories: T4<5.1, T4=5.1 to 7.0, T4=7.1 to 9.0, and T4>9.0. Of the 19
subjects with T4 levels lower than 5.1, 18 were in fact hypothyroid while only 1 was
euthyroid. Thus, if a T4 of 5 or less were taken as an indication of hypothyroidism,
this measure would yield 18 true positives and 1 false positive, with a true-positive
rate (sensitivity) of 18/32=.5625 and a false-positive rate (1-specificity) of
1/93=.0108.
                   Observed Frequencies      Cumulative Rates
T4 Diagnostic      False       True          False       True
Level              Positive    Positive      Positive    Positive
<5.1                  1          18            .0108       .5625
5.1-7.0              17           7            .1935       .7813
7.1-9.0              36           4            .5806       .9063
>9.0                 39           3           1.0         1.0
Totals:              93          32
To proceed, enter into the cells of the following table either the observed frequencies or the cumulative rates for
each of the k diagnostic levels, up to a maximum of k=10. (Note that this procedure makes no sense with k<4.) If
you are entering observed frequencies, cumulative rates will be calculated automatically. If you are entering
cumulative rates, the final entry in each of the two columns on the right must always be equal to 1.0. Cumulative
rates can be entered as either decimal fractions (.5625) or common fractions (18/32). When all values have been
entered, click the Calculate button, and the results of the analysis will appear in the scrolling text box that follows
the data-entry table.
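A sketch of the cumulative-rate calculation such a calculator performs, using the per-level counts from the T4 example (the counts follow from the cumulative rates and the column totals of 93 euthyroid and 32 hypothyroid subjects; plain Python, no special package assumed):

```python
# Observed frequencies per diagnostic level, from the T4 example:
# euthyroid subjects are the false-positive column, hypothyroid the true-positive.
levels = ["<5.1", "5.1-7.0", "7.1-9.0", ">9.0"]
false_pos = [1, 17, 36, 39]   # euthyroid subjects per level (total 93)
true_pos = [18, 7, 4, 3]      # hypothyroid subjects per level (total 32)

def cumulative_rates(counts):
    # Convert per-level counts into cumulative fractions of the column total.
    total, running, rates = sum(counts), 0, []
    for c in counts:
        running += c
        rates.append(running / total)
    return rates

fpr = cumulative_rates(false_pos)  # 1 - specificity at each cutoff
tpr = cumulative_rates(true_pos)   # sensitivity at each cutoff
for lvl, f, t in zip(levels, fpr, tpr):
    print(f"{lvl:>8}  FP rate {f:.4f}  TP rate {t:.4f}")
```

The first pair printed reproduces the .0108 and .5625 computed by hand in the text, and the last pair is 1.0, 1.0 by construction.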
Consider patients in intensive care (ICU). One of the major causes of death in such patients is "sepsis". Wouldn't it
be nice if we had a quick, easy test that defined early on whether our patients were "septic" or not? Ignoring for the
moment what sepsis is, let's consider such a test. We imagine that we take a population of ICU patients, and do two
things:
1. Perform our magical TEST and record the results;
2. Use some "gold standard" to decide who REALLY has "sepsis", and record this result
(in a blinded fashion).
Please note (note this well) that we have represented our results as fractions, and that:
FNF + TPF = 1
In other words, given FNF, the False Negative Fraction, you can work out TPF, the True Positive Fraction, and vice
versa. Similarly, the False Positive Fraction and True Negative Fraction must also add up to one - those patients who
really have NO sepsis (in our example) must either be true negatives, or misclassified by the test as positives despite
the absence of sepsis.
In our table, TPF represents the number of patients who have sepsis, and have this corroborated by having a "high"
TEST (above whatever cutoff level was chosen). FPF represents false positives - the test has lied to us, and told us
that non-septic patients are really septic. Similarly, true negatives are represented by TNF, and false negatives by
FNF.
In elementary statistical texts, you'll encounter other terms. Here they are:
The sensitivity is how good the test is at picking out patients with sepsis. It is simply
the True Positive Fraction. In other words, sensitivity gives us the proportion of cases
picked out by the test, relative to all cases who actually have the disease.
Specificity is the ability of the test to pick out patients who do NOT have the
disease. It won't surprise you to see that this is synonymous with the True Negative
Fraction.
Translation
sensitivity = true positive fraction = TPF = P(T+ | D+)
specificity = true negative fraction = TNF = P(T- | D-)
FPF = P(T+ | D-)
FNF = P(T- | D+)
Using similar notation, one can also talk about the prevalence of a disease in a population as "P(D+)". Remember
(we stress this again!) that the false negative fraction is the same as one minus the true positive fraction, and
similarly, FPF = 1 - TNF.
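These fractions and the complementary identities can be sketched in a few lines; the counts below are invented for illustration:

```python
# Hypothetical counts from applying the TEST against a gold standard for sepsis.
TP, FN = 40, 10   # patients who really have sepsis
FP, TN = 15, 85   # patients who really do not

TPF = TP / (TP + FN)   # sensitivity, P(T+ | D+)
TNF = TN / (TN + FP)   # specificity, P(T- | D-)
FNF = FN / (TP + FN)   # P(T- | D+)
FPF = FP / (TN + FP)   # P(T+ | D-)

# The complementary-fraction identities stressed in the text:
assert abs(FNF + TPF - 1.0) < 1e-12
assert abs(FPF + TNF - 1.0) < 1e-12
print(TPF, TNF)  # 0.8 0.85
```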
KISS
We'll keep it simple. From now on, we will usually talk about TPF, TNF, FPF and FNF. If you like terms like
sensitivity, specificity, bully for you. Substitute them where required!
Truth
Consider our table again:
Actuality vs. the TEST:
                   SEPSIS     NO sepsis
TEST positive       TPF         FPF
TEST negative       FNF         TNF
See how we've assumed that we have absolute knowledge of who has the disease (here, sepsis), and who doesn't. A
good intensivist will probably give you a hefty swipe around the ears if you go to her and say that you have an
infallible test for "sepsis". Until fairly recently, there weren't even any good definitions of sepsis! Fortunately, Roger
Bone (and his committee) came up with a fairly reasonable definition. The ACCP/CCM consensus criteria [Crit Care
Med 1992 20 864-74] first define something called the Systemic Inflammatory Response Syndrome, characterised
by at least two of:
1. Temperature under 36 °C or over 38 °C;
2. Heart rate over 90/min;
3. Respiratory rate over 20/min or PaCO2 under 32 mmHg;
4. White cell count under 4000/mm3 or over 12000/mm3 or over 10% immature forms;
The above process is often abbreviated to "SIRS". The consensus criteria then go on to define sepsis:
When the systemic inflammatory response syndrome is the result of a confirmed infectious process, it is termed
'sepsis'.
Later, they define 'severe sepsis' (which is sepsis associated with organ dysfunction, hypoperfusion, or hypotension.
"Hypoperfusion and perfusion abnormalities may include, but are not limited to lactic acidosis, oliguria, or an acute
alteration in mental status"). Finally, 'septic shock' is defined as sepsis with hypotension, despite adequate fluid
resuscitation, along with the presence of perfusion abnormalities. Hypotension is a systolic blood pressure under 90
mmHg or a reduction of 40(+) mmHg from baseline.
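The "at least two of" SIRS rule above can be sketched as a simple predicate; the function and parameter names are my own, chosen for illustration:

```python
def meets_sirs(temp_c, heart_rate, resp_rate, paco2_mmhg,
               wcc_per_mm3, immature_forms_pct):
    # ACCP/SCCM consensus: SIRS requires at least two of the four findings.
    criteria = [
        temp_c < 36 or temp_c > 38,
        heart_rate > 90,
        resp_rate > 20 or paco2_mmhg < 32,
        wcc_per_mm3 < 4000 or wcc_per_mm3 > 12000 or immature_forms_pct > 10,
    ]
    return sum(criteria) >= 2

# A febrile, tachycardic patient meets two criteria (temperature and heart rate):
print(meets_sirs(38.5, 110, 16, 40, 9000, 2))  # True
```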
The above definitions have been widely accepted. Now, there are many reasons why such definitions can be
criticised. We will not explore such criticism in detail but merely note that:
1. The definition of SIRS appears to be over-inclusive (Almost all patients in ICU will
conform to the definition at some time during their stay);
2. Various modifications of the third criterion (respiratory rate) have been used to
accommodate patients on mechanical ventilation;
3. The use of high or low values for temperature and white cell count appears to
exclude patients who might be 'in transition' from low to high, or high to low values!
4. Proof that SIRS "is the result of an infectious process" may be difficult or impossible
to achieve. 'Proof' of anything in ICU (as opposed to 'showing an association') is
particularly difficult because of the multiple problems experienced by patients. (Quite
apart from the philosophical problems posed by 'proof')!
2. Find out why the area under the ROC curve is non-parametric, and why this is
important;
3. Learn to calculate required sample sizes;
4. Compare the areas under two ROC curves;
5. Examine the effects of noise, a bad 'gold standard', and other sources of error.
Let's play some more. In the following example, see how closely the two curves are superimposed, and how flat the
corresponding ROC curve is! This demonstrates an important property of ROC curves - the greater the overlap of
the two curves, the smaller the area under the ROC curve.
Vary the curve separation using the upper "slider" control, and see how the ROC curve changes. When the curves
overlap almost totally the ROC curve turns into a diagonal line from the bottom left corner to the upper right corner.
What does this mean?
Once you've understood what's happening here, then the true power of ROCs will be revealed. Let's think about this
carefully...
Let's make an ROC curve
Consider two populations, one of "normal" individuals and another of those with a disease. We have a test for the
disease, and apply it to a mixed group of people, some with the disease, and others without. The test values range
from (say) zero to a very large number - we rank the results in order. (We have rather arbitrarily decided that patients
with bigger test values are more likely to be 'diseased' but remember that this is not necessarily the case. Of the
thousand possibilities, consider patients with low serum calcium concentrations and hypoparathyroidism - here the
low values are the abnormal ones). Now, here's how we construct our curve..
1. Start at the bottom left hand corner of the ROC curve - here we know that both FPF
and TPF must be zero (This corresponds to having the green 'test threshold' line in
our applet way over on the right);
2. Now examine the largest result. In order to start constructing our ROC curve, we set
our test threshold at just below this large result - we move the green marker slightly
left. Now, if this, the first result, belongs to a patient with the disease, then the case
is a true positive, the TPF must now be bigger, and we plot our first ROC curve point
by moving UP on the screen and plotting a point. Conversely, if the disease is absent,
we have a false positive, the FPF is now greater than zero, and we move RIGHT on
the screen and plot our point.
3. Set the test threshold lower, to just below the second largest result, and repeat the
process described in (2).
4. .. and so on until we've moved the threshold down to below the lowest test value. We
will now be in the upper right hand corner of the ROC curve - because our green
threshold marker is below the lowest value, all results will be classified as positive, so
the TPF and FPF will both be 1.0 !
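The four construction steps above can be sketched in plain Python. The (value, diseased) pairs are invented for illustration; each pair is a test result together with the gold-standard verdict:

```python
# Hypothetical ranked test results: (test value, has disease per gold standard).
results = [(9.1, True), (8.4, True), (7.7, False), (6.9, True),
           (5.2, False), (4.8, True), (3.3, False), (2.0, False)]
n_diseased = sum(1 for _, d in results if d)
n_healthy = len(results) - n_diseased

# Step 1: start in the bottom-left corner, threshold above every value.
points = [(0.0, 0.0)]
tp = fp = 0
# Steps 2-4: lower the threshold just below each result in turn. A diseased
# case moves us UP (one more true positive); a healthy case moves us RIGHT.
for value, diseased in sorted(results, key=lambda r: -r[0]):
    if diseased:
        tp += 1
    else:
        fp += 1
    points.append((fp / n_healthy, tp / n_diseased))

print(points[-1])  # threshold below the lowest value: (1.0, 1.0)
```

Plotting `points` (FPF on the x-axis, TPF on the y-axis) gives the empirical ROC curve described in the text.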
Consider two tests. The first test is good at discriminating between patients with and without the disease. We'll call it
test A. The second test is lousy - let's call it test Z. Let's examine each:
Test Z. Because this is a lousy test, as we move our green marker left, picking off
either false or true positives, our likelihood of encountering either is much the same.
For every true positive (that moves us UP) we are likely to encounter a false positive
that moves us to the RIGHT, as we plot the graph. You can see what will happen: we'll
get a more-or-less diagonal line from the bottom left corner of the ROC curve, up
to the top right corner.
Test A. This is a good test, so we're initially more likely to encounter true positives as
we move our green marker left. This means that initially our curve will move steeply
UP. Only later, as we start to encounter fewer and fewer true positives, and more and
more false positives, will the curve ease off and become more horizontal!
From the above, you can get a good intuitive feel that the closer the ROC curve is to a diagonal, the less useful the
test is at discriminating between the two populations. The more steeply the curve moves up and then (only later)
across, the better the test. A more precise way of characterising this "closeness to the diagonal" is simply to look at
the AREA under the ROC curve. The closer the area is to 0.5, the more lousy the test, and the closer it is to 1.0, the
better the test!
The Area under the ROC curve is non-parametric!
The real beauty of using the area under this curve is its simplicity. Consider the above process we used to construct
the curve - we simply ranked the values, decided whether each represented a true or false positive, and then
constructed our curve. It didn't matter whether result number 23 was a zillion times greater than result number 24, or
0.00001% greater. We certainly didn't worry about the 'shapes of the curves', or any sort of curve parameter. From
this you can deduce that the area under the ROC curve is not significantly affected by the shapes of the underlying
populations. This is most useful, for we don't have to worry about "non-normality" or other curve shape worries, and
can derive a single parameter of great meaning - the area under the ROC curve!
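This rank-only property is easy to verify: the AUC equals the Mann-Whitney probability that a randomly chosen diseased case outranks a randomly chosen healthy one, so any rank-preserving rescaling of the test values leaves it unchanged. A sketch with invented values:

```python
import math

diseased = [9.1, 8.4, 6.9, 4.8]   # hypothetical values, gold-standard positive
healthy = [7.7, 5.2, 3.3, 2.0]    # gold-standard negative

# Mann-Whitney estimate: P(random diseased value > random healthy value),
# counting ties as one half.
pairs = [(d, h) for d in diseased for h in healthy]
auc_rank = sum(1.0 if d > h else 0.5 if d == h else 0.0
               for d, h in pairs) / len(pairs)

# A monotonic (rank-preserving) rescaling of the values changes nothing:
auc_rescaled = sum(1.0 if math.log(d) > math.log(h) else 0.5 if d == h else 0.0
                   for d, h in pairs) / len(pairs)

print(auc_rank)  # 0.8125
assert auc_rank == auc_rescaled
```

The shapes of the two underlying distributions never enter the calculation, only the ordering of the values, which is exactly the non-parametric point made above.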
We're about to get rather technical, so you might wish to skip the following,
and move on to the nitty gritty!
In an authoritative paper, Hanley and McNeil [Radiology 1982 143 29-36] explore the concept of the area under the
ROC curve. They show that there is a clear similarity between this quantity and well-known (at least, to statisticians)
Wilcoxon (or Mann-Whitney) statistics. Considering the specific case of randomly paired normal and abnormal
radiological images, the authors show that the area under the ROC curve is a measure of the probability that the
perceived abnormality of the two images will allow correct identification. (This can be generalised to other uses of
the AUC). Note that ROC curves can be used even when test results don't necessarily give an accurate number! As
long as one can rank results, one can create an ROC curve. For example, we might rate x-ray images according to
degree of abnormality (say 1=normal, 2=probably normal, and so on to 5=definitely abnormal), check how this
ranking correlates with our 'gold standard', and then proceed to create an ROC curve.
Hanley and McNeil explore further, providing methods of working out standard errors for ROC curves. Note that
their estimates for standard error (SE) depend to a degree on the shapes of the distributions, but are conservative so
even if the distributions are not normal, estimates of SE will tend to be a bit too large, rather than too small. (If
you're unfamiliar with the concept of standard error, consult a basic text on statistics).
In short, they calculate standard error as

    SE(A) = sqrt{ [ A(1 - A) + (na - 1)(Q1 - A^2) + (nn - 1)(Q2 - A^2) ] / (na * nn) }

where A is the area under the curve, na and nn are the number of abnormals and normals respectively, and Q1 and
Q2 are estimated by:

    Q1 = A / (2 - A)
    Q2 = 2A^2 / (1 + A)
Note that it is extremely silly to rely on Gaussian-based formulae to calculate standard error when the number of
abnormal and normal cases in a sample are not the same. One should use the above formulae.
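Hanley and McNeil's expressions translate directly into code. This is a sketch of their published formula, with the T4 example's group sizes (32 abnormals, 93 normals) plugged in purely for illustration:

```python
import math

def hanley_mcneil_se(auc, n_abnormal, n_normal):
    # Standard error of an ROC area (Hanley & McNeil, Radiology 1982).
    q1 = auc / (2 - auc)
    q2 = 2 * auc**2 / (1 + auc)
    variance = (auc * (1 - auc)
                + (n_abnormal - 1) * (q1 - auc**2)
                + (n_normal - 1) * (q2 - auc**2)) / (n_abnormal * n_normal)
    return math.sqrt(variance)

print(round(hanley_mcneil_se(0.9, 32, 93), 4))  # 0.0379
```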
Sample Size
Now that we can calculate the standard error for a particular sample size, (given a certain AUC), we can plan sample
size for a study! Simply vary sample size until you achieve an appropriately small standard error. Note that, to do
this, you do need an idea of the area under the ROC curve that is anticipated. Hanley and McNeil even provide a
convenient diagram (Figure 3 in their article) that plots number against standard error for various areas under the
curve. As usual, standard errors vary with the square root of the number of samples, and (as you might expect)
numbers required will be smaller with greater AUCs.
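The "vary sample size until you achieve an appropriately small standard error" recipe can be sketched on top of that formula; equal group sizes and the target values are assumptions for simplicity:

```python
import math

def se_for_n(auc, n_per_group):
    # Hanley & McNeil (1982) standard error, with equal numbers of
    # abnormal and normal cases.
    q1 = auc / (2 - auc)
    q2 = 2 * auc**2 / (1 + auc)
    var = (auc * (1 - auc)
           + (n_per_group - 1) * (q1 - auc**2)
           + (n_per_group - 1) * (q2 - auc**2)) / n_per_group**2
    return math.sqrt(var)

def required_n(auc, target_se):
    # Increase n until the predicted standard error drops below the target.
    n = 4
    while se_for_n(auc, n) > target_se:
        n += 1
    return n

print(required_n(0.9, 0.03))
```

As noted above, the standard error shrinks roughly with the square root of n, so the loop always terminates, and a larger anticipated AUC reaches the target with fewer cases.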
Planning sample size when comparing two tests
ROC curves should be particularly valuable if we can use them to compare the performance of two tests. Such
comparison is also discussed by Hanley and McNeil in the above-mentioned paper, and a subsequent one [Hanley JA
& McNeil BJ, Radiology 1983 148 839-43] entitled 'A method of comparing the areas under receiver operating
characteristic curves derived from the same cases'.
Commonly in statistics, we set up a null hypothesis (that there is no statistically significant difference between two
populations). If we reject such a hypothesis when it should be accepted, then we've made a Type I error. It is a
tradition that we allow a one in twenty chance that we have made a type I error, in other words, we set our criterion
for a "significant difference" between two populations at the 5% level. We call this cutoff of 0.05 "alpha".
Less commonly discussed is beta (β), the probability associated with committing a Type II error. We commit a type
II error if we accept our null hypothesis when, in fact, the two populations do differ, and the hypothesis should have
been rejected. Clearly, the smaller our sample size, the more likely is a type II error. It is common to be more
tolerant with beta, accepting say a one in ten chance that we have missed a significant difference between the two
populations. Often, statisticians refer to the power of a test. The power is simply (1 - β), so if β is 10%, then the
power is 90%.
In their 1982 paper, Hanley & McNeil provide a convenient table (Table III) that gives the numbers of normal and
abnormal subjects required to provide a probability of 80%, 90% or 95% of detecting differences between various
ROC areas under the curve (with a one sided alpha of 0.05). For example, if we have one AUC of 0.775 and a
second of 0.900, and we want a power of 90%, then we need 104 cases in each group (normals and abnormals).
Note that generally, the greater the areas under both curves, the smaller the difference between the areas needs to be,
to achieve significance. The tables are however not applicable where two tests are applied to the same set of cases.
The approach to two different tests being applied to the same cases is the subject of Hanley & McNeil's second
(1983) paper. This approach is discussed next.
Actually comparing two curves
This can be non-trivial. Just because the areas are similar doesn't necessarily mean that the curves are not different
(they might cross one another)! If we have two curves of similar area and still wish to decide whether the two curves
differ, we unfortunately have to use complex statistical tests - bivariate statistical analysis.
In the much more common case where we have different areas derived from two tests applied to different sets of
cases, then it is appropriate to calculate the standard error of the difference between the two areas, thus:
    SE(A1 - A2) = sqrt{ SE^2(A1) + SE^2(A2) }
Such an approach is NOT appropriate where two tests are applied to the same set of patients. In their 1983 paper,
Hanley and McNeil show that in these circumstances, the correct formula is:
    SE(A1 - A2) = sqrt{ SE^2(A1) + SE^2(A2) - 2r.SE(A1)SE(A2) }
where r is a quantity that represents the correlation induced between the two areas by the study of the same set of
cases. (The difference may be non-trivial - if r is big, then we will need far fewer cases to demonstrate a difference
between tests on the same subjects)!
Once we have the standard error of the difference in areas, we can then calculate the statistic:
z = (A1 - A2) / SE(A1-A2)
If z is above a critical level, then we accept that the two areas are different. It is common to set this critical level at
1.96, as we then have our conventional one in twenty chance of making a type I error in rejecting the hypothesis that
the two curves are similar. (Simplistically, the value of 1.96 indicates that the areas of the two curves are two
standard deviations apart, so there is only an ~5% chance that this occurred randomly and that the curves are in fact
the same).
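Putting the two standard-error formulae and the z statistic together, here is a minimal sketch. The areas and standard errors below are invented for illustration; in the paired case, r would come from Hanley and McNeil's Table I as described next:

```python
import math

def z_for_auc_difference(a1, se1, a2, se2, r=0.0):
    """z statistic for the difference between two ROC areas.

    Use r = 0 when the two tests were applied to *different* sets of
    cases; when the same cases were used, supply the correlation r
    looked up from Hanley and McNeil's Table I.
    """
    se_diff = math.sqrt(se1 ** 2 + se2 ** 2 - 2 * r * se1 * se2)
    return (a1 - a2) / se_diff

# Hypothetical areas and standard errors, different sets of cases (r = 0):
z = z_for_auc_difference(0.90, 0.03, 0.82, 0.04)           # below 1.96
# Same cases with r = 0.5: the SE shrinks, so z now exceeds 1.96
z_paired = z_for_auc_difference(0.90, 0.03, 0.82, 0.04, r=0.5)
```

Note how the same pair of areas can fail to reach significance in unpaired data yet reach it in paired data, exactly because a large r reduces the standard error of the difference.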
In the circumstance where the same cases were studied, we still haven't told you how to calculate the magic number
r. This isn't that simple. Assume we have two tests, T1 and T2, that classify our cases as either normal (n) or
abnormal (a), and that we have already calculated the ROC AUCs for each test (call these areas A1 and A2). The
procedure is as follows:
1. Look at (n), the non-diseased patients. Find how the two tests correlate for these
patients, and obtain a value rn for this correlation. (We'll soon reveal how to obtain
this value);
2. Look at (a), the abnormals, and similarly derive ra, the correlation between the two
tests for these patients;
3. Average out rn and ra;
4. Average out the areas A1 and A2, in other words, calculate (A1+A2)/2;
5. Use Hanley and McNeil's Table I to look up a value of r, given the average areas, and
average of rn and ra.
You now have r and can plug it into the standard error equation. But wait a bit, how do we
calculate rn and ra? This depends on your method of scoring your data - if you are measuring
things on an interval scale (for example, blood pressure in millimetres of mercury), then
something called the Pearson product-moment correlation method is appropriate. For ordinal
information (e.g. saying that 'this image is definitely abnormal and that one is probably
abnormal'), we use something called the Kendall tau. Either can be derived from most
statistical packages.
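For small samples the two correlation measures are also easy to compute directly. A minimal sketch with invented data (note this computes tau-a, which ignores ties; scipy.stats offers `pearsonr` and `kendalltau` if you prefer a library):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation, for interval-scale data."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def kendall_tau(x, y):
    """Kendall tau-a (ignores ties), for ordinal data; O(n^2) pairs."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Two hypothetical tests scored on the same five non-diseased patients:
rn = pearson_r([110, 125, 130, 142, 160], [100, 120, 135, 140, 155])
```

This `rn` would then be averaged with `ra` (from the abnormals) before the table lookup, as in steps 3-5 above.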
Sources of Error
The effect of noise
Let's consider how "random noise" might affect our curve. Still assuming that we have a 'gold standard' which
confirms the presence or absence of disease, what happens as 'noise' confuses our test - in other words, when the test
results we are getting are affected by random variations over which we have no control? If we start off by assuming
our test correlates perfectly with the gold standard, then the area under the ROC curve (AUC) will be 1.0. As we
introduce noise, so some test results will be mis-classified - false positives and false negatives will creep in. The
AUC will diminish.
What if the test is already pretty crummy at differentiating 'normals' from 'abnormals'? Here things become more
complex, because some false positives or false negatives might accidentally be classified as true values. You can see
however, that on average (provided sample numbers are sufficient and the test has some discriminatory power),
noise will in general degrade test performance. It's unlikely that random noise will lead you to believe that the test is
performing better than it really is - a most desirable characteristic!
Independence from the gold standard
The one big catch with ROC curves is where the test and gold standard are not independent. This interdependence
will give you spuriously high area under the ROC curve. Consider the extreme case where the gold standard is
compared to itself (!) - the AUC will be 1.0, regardless. This becomes extremely worrying where the "gold standard"
is itself a bit suspect - if the test being compared to the standard now also varies as does the standard, but both have
a poor relationship to the disease you want to detect, then you might believe you're doing well and making
appropriate diagnoses, but be far from the truth! Conversely, if the gold standard is a bit shoddy, but independent
from the test, then the effect will be that of 'noise' - the test characteristics will be underestimated (often called
"nondifferential misclassification" by those who wish to confuse you)!
Other sources of error
It should also be clear that any bias inherent in a test is not transferred to bias the ROC curve. If one is biased in
favour of making a diagnosis of abnormality, this merely reflects a position on the ROC curve, and has no impact on
the overall shape of the curve.
Other errors may still creep in. A fine article that examines sources of error (and why, after initial enthusiasm, so
many tests fall into disfavour) is that of Ransohoff and Feinstein [New Engl J Med 1978 299(17) 926-30]. With
every examination of a test one needs to look at:
1. Whether the full spectrum of a disease process is being examined. If only severe
cases are reported on, then the test may be useless in milder cases (both pathologic
and clinical components of the disease should represent its full spectrum). A good
example is with malignant tumours - large, advanced tumours will be easily picked
up, and a screening test might also perform well in this setting, but miss early
disease!
2. Comparative ('control') patients. These should be similar - for example, "the search
for a comparative pathological spectrum should include a different process in the
same anatomical location .. and the same process in a different anatomical location"
(citing the case of a test for say, cancer of the colon);
3. Co-morbid disease. This may affect the positive or negative status of a test.
4. Verification bias. If the clinician is not blinded to the result of the test, a positive may
make him scrutinise the patient very carefully and find the disease (which he missed
in the other patient who had a negative test). Another name for verification bias is
work-up bias. Verification bias is common and counter-intuitive. People tend to get
rather angry when you say it might exist, for they will reply along the lines of "We
confirmed all cases at autopsy, dammit!" (The positive test may have influenced the
clinicians to send the patients to autopsy). A good test will be more likely to influence
selection for 'verification', and thus introduce a stronger bias! (Begg & McNeil
describe this bias well, and show how it can be corrected for).
5. Diagnostic review bias. If the test is first performed, and then the definitive diagnosis
is made, knowledge of the test result may affect the final 'definitive' diagnosis.
Similar is "test-review bias", where knowledge of the 'gold standard' diagnosis might
influence interpretation of the test. Studies in radiology have shown that provision of
clinical information may move observers along an ROC curve, or even to a new curve
entirely! ('Co-variate analysis' may help in controlling for this form of bias).
6. "Incorporation bias". This has already been mentioned above under "independence
from the gold standard". Here, the test is incorporated into the evidence used to
diagnose the disease!
7. Uninterpretable test results. These are infrequently reported in studies! Such results
should be considered 'equivocal' if the test is not repeatable. However, if the test is
repeatable, then correction (and estimation of sensitivity and specificity) may be
possible, provided the variation is random. Uninterpretable tests may have a positive
association with the disease state (or even with 'normality').
8. Interobserver variation. In studies where observer abilities are important, different
observers may perform on different ROC curves, or move along the same ROC curve.
An Example: Procalcitonin and Sepsis
Let's see how ROC curves have been applied to a particular TEST, widely promoted as an easy and quick method of
diagnosing sepsis. As with all clinical medicine, we must first state our problem. We will simply repeat our
SIRS/sepsis problem from above:
The Problem
Some patients with SIRS have underlying bacterial infection, whereas others do not. It is generally highly
inappropriate to empirically treat everyone with SIRS as if they had bacterial infection, so we need a reliable
diagnostic test that tells us early on whether bacterial infection is present.
Waiting for culture results takes days, and such delays will compromise infected patients. Although positive
identification of bacterial infection is our gold standard, the delay involved (1 to 2 days) is too great for us to wait
for cultures. We need something quicker. The test we examine will be serum procalcitonin.
Clearly what we now need is to perform a study on patients with SIRS, in whom bacterial infection is suspected.
These patients should then have serum PCT determination, and adequate bacteriological investigation. Knowledge
of the presence or absence of infection can then be used to create a receiver operating characteristic curve for the
PCT assay. We can then examine the utility of the ROC curve for distinguishing between plain old SIRS, and sepsis.
(We might even compare such a curve with a similar curve constructed for other indicators of infection, such as C-reactive protein).
(Note that there are other requirements for our PCT assay, for example, that the test is reproducible. In addition, we
must have reasonable evidence that the 'gold standard' test - here interpretation of microbiological data - is
reproducibly and correctly performed).
PCT - a look at the literature
Fortunately for us, there's a 'state of the art' supplement to Intensive Care Medicine (2000 26 S 145-216) where most
of the big names in procalcitonin research seem to have had their say. Let's look at those articles that seem to have
specific applicability to intensive care. Interestingly enough, most of these articles make use of ROC analysis! Here
they are:
1. Brunkhorst FM, et al (pp 148-152) Procalcitonin for the early diagnosis and
differentiation of SIRS, sepsis, severe sepsis and septic shock
2. Cheval C. et al (pp 153-158) Procalcitonin is useful in predicting the bacterial origin
of an acute circulatory failure in critically ill patients
3. Rau B. et al (pp 158-164) The Clinical Value of Procalcitonin in the prediction of
infected necro[s]is in acute pancreatitis
4. Reith HB. et al (pp 165-169) Procalcitonin in patients with abdominal sepsis
5. Oberhoffer M. et al (pp170-174) Discriminative power of inflammatory markers for
prediction of tumour necrosis factor-alpha and interleukin-6 in ICU patients with
systemic inflammatory response syndrome or sepsis at arbitrary time points
Quite an impressive list! Let's look at each in turn:
1. Brunkhorst FM, et al (pp 148-152)
Procalcitonin for the early diagnosis and differentiation of SIRS, sepsis, severe sepsis
and septic shock
The authors recruited 185 consecutive patients. Unfortunately, only seventeen patients in the study had
uncomplicated 'SIRS' - the rest had sepsis (n=61), 'severe sepsis' (n=68) or septic shock (n=39). The authors
then indulge in intricate statistical manipulation to differentiate between sepsis, severe sepsis, and septic
shock - they even construct ROC curves (although we are not told, when they construct an ROC curve for
'prediction of severe sepsis' what those with severe sepsis are being differentiated from - presumably the
rest of the population)! The authors do not address why, in their ICU, so many patients had sepsis, and so
few had SIRS without sepsis. The bottom line is that the results of this study, with an apparently highly
selected group of just seventeen 'non-septic' SIRS patients, seem useless for addressing our problem of
differentiating SIRS and sepsis! Their ROC curves seem irrelevant to our problem. (Parenthetically one
might observe that if you walk into their ICU and find a patient with SIRS, there would appear to be an
over 90% chance that the patient has sepsis - who needs procalcitonin in such a setting)?
2. Cheval C. et al (pp 153-158)
Procalcitonin is useful in predicting the bacterial origin of an acute circulatory failure
in critically ill patients
This study looked at four groups:
1. septic shock (n=16);
2. shock without infection(n=18);
3. SIRS related to proved infection(n=16);
4. ICU patients without shock or infection(n=10).
The choice of groups is somewhat unfortunate! Where are the patients we really want
to know about - those with SIRS but no infection? Reading on, we find that only four
of the patients in the fourth group met the criteria for SIRS! This study too does not
appear to help us in our quest! (The authors use ROC curves to analyse their patients
in shock, comparing those with and without sepsis. The numbers look impressive - an
AUC of 0.902 for procalcitonin's ability to differentiate between septic shock and
'other' causes of shock. But hang on - let's look at the 'other' causes of shock. We find
that in these cases, shock was due to haemorrhage(n=8), heart failure(n=7),
anaphylaxis(n=2), and 'hypovolaemia' (n=1). One doesn't need a PCT level to decide
whether a patient is in heart failure, bleeding to death, etc. A study whose title
promises more than is delivered)!
3. Rau B. et al (pp 158-164)
The Clinical Value of Procalcitonin in the prediction of infected necro[s]is in acute
pancreatitis
Sixty-one patients were entered into this study. Twenty-two had oedematous pancreatitis, 18 had sterile
necrosis, and 21 had infected necrosis. Serial PCT levels were determined over a period of fourteen days.
The 'gold standard' used to determine whether infected necrosis was present was fine needle aspiration of
the pancreas, combined with results of intra-operative bacteriology. We learn that
"PCT concentrations were significantly higher from day 3-13 after onset of symptoms in patients with
[infected necrosis, compared with sterile necrosis]". {The emphasis is ours}.
The authors then inform us that
"ROC analysis for PCT and CRP has been calcul[a]ted on the basis of at least two maximum values
reached during the total observation period. By comparison of the areas under the ROC curve (AUC), PCT
was found to have the closest correlation to the presence and severity of bacterial/fungal infection of
necrosis and was clearly superior to CRP in this respect (AUC for PCT: 0.955, AUC for CRP: 0.861;
p<0.02)."
Again, the numbers look impressive. Hold it! Does this mean that we have to do daily PCT levels on all of
our patients, and then take the two maximum values, and average them in order to decide who has infected
necrosis?? Even more tellingly, we are not provided with information about how PCT might have been
used in prospectively differentiating between those who developed sepsis and those who didn't, before
bacterial cultures became available. In other words, was PCT useful in identifying infected necrosis early
on? If I have a sick patient with pancreatitis, can I base my management decision on a PCT level? This
vital question is left unanswered, but the lack of utility of PCT in the first two days is of concern!
4. Reith HB. et al (pp 165-169)
Procalcitonin in patients with abdominal sepsis
A large study compared 246 patients with "infective or septic episodes confirmed at laparotomy" with 66
controls. And this is where the wheels fall off, for the sixty-six controls were undergoing elective operation!
Clearly, any results from such a study are irrelevant to the problem ICU case where you are agonizing over
whether to send the patient for a laparotomy - "is there sepsis or not"?
5. Oberhoffer M. et al (pp170-174)
Discriminative power of inflammatory markers for prediction of tumour necrosis
factor-alpha and interleukin-6 in ICU patients with systemic inflammatory response
syndrome or sepsis at arbitrary time points
The authors reason that TNF and IL-6 levels predict mortality from sepsis. Strangely enough, they do not
appear to have looked at actual mortality in the 243 patients in the study! This is all very well if you're
interested in deciding whether the TNF and IL-6 levels in your patients are over their cutoff levels of
40pg/ml and 500pg/ml respectively, but perhaps of somewhat less utility unless such levels themselves
absolutely predict fatal outcome (they don't). From a clinical point of view, this study suffers from use of a
'gold standard' that may not be of great overall relevance. A hard end point (like death) would have been far
better. (In addition, the authors are surprisingly coy with their AUCs. If you're really keen, you might try
and work these out from their Table 4).
A Summary
Four of the five papers above used ROC analysis. In our opinion, this use provides us with little or no clinical
direction. If the above articles reflect the 'state of the art' as regards use of procalcitonin in distinguishing between
the systemic inflammatory response syndrome and sepsis, we can at present find no justification in using the test on
our critically ill patients! (This does not mean that the test is of no value, simply that we have no substantial
evidence that it is of use).
What would be most desirable is a study that conformed to the requirements we gave above - a study that examines
a substantial number of patients with either:
by taking the number of "false alarms" (cancer patients) at or above that level, and
dividing by the total number of such non-TB patients.
We now have sufficient data to plot our ROC curve. Here it is:
We still need to determine the Area Under the Curve (AUC). We do this by noting that every time we move RIGHT
along the x-axis, we can calculate the increase in area by finding:
(how much we moved right) * (the current y value)
We can then add up all these tiny areas to get a final AUC. As shown in the spreadsheet, this works out at 85.4%,
which indicates that, in distinguishing between tuberculosis and neoplasia as a cause of pleural effusion, ADA seems
to be a fairly decent test!
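The rectangle summation just described can be sketched in a few lines; the (FPF, TPF) points below are invented for illustration, not the actual ADA data:

```python
def auc_from_roc_points(points):
    """Area under an ROC curve by rectangle summation: every move RIGHT
    contributes (how much we moved right) * (the current y value).

    points: (FPF, TPF) pairs sorted by increasing FPF, from (0,0) to (1,1).
    """
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * y0   # rectangle: width * current height
    return area

# Invented points, not the ADA data:
roc = [(0.0, 0.0), (0.1, 0.6), (0.3, 0.8), (0.6, 0.95), (1.0, 1.0)]
auc = auc_from_roc_points(roc)
```

Because an empirical ROC curve is a step function (TPF only rises at a threshold where FPF is constant), this rectangle rule gives the exact area under the empirical curve.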
Here are the corresponding ROC curves for tuberculosis compared with inflammatory disorders. As expected, the
AUC is less for chronic inflammatory disorders, about 77.9%, and pretty poor at 63.9% for 'acute inflammation'
which mainly represents empyemas.
Note that there were only 67 cases of "chronic inflammatory disorders", and 35 with "acute inflammation".
Finally, let's look at TB versus "all other" effusion data - there were 393 "non-tuberculous" cases. The data include
the above 'cancer' and 'inflammatory' cases. The AUC is still a respectable 78.6%.
Are the data selected, or were all samples of pleural fluid subject to analysis?
Does the hospital concerned have a peculiar case spectrum, or will your case profile
be similar?
How severe were the cases of tuberculosis - is the full spectrum of pleural effusions
being examined?
Should the cases who had two diseases (that is, carcinoma and tuberculosis) have
been excluded from analysis?
What co-morbid diseases were present (for example, Human Immunodeficiency Virus
infection)?
Was there verification bias introduced by, for example, a high ADA value being found,
and the diagnosis of tuberculosis therefore being aggressively pursued?
In how many cases was the diagnosis known before the test was performed? How
many of the cases were considered by the attending physician to be "really
problematical diagnoses"? One could even ask "How good were the physicians at
clinically diagnosing the various conditions - did the ADA add to diagnostic sensitivity
and specificity?"
{ Just as an aside, it's perhaps worth mentioning that the above ADA results are not normally distributed, for either
the 'tuberculosis' or the 'neoplasia' samples. Even taking the logarithms of the values (although it decreases the
skewness of the curves dramatically) doesn't quite result in normal distributions, so any ROC calculations that
assume normality are likely to give spurious results. Fortunately our calculations above make no such assumption.}
Working out Standard Errors
You can calculate Standard Errors for the Areas Under the Curves we've presented, using the following JavaScript
calculator. It's based on the formulae from above.
1. Exploring Accuracy
Accuracy, PPV and NPV
It would be great if we could lump things together in some way, and come up with a single number that could tell us
how well a test performs. One such number is represented by the area under the ROC. Another more traditional (and
far more limited) number is accuracy, commonly given as:
accuracy = number of correct diagnoses / number in total population
While we're about it, let's also consider a few other traditional terms:
KISS(2)
We will refer to positive predictive value as PPV, and negative predictive value as NPV. Accuracy we'll refer to as
'accuracy' (heh).
An examination of 'accuracy'
Let's consider two tests with the same accuracy. Let's say we have a population of 1000 patients, of whom 100 have
a particular disease (D+). We apply our tests (call them T1 and T2) to the population, and get the following results.
Test performance: T1 (n=1000)

         D+     D-
  T+     60      5
  T-     40    895

PPV = 92.3%   NPV = 95.7%

Test performance: T2 (n=1000)

         D+     D-
  T+     95     40
  T-      5    860

PPV = 70.3%   NPV = 99.4%
See how the two tests have the same accuracy (a + d)/1000 = 95.5%, but they do remarkably different things. The
first test, T1, misses the diagnosis 40% of the time, but makes up for this by providing us with few false positives - the TNF is 99.4%. The second test is quite different - impressive at picking up the disease (a sensitivity of 95%) but
relatively lousy performance with false positives (a TNF of 95.5%). At first glance, if we accept the common
medical obsession with "making the diagnosis", we would be tempted to use T2 in preference to T1, (the TPF is
after all, 95% for T2 and only 60% for T1), but surely this depends on the disease? If the consequences of missing
the disease are relatively minor, and the costs of work-up of the false positives are going to be enormous, we might
just conceivably favour T1.
Now, let's drop the prevalence of the disease to just ten in a thousand, that is P(D+) = 1%. Note that the TPF and
TNF ( or sensitivity and specificity, if you prefer) are of course the same, but the positive predictive and negative
predictive values have altered substantially.
Test performance: T1 (n=1000, prevalence 1%)

         D+      D-
  T+      6     5.5
  T-      4   984.5

PPV = 52.2%   NPV = 99.6%

Test performance: T2 (n=1000, prevalence 1%)

         D+      D-
  T+    9.5      44
  T-    0.5     946

PPV = 17.8%   NPV = 99.9%
(Okay, you might wish to round off the "fractional people")! See how the PPV and NPV have changed for both tests.
Now, almost five out of every six patients reported "positive" according to test T2, will in fact be false positives.
Makes you think, doesn't it?
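The arithmetic behind these tables can be sketched in a few lines; the T2 figures (sensitivity 95%, specificity 860/900) are taken from the example above:

```python
def predictive_values(sens, spec, prevalence, n=1000):
    """Build a 2x2 table and derive PPV and NPV for a given prevalence."""
    diseased = n * prevalence
    healthy = n - diseased
    tp = sens * diseased          # true positives
    fn = diseased - tp            # false negatives
    tn = spec * healthy           # true negatives
    fp = healthy - tn             # false positives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return tp, fp, fn, tn, ppv, npv

# T2 (sensitivity 95%, specificity 860/900) at a prevalence of 1%:
tp, fp, fn, tn, ppv, npv = predictive_values(0.95, 860 / 900, 0.01)
```

Rerunning the same function at a prevalence of 10% reproduces the first pair of tables; only the prevalence argument changes.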
Another example
Now let's consider a test which is 99% sensitive and 99% specific for the diagnosis of say, Human
Immunodeficiency Virus infection. Let's look at how such a test would perform in two populations, one where the
prevalence of HIV infection is 0.1%, another where the prevalence is 30%. Let's sample 10 000 cases:
Test performance: Population A (n=10 000, prevalence 1/1000)

         D+      D-
  T+     10     100
  T-      0    9890

PPV = 9.1%   NPV = almost 100%

Test performance: Population B (n=10 000, prevalence 30%)

         D+      D-
  T+   2970      70
  T-     30    6930

PPV = 97.7%   NPV = 99.5%
If the disease is rare, use of even a very specific test will be associated with many false positives (and all that this
entails, especially for a problem like HIV infection); conversely, if the disease is common, a positive test is likely to
be a true positive. (This should really be common sense, shouldn't it?)
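This prevalence effect follows directly from Bayes' theorem. A minimal sketch for the 99%-sensitive, 99%-specific test above:

```python
def ppv(sens, spec, prev):
    """Positive predictive value via Bayes' theorem."""
    return (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))

# The 99% sensitive, 99% specific test at two prevalences:
low = ppv(0.99, 0.99, 0.001)   # rare disease: most positives are false
high = ppv(0.99, 0.99, 0.30)   # common disease: positives mostly true
```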
You can see from the above that it's rather silly to have a fixed test threshold. We've already played around with our
applet where we varied the test threshold, and watched how the TPF/FPF coordinates moved along the ROC curve.
The (quite literally) million dollar question is "Where do we set the threshold"?
2. Deciding on a test threshold
The reason why we choose to plot FPF against TPF when we make our ROC is that all the information is
contained in the relationship between just these two values, and it's awfully convenient to think of, in the words of
Swets, "hits" and "false alarms" (in other words, TPF and FPF). We can limit the false alarms, but at the expense of
fewer "hits". What dictates where we should put our cutoff point for diagnosing a disease? The answer is not simple,
because we have many possible criteria on which to base a decision. These include:
Financial costs both direct and indirect of treating a disease (present or not), and of
failing to treat a disease;
Costs of further investigation (where deemed appropriate);
Soon we will explore the mildly complex maths involved, but first let's use a little common sense. It would seem
logical that if the cost of missing a diagnosis is great, and treatment (even inappropriate treatment of a normal
person) is safe, then one should move to a point on the right of the ROC, where we have a high TPF (most of the
true positives will be treated) at the cost of many false positives. Conversely, if the risks of therapy are grave, and
therapy doesn't help much anyway, we should position our point far to the left, where we'll miss a substantial number
of positives (low TPF) but not harm many unaffected people (low FPF)!
More formally, we can express the average cost resulting from the use of a diagnostic test as:
Cavg = Co + CTP*P(TP) + CTN*P(TN) + CFP*P(FP) + CFN*P(FN)
where Cavg is the average cost, CTP is the cost associated with management of true positives, and so on. Co is the
"overhead cost" of actually doing the test. Now, we can work out that the probability of a true positive P(TP) is
given by:
P(TP) = P(D+) * P(T+|D+)
= P(D+) * TPF
In other words, P(TP) is given by the product of the prevalence of the disease in the population, P(D+), multiplied
by the true positive fraction, for the test. We can similarly substitute for the three other probabilities in the equation,
to get:
Cavg
= Co + CTP*P(D+)*P(T+|D+) + CTN*P(D-)*P(T-|D-)
+ CFP*P(D-)*P(T+|D-) + CFN*P(D+)*P(T-|D+)
= Co + CTP*P(D+)*TPF + CTN*P(D-)*TNF
+ CFP*P(D-)*FPF + CFN*P(D+)*FNF
= Co + CTP*P(D+)*TPF + CTN*P(D-)*(1-FPF)
+ CFP*P(D-)*FPF + CFN*P(D+)*(1-TPF)
and, rearranging:
Cavg
= Co + CTN*P(D-) + CFN*P(D+)
+ TPF*P(D+)*(CTP - CFN) + FPF*P(D-)*(CFP - CTN)
As Metz has pointed out, even if a diagnostic test improves decision-making, it may still increase overall costs if Co
is great. Of even more interest is the dependence of Cavg on TPF and FPF - the coordinates on an ROC curve! Thus
average cost depends on the test threshold defined on an ROC curve, and varying this threshold will vary costs. The
best cost performance is achieved when Cavg is minimised. We know from elementary calculus that this cost will be
minimal when the derivative of the cost equation is zero. Now, because we can express TPF as a function of FPF
using the curve of the ROC (TPF = ROC(FPF)), setting dCavg/dFPF = 0 yields the slope of the ROC curve at the
cost-optimal threshold:
dROC/dFPF = [P(D-)/P(D+)] * [(CFP - CTN)/(CFN - CTP)]
In other words, we have found a differential equation that gives us the slope of the ROC curve at the point where
costs are optimal. Now let's look at a few circumstances:
Where the disease is rare, P(D-)/P(D+) will be enormous, and so we should shift our
test threshold down to the lower left part of the ROC curve, where dROC/dFPF, the
slope of the curve, is large. This fits in with our previous simple analysis, where with
uncommon diseases, we found that false positives are a very bad thing. We must
minimise our false positives, even at the expense of missing true positives!
Conversely, with a common disease, we move our threshold to a lower, more lenient
level, (and our position on the ROC curve necessarily moves right). Otherwise, most
of our negatives are false negatives!
Also notice that the curve slope is great if the cost difference is far greater for CFP - CTN than for CFN - CTP. Let's consider a practical scenario - assume for a particular
disease (say a brain tumour) that if you get a positive test, you have to open up the
patient's skull and cut into the brain to find the presumed cancer. If you have a
negative, you do nothing. Let's also assume that the operation doesn't help those
who have the cancer - many die, regardless. Then the cost of a false positive
(operating on the brains of normal individuals!) is indeed far greater than the cost of
a true negative (doing nothing), and the cost of a false negative (not doing an
operation that doesn't help a lot) is similar to the cost of a true positive (doing the
rather unhelpful operation). The curve slope is steep, so we move our test threshold
down on the left of the ROC curve.
The opposite is where the consequences of a false positive are minimal, and there is
great benefit if you treat sufferers from the disease. Here, you must move up and to
the right on the ROC curve.
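The cost reasoning above can be sketched numerically. The cost figures below are invented for illustration; `best_operating_point` simply picks the point on an empirical ROC whose incoming segment's slope is closest to the target:

```python
def optimal_slope(p_disease, c_fp, c_tn, c_fn, c_tp):
    """Slope of the ROC curve at the cost-minimising threshold (Metz):
    [P(D-)/P(D+)] * [(CFP - CTN)/(CFN - CTP)]."""
    return ((1 - p_disease) / p_disease) * ((c_fp - c_tn) / (c_fn - c_tp))

def best_operating_point(points, target_slope):
    """Pick the (FPF, TPF) point on an empirical ROC curve whose incoming
    segment's slope is closest to the target slope."""
    best, best_gap = None, float("inf")
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        slope = (y1 - y0) / (x1 - x0) if x1 > x0 else float("inf")
        gap = abs(slope - target_slope)
        if gap < best_gap:
            best_gap, best = gap, (x1, y1)
    return best

# Rare disease (1%) where a false positive costs ten times a false negative:
# the target slope is steep, pushing us to the lower left of the curve.
target = optimal_slope(0.01, c_fp=10.0, c_tn=0.0, c_fn=1.0, c_tp=0.0)
roc_points = [(0.0, 0.0), (0.1, 0.6), (0.3, 0.8), (1.0, 1.0)]
chosen = best_operating_point(roc_points, target)
```

With these invented costs the target slope is very steep, so the chosen operating point sits at the far lower left of the curve, exactly as the first bullet above predicts.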
The ROC curve can also be generated by plotting the cumulative distribution function (the area under the probability
distribution from -∞ up to the discrimination threshold) of the detection probability on the y-axis versus the
cumulative distribution function of the false-alarm probability on the x-axis.
ROC analysis provides tools to select possibly optimal models and to discard suboptimal ones independently from
(and prior to specifying) the cost context or the class distribution. ROC analysis is related in a direct and natural way
to cost/benefit analysis of diagnostic decision making.
The ROC curve was first developed by electrical engineers and radar engineers during World War II for detecting
enemy objects in battlefields and was soon introduced to psychology to account for perceptual detection of stimuli.
ROC analysis since then has been used in medicine, radiology, biometrics, and other areas for many decades and is
increasingly used in machine learning and data mining research.
The ROC is also known as a relative operating characteristic curve, because it is a comparison of two operating
characteristics (TPR and FPR) as the criterion changes.[1]
Related summary measures include accuracy (ACC), the F1 score (the harmonic mean of precision and sensitivity),
and the Matthews correlation coefficient (MCC).
The standard terminology (with the condition as determined by the "gold standard"):

  Prevalence = Condition positive / Total population
  True positive rate (TPR, Sensitivity, Recall) = True positive / Condition positive
  False negative rate (FNR) = False negative / Condition positive
  False positive rate (FPR, Fall-out) = False positive / Condition negative
  True negative rate (TNR, Specificity, SPC) = True negative / Condition negative
  Positive predictive value (PPV, Precision) = True positive / Test outcome positive
  False discovery rate (FDR) = False positive / Test outcome positive
  Negative predictive value (NPV) = True negative / Test outcome negative
  False omission rate (FOR) = False negative / Test outcome negative
  Accuracy (ACC) = (True positive + True negative) / Total population
  Positive likelihood ratio (LR+) = TPR / FPR
  Negative likelihood ratio (LR-) = FNR / TNR
  Diagnostic odds ratio (DOR) = LR+ / LR-

A false positive is a Type I error; a false negative is a Type II error.
ROC space
(Figure: the ROC space and plots of the four prediction examples.)
The contingency table can derive several evaluation "metrics" (see infobox). To draw an ROC curve, only the true
positive rate (TPR) and false positive rate (FPR) are needed (as functions of some classifier parameter). The TPR
defines how many correct positive results occur among all positive samples available during the test. FPR, on the
other hand, defines how many incorrect positive results occur among all negative samples available during the test.
A ROC space is defined by FPR and TPR as x and y axes respectively, which depicts relative trade-offs between true
positive (benefits) and false positive (costs). Since TPR is equivalent to sensitivity and FPR is equal to 1 -
specificity, the ROC graph is sometimes called the sensitivity vs (1 - specificity) plot. Each prediction result or
instance of a confusion matrix represents one point in the ROC space.
The best possible prediction method would yield a point in the upper left corner or coordinate (0,1) of the ROC
space, representing 100% sensitivity (no false negatives) and 100% specificity (no false positives). The (0,1) point is
also called a perfect classification. A completely random guess would give a point along a diagonal line (the so-called line of no-discrimination) from the bottom left to the top right corner (regardless of the positive and negative
base rates). An intuitive example of random guessing is a decision by flipping coins (heads or tails). As the size of
the sample increases, a random classifier's ROC point migrates towards (0.5,0.5).
The diagonal divides the ROC space. Points above the diagonal represent good classification results (better than
random), points below the line poor results (worse than random). Note that the output of a consistently poor
predictor could simply be inverted to obtain a good predictor.
Let us look into four prediction results from 100 positive and 100 negative instances:

Method A:  TP = 63, FP = 28, FN = 37, TN = 72
           TPR = 0.63, FPR = 0.28, PPV = 0.69, F1 = 0.66, ACC = 0.68
Method B:  TP = 77, FP = 77, FN = 23, TN = 23
           TPR = 0.77, FPR = 0.77, PPV = 0.50, F1 = 0.61, ACC = 0.50
Method C:  TP = 24, FP = 88, FN = 76, TN = 12
           TPR = 0.24, FPR = 0.88, PPV = 0.21, F1 = 0.22, ACC = 0.18
Method C′: TP = 76, FP = 12, FN = 24, TN = 88
           TPR = 0.76, FPR = 0.12, PPV = 0.86, F1 = 0.81, ACC = 0.82

(Each method was evaluated on the same 200 instances: 100 condition positive and 100 condition negative.)
Plots of the four results above in the ROC space are given in the figure. The result of method A clearly shows the
best predictive power among A, B, and C. The result of B lies on the random guess line (the diagonal line), and it
can be seen in the table that the accuracy of B is 50%. However, when C is mirrored across the center point
(0.5,0.5), the resulting method C is even better than A. This mirrored method simply reverses the predictions of
whatever method or test produced the C contingency table. Although the original C method has negative predictive
power, simply reversing its decisions leads to a new predictive method C which has positive predictive power.
When the C method predicts p or n, the C method would predict n or p, respectively. In this manner, the C test
would perform the best. The closer a result from a contingency table is to the upper left corner, the better it predicts,
but the distance from the random guess line in either direction is the best indicator of how much predictive power a
method has. If the result is below the line (i.e. the method is worse than a random guess), all of the method's
predictions must be reversed in order to utilize its power, thereby moving the result above the random guess line.
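A sketch recomputing the quoted metrics from the four sets of counts (TP, FP, FN, TN); the fourth, labelled C' here, is C with its predictions reversed:

```python
def metrics(tp, fp, fn, tn):
    """Evaluation measures for one contingency table."""
    return {
        "TPR": tp / (tp + fn),                 # sensitivity / recall
        "FPR": fp / (fp + tn),                 # fall-out
        "PPV": tp / (tp + fp),                 # precision
        "F1":  2 * tp / (2 * tp + fp + fn),    # harmonic mean of PPV and TPR
        "ACC": (tp + tn) / (tp + fp + fn + tn),
    }

tables = {"A": (63, 28, 37, 72), "B": (77, 77, 23, 23),
          "C": (24, 88, 76, 12), "C'": (76, 12, 24, 88)}
for name, counts in tables.items():
    print(name, metrics(*counts))
```

Note that B's accuracy comes out exactly 0.5 (the random-guess line), and C' dominates all the others on every measure.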
Classifications are often based on a continuous random variable. Write the probability of belonging to the class as a
function of a decision/threshold parameter T. The ROC curve then plots TPR(T) versus FPR(T) parametrically, with T as
the varying parameter.
For example, imagine that the blood protein levels in diseased people and healthy people are normally distributed
with means of 2 g/dL and 1 g/dL respectively. A medical test might measure the level of a certain protein in a blood
sample and classify any number above a certain threshold as indicating disease. The experimenter can adjust the
threshold (black vertical line in the figure), which will in turn change the false positive rate. Increasing the threshold
would result in fewer false positives (and more false negatives), corresponding to a leftward movement on the curve.
The actual shape of the curve is determined by how much overlap the two distributions have.
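The blood-protein example can be sketched directly: with the equal-variance normal distributions described above, each threshold T yields one (FPR, TPR) operating point via the normal CDF (means and unit variance are the illustrative values from the text):

```python
import math

def normal_cdf(x, mu, sigma=1.0):
    """CDF of N(mu, sigma^2) via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Diseased protein levels ~ N(2, 1), healthy ~ N(1, 1); flag "disease" when level > T.
def roc_operating_point(T, mu_pos=2.0, mu_neg=1.0):
    tpr = 1 - normal_cdf(T, mu_pos)  # diseased people correctly flagged
    fpr = 1 - normal_cdf(T, mu_neg)  # healthy people incorrectly flagged
    return fpr, tpr

# Raising T trades true positives for fewer false positives (a leftward move):
for T in (0.5, 1.5, 2.5):
    fpr, tpr = roc_operating_point(T)
    print(f"T={T}: FPR={fpr:.3f}, TPR={tpr:.3f}")
```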
Further interpretations
Sometimes, the ROC is used to generate a summary statistic. Common versions are:
the intercept of the ROC curve with the line at 90 degrees to the no-discrimination line (also called
Youden's J statistic)
the area between the ROC curve and the no-discrimination line[citation needed]
the area under the ROC curve, or "AUC" ("Area Under Curve"), or A' (pronounced "a-prime"),[3] or "c-statistic".[4]
d' (pronounced "d-prime"), the distance between the mean of the distribution of activity in the system under
noise-alone conditions and its distribution under signal-alone conditions, divided by their standard
deviation, under the assumption that both these distributions are normal with the same standard deviation.
Under these assumptions, it can be proved that the shape of the ROC depends only on d'.
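Under the equal-variance normal assumption just stated, a standard result for this binormal model is that the area under the curve equals Φ(d′/√2); a quick sketch checking this against the rank interpretation of the AUC by simulation:

```python
import math
import random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

mu_signal, mu_noise, sigma = 2.0, 1.0, 1.0
d_prime = (mu_signal - mu_noise) / sigma          # separation in sd units
auc_theory = phi(d_prime / math.sqrt(2))          # ~0.760 for d' = 1

# Monte Carlo estimate of P(signal score > noise score), the rank view of AUC:
random.seed(0)
n = 100_000
wins = sum(random.gauss(mu_signal, sigma) > random.gauss(mu_noise, sigma)
           for _ in range(n))
auc_mc = wins / n
print(auc_theory, auc_mc)
```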
However, any attempt to summarize the ROC curve into a single number loses information about the pattern of
tradeoffs of the particular discriminator algorithm.
Area under the curve
When using normalized units, the area under the curve (often referred to as simply the AUC, or AUROC) is equal to
the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen
negative one (assuming 'positive' ranks higher than 'negative').[5] This can be seen as follows: the area under the
curve is given by (the integral boundaries are reversed, as large T has a lower value on the x-axis)
A = ∫ from T = ∞ to T = −∞ of TPR(T) FPR′(T) dT = P(X₁ > X₀),
where X₁ is the classifier's score for a randomly chosen positive instance and X₀ its score for a randomly chosen negative instance.
It can further be shown that the AUC is closely related to the Mann–Whitney U,[6][7] which tests whether positives
are ranked higher than negatives. It is also equivalent to the Wilcoxon test of ranks.[7] The AUC is related to the Gini
coefficient (G₁) by the formula G₁ = 2·AUC − 1, where
G₁ = 1 − Σₖ (Xₖ − Xₖ₋₁)(Yₖ + Yₖ₋₁).[8]
In this way, it is possible to calculate the AUC by using an average of a number of trapezoidal approximations.
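A sketch of both routes to the AUC, the trapezoidal approximation over an empirical curve and the equivalent Mann–Whitney rank statistic (the classifier scores below are made up for illustration):

```python
def auc_trapezoid(points):
    """Area under a piecewise-linear curve through sorted (fpr, tpr) points."""
    pts = sorted(points)
    return sum((x1 - x0) * (y0 + y1) / 2          # one trapezoid per segment
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))

def auc_rank(pos_scores, neg_scores):
    """Mann-Whitney form: fraction of (positive, negative) pairs ordered correctly."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Made-up scores for three positive and three negative instances:
pos, neg = [0.9, 0.8, 0.4], [0.7, 0.3, 0.2]

# Trace the empirical ROC by thresholding at every observed score:
thresholds = sorted(set(pos + neg), reverse=True)
points = [(0.0, 0.0), (1.0, 1.0)] + [
    (sum(n >= t for n in neg) / len(neg), sum(p >= t for p in pos) / len(pos))
    for t in thresholds]
print(auc_trapezoid(points), auc_rank(pos, neg))  # both come out at about 8/9
```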
It is also common to calculate the Area Under the ROC Convex Hull (ROC AUCH = ROCH AUC) as any point on
the line segment between two prediction results can be achieved by randomly using one or other system with
probabilities proportional to the relative length of the opposite component of the segment. [9] Interestingly, it is also
possible to invert concavities just as in the figure the worse solution can be reflected to become a better solution;
concavities can be reflected in any line segment, but this more extreme form of fusion is much more likely to overfit
the data.[10]
The machine learning community most often uses the ROC AUC statistic for model comparison. [11] However, this
practice has recently been questioned based upon new machine learning research that shows that the AUC is quite
noisy as a classification measure[12] and has some other significant problems in model comparison.[13][14] A reliable
and valid AUC estimate can be interpreted as the probability that the classifier will assign a higher score to a
randomly chosen positive example than to a randomly chosen negative example. However, the critical research [12][13]
suggests frequent failures in obtaining reliable and valid AUC estimates. Thus, the practical value of the AUC
measure has been called into question,[14] raising the possibility that the AUC may actually introduce more
uncertainty into machine learning classification accuracy comparisons than resolution. Nonetheless, the coherence
of AUC as a measure of aggregated classification performance has been vindicated, in terms of a uniform rate
distribution,[15] and AUC has been linked to a number of other performance metrics such as the Brier score.[16]
One recent explanation of the problem with ROC AUC is that reducing the ROC Curve to a single number ignores
the fact that it is about the tradeoffs between the different systems or performance points plotted and not the
performance of an individual system, as well as ignoring the possibility of concavity repair, so that related
alternative measures such as Informedness [17] or DeltaP are recommended.[18] These measures are essentially
equivalent to the Gini for a single prediction point with DeltaP' = Informedness = 2AUC-1, whilst DeltaP =
Markedness represents the dual (viz. predicting the prediction from the real class) and their geometric mean is the
Matthews correlation coefficient.[17]
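These relations can be checked numerically; a sketch using the single-point definitions (Informedness = TPR − FPR, Markedness = PPV + NPV − 1) and the geometric-mean identity for a positively-performing classifier:

```python
import math

def informedness(tp, fp, fn, tn):
    """TPR - FPR: also Youden's J, and 2*AUC - 1 for a single operating point."""
    return tp / (tp + fn) - fp / (fp + tn)

def markedness(tp, fp, fn, tn):
    """PPV + NPV - 1: the dual, predicting the real class from the prediction."""
    return tp / (tp + fp) + tn / (tn + fn) - 1

def mcc(tp, fp, fn, tn):
    """Matthews correlation coefficient."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den

cm = (63, 28, 37, 72)  # illustrative counts used earlier in this section
print(informedness(*cm), markedness(*cm))
print(mcc(*cm), math.sqrt(informedness(*cm) * markedness(*cm)))  # geometric mean
```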
Other measures
In engineering, the area between the ROC curve and the no-discrimination line is sometimes preferred (equivalent to
subtracting 0.5 from the AUC), and referred to as the discrimination.[citation needed] In psychophysics, the Sensitivity
Index d' (d-prime), P' or DeltaP' is the most commonly used measure[19] and is equivalent to twice the
discrimination, being equal also to Informedness, deskewed WRAcc and Gini Coefficient in the single point case
(single parameterization or single system). [17] These measures all have the advantage that 0 represents chance
performance whilst 1 represents perfect performance, and -1 represents the "perverse" case of full informedness
used to always give the wrong response.[20]
These varying choices of scale are fairly arbitrary since chance performance always has a fixed value: for AUC it is
0.5, but these alternative scales bring chance performance to 0 and allow them to be interpreted as Kappa statistics.
Informedness has been shown to have desirable characteristics for Machine Learning versus other common
definitions of Kappa such as Cohen Kappa and Fleiss Kappa.[17][21]
Sometimes it can be more useful to look at a specific region of the ROC Curve rather than at the whole curve. It is
possible to compute partial AUC.[22] For example, one could focus on the region of the curve with low false positive
rate, which is often of prime interest for population screening tests.[23] Another common approach for classification
problems in which P ≪ N (common in bioinformatics applications) is to use a logarithmic scale for the x-axis.[24]
This difference in shape and slope results from an added element of variability due to some items being recollected.
Patients with anterograde amnesia are unable to recollect, so their Yonelinas zROC curve would have a slope close
to 1.0.[29]
History
The ROC curve was first used during World War II for the analysis of radar signals before it was employed in signal
detection theory.[30] Following the attack on Pearl Harbor in 1941, the United States army began new research to
increase the prediction of correctly detected Japanese aircraft from their radar signals.
In the 1950s, ROC curves were employed in psychophysics to assess human (and occasionally non-human animal)
detection of weak signals.[30] In medicine, ROC analysis has been extensively used in the evaluation of diagnostic
tests.[31][32] ROC curves are also used extensively in epidemiology and medical research and are frequently
mentioned in conjunction with evidence-based medicine. In radiology, ROC analysis is a common technique to
evaluate new radiology techniques.[33] In the social sciences, ROC analysis is often called the ROC Accuracy Ratio,
a common technique for judging the accuracy of default probability models.
ROC curves also proved useful for the evaluation of machine learning techniques. The first application of ROC in
machine learning was by Spackman who demonstrated the value of ROC curves in comparing and evaluating
different classification algorithms.[34]
ROC curves beyond binary classification
The extension of ROC curves for classification problems with more than two classes has always been cumbersome,
as the degrees of freedom increase quadratically with the number of classes, and the ROC space has c(c − 1)
dimensions, where c is the number of classes.[35] Some approaches have been made for the particular case with three
classes (three-way ROC).[36] The calculation of the volume under the ROC surface (VUS) has been analyzed and
studied as a performance metric for multi-class problems.[37] However, because of the complexity of approximating
the true VUS, some other approaches [38] based on an extension of AUC are more popular as an evaluation metric.
Given the success of ROC curves for the assessment of classification models, the extension of ROC curves for other
supervised tasks has also been investigated. Notable proposals for regression problems are the so-called regression
error characteristic (REC) Curves [39] and the Regression ROC (RROC) curves.[40] In the latter, RROC curves
become extremely similar to ROC curves for classification, with the notions of asymmetry, dominance and convex
hull. Also, the area under RROC curves is proportional to the error variance of the regression model.
ROC curve is related to the lift and uplift curves,[41][42] which are used in uplift modelling. The ROC curve itself has
also been used as the optimization metric in uplift modeling. [43][44]
False positive paradox
The false positive paradox is a statistical result where false positive tests are more probable than true positive tests,
occurring when the overall population has a low incidence of a condition and the incidence rate is lower than the
false positive rate. The probability of a positive test result is determined not only by the accuracy of the test but by
the characteristics of the sampled population.[1] When the incidence, the proportion of those who have a given
condition, is lower than the test's false positive rate, even tests that have a very low chance of giving a false positive
in an individual case will give more false than true positives overall.[2] So, in a society with very few infected people
(proportionately fewer than the test gives false positives), there will actually be more who test positive for a
disease incorrectly and don't have it than those who test positive accurately and do. The paradox has surprised many.[3]
It is especially counter-intuitive when interpreting a positive result in a test on a low-incidence population after
having dealt with positive results drawn from a high-incidence population.[2] If the false positive rate of the test is
higher than the proportion of the new population with the condition, then a test administrator whose experience has
been drawn from testing in a high-incidence population may conclude from experience that a positive test result
usually indicates a positive subject, when in fact a false positive is far more likely to have occurred.
Not adjusting to the scarcity of the condition in the new population, and concluding that a positive test result
probably indicates a positive subject, even though population incidence is below the false positive rate is a "base rate
fallacy".
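The arithmetic behind the paradox is a direct application of Bayes' rule; a sketch with made-up numbers (a test with a 5% false positive rate and, for simplicity, perfect sensitivity):

```python
def positive_predictive_value(prevalence, sensitivity, fpr):
    """P(actually positive | test positive), by Bayes' rule."""
    tp = prevalence * sensitivity          # mass of true positives
    fp = (1 - prevalence) * fpr            # mass of false positives
    return tp / (tp + fp)

# Illustrative numbers only: the same test applied to two populations.
low  = positive_predictive_value(prevalence=0.002, sensitivity=1.0, fpr=0.05)
high = positive_predictive_value(prevalence=0.20,  sensitivity=1.0, fpr=0.05)
print(f"{low:.0%} of positives are real at 0.2% prevalence, {high:.0%} at 20%")
```

At 0.2% prevalence only about 1 in 25 positive results is genuine, while at 20% prevalence the same test is right more than four times out of five, which is exactly the experience shift that trips up the test administrator.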
Type I error: "rejecting the null hypothesis when it is true".
Type II error: "accepting the null hypothesis when it is false".
Type III error: "correctly rejecting the null hypothesis for the wrong reason".
Type III error
In statistical hypothesis testing, there are various notions of so-called type III errors (or errors of the third kind),
and sometimes type IV errors or higher, by analogy with the type I and type II errors of Jerzy Neyman and Egon
Pearson. Fundamentally, Type III errors occur when researchers provide the right answer to the wrong question.
Since the paired notions of type I errors (or "false positives") and type II errors (or "false negatives") that were
introduced by Neyman and Pearson are now widely used, their choice of terminology ("errors of the first kind"
and "errors of the second kind"), has led others to suppose that certain sorts of mistake that they have identified
might be an "error of the third kind", "fourth kind", etc.
Recall the T4 data from the previous section. The area under the T4 ROC curve is .86. The T4 would be considered
to be "good" at separating hypothyroid from euthyroid patients.
Relative risk reduction measures how much the risk is reduced in the experimental group compared to a control group. For
example, if 60% of the control group died and 30% of the treated group died, the treatment would have a relative
risk reduction of 0.5 or 50% (the rate of death in the treated group is half of that in the control group).
The formula for computing relative risk reduction is: (CER - EER)/CER. CER is the control group event rate and
EER is the experimental group event rate. Using the DCCT data, this would work out to (0.096 - 0.028)/0.096 =
0.71 or 71%. This means that neuropathy was reduced by 71% in the intensive treatment group compared with the
usual care group.
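The computation is straightforward; a sketch using the DCCT rates quoted above:

```python
def relative_risk_reduction(cer, eer):
    """(CER - EER) / CER: proportional drop in event rate versus control."""
    return (cer - eer) / cer

# DCCT neuropathy rates from the text: 9.6% usual care, 2.8% intensive therapy.
rrr = relative_risk_reduction(cer=0.096, eer=0.028)
print(f"RRR = {rrr:.2f}")  # -> RRR = 0.71
```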
One problem with the relative risk measure is that without knowing the level of risk in the control group, one cannot
assess the effect size in the treatment group. Treatments with very large relative risk reductions may have a small
effect in conditions where the control group has a very low bad outcome rate. On the other hand, modest relative
risk reductions can assume major clinical importance if the baseline (control) rate of bad outcomes is large.
Absolute risk reduction
Absolute risk reduction is just the absolute difference in outcome rates between the control and treatment groups:
CER - EER. The absolute risk reduction does not involve an explicit comparison to the control group as in the
relative risk reduction and thus does not confound the effect size with the baseline risk. However, it is a less intuitive
measure to interpret.
For the DCCT data, the absolute risk reduction for neuropathy would be (0.096 - 0.028) = 0.068 or 6.8%. This
means that for every 100 patients enrolled in the intensive treatment group, about seven bad outcomes would be
averted.
Number needed to treat
The number needed to treat is basically another way to express the absolute risk reduction. It is just 1/ARR and can
be thought of as the number of patients that would need to be treated to prevent one additional bad outcome. For the
DCCT data, NNT = 1/.068 = 14.7. Thus, for every 15 patients treated with intensive therapy, one case of neuropathy
would be prevented.
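Both quantities follow from the same two event rates; a sketch reproducing the DCCT figures above:

```python
def absolute_risk_reduction(cer, eer):
    """CER - EER: absolute difference in outcome rates."""
    return cer - eer

def number_needed_to_treat(cer, eer):
    """Patients to treat to avert one additional bad outcome: 1/ARR."""
    return 1 / absolute_risk_reduction(cer, eer)

# DCCT rates again: 9.6% neuropathy with usual care, 2.8% with intensive therapy.
arr = absolute_risk_reduction(0.096, 0.028)   # ~0.068, i.e. 6.8 per 100 patients
nnt = number_needed_to_treat(0.096, 0.028)    # ~14.7, so treat ~15 per case averted
print(round(arr, 3), round(nnt, 1))
```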
The NNT concept has been gaining in popularity because of its simplicity to compute and its ease of interpretation.
NNT data are especially useful in comparing the results of multiple clinical trials in which the relative effectiveness
of the treatments is readily apparent. For example, the NNT to prevent stroke by treating patients with very high
blood pressures (DBP 115-129) is only 3 but rises to 128 for patients with less severe hypertension (DBP 90-109).
History
The technical strategy for DOTS was developed by Dr. Karel Styblo of the International Union
Against TB & Lung Disease in the 1970s and 80s, primarily in Tanzania, but also in Malawi,
Nicaragua and Mozambique. Styblo refined a treatment system of checks and balances that
provided high cure rates at a cost affordable for most developing countries. This increased the
proportion of people cured of TB from 40% to nearly 80%, costing up to $10 per life saved and
$3 per new infection avoided.[3]
In 1989, WHO and the World Bank began investigating the potential expansion of this strategy.
In July 1990, the World Bank, under Richard Bumgarner's direction, invited Dr. Styblo and
WHO to design a TB control project for China. By the end of 1991, this pilot project was
achieving phenomenal results, more than doubling cure rates among TB patients. China soon
extended this project to cover half the country.[4]
During the early 1990s, WHO determined that of the nearly 700 different tasks involved in Dr.
Styblo's meticulous system, only 100 of them were essential to run an effective TB control
program. From this, WHO's relatively small TB Unit at that time, led by Dr. Arata Kochi,
developed an even more concise "Framework for TB Control" focusing on five main elements
and nine key operations. The initial emphasis was on "DOT", or directly observed therapy, using a
specific combination of TB medicines known as short-course chemotherapy, as one of the five
essential elements for controlling TB.[5] In 1993, the World Bank's World Development Report
claimed that the TB control strategies used in DOTS were one of the most cost-effective public
health investments.[6]
In the Fall of 1994, Kraig Klaudt, WHO's TB Advocacy Officer, developed the name and
concept for a marketing strategy to brand this complex public health intervention. To help market
"DOTS" to global and national decision makers, turning the word "dots" upside down to spell
"stop," proved a memorable shorthand that promoted "Stop TB. Use Dots!"[7][8]
According to POZ Magazine, "You know the worldwide epidemic of TB is entering a critical
stage when the cash-strapped World Health Organization spends a fortune on glossy paper,
morbid photos and an interactive, spinning (!) cover for its 1995 TB report."[9] India's Joint Effort
to Eradicate TB NGO observed that "DOTS became a clarion call for TB control programmes
around the world. Because of its novelty, this health intervention quickly captured the attention
of even those outside of the international health community."[7]
The DOTS report was released to the public on March 20, 1995, at New York City's Health
Department. At the news conference, Dr. Thomas Frieden, head of the city's Bureau of TB
Control, captured the essence of DOTS: "TB control is basically a management problem."
Frieden had been credited for using the strategy to turn around New York City's TB outbreak a
few years earlier.[10][11]
On March 19, 1997 at the Robert Koch Institute in Berlin, Germany, WHO announced that
DOTS was "the biggest health breakthrough of the decade". According to WHO Director-General
Dr. Hiroshi Nakajima, "We anticipate that at least 10 million deaths from TB will be
prevented in the next ten years with the introduction and extensive use of the DOTS strategy."[12][13]
Upon Nakajima's death in 2013, WHO recognized that the promotion of DOTS was one of
WHO's most successful programs developed during his ten-year administration.[14]
Impact
There has been a steady global uptake of DOTS TB control services over the subsequent
decades. Whereas less than 2% of infectious TB patients were being detected and cured with
DOTS treatment services in 1990, approximately 60% are now benefiting from this care. Since
1995, 41 million people have been successfully treated and up to 6 million lives saved through
DOTS and the Stop TB Strategy. 5.8 million TB cases were notified through DOTS programs in
2009.[15]
A systematic review of randomized clinical trials found no difference in cure rates or
treatment completion rates between DOTS and self-administered drug therapy.[16] A 2013 meta-analysis
of both clinical trials and observational studies likewise found no difference between
DOTS and self-administered therapy.[17] However, the WHO and all other TB programs continue
to use DOTS as an important strategy for TB treatment delivery for fear of drug resistance.
DOTS-Plus is for multi-drug-resistant tuberculosis (MDR-TB).
Topics covered on this web site organized under the six components of the Stop TB Strategy
Laboratories
Gender and TB
HIV and TB
Multidrug-resistant TB (MDR-TB)
Poverty and TB
Prisons and TB
Refugees and TB
Tobacco and TB
TB research
The TB Research Movement
Ebola virus disease (formerly known as Ebola haemorrhagic fever) is a severe, often fatal illness,
with a case fatality rate of up to 90%. It is one of the world's most virulent diseases. The
infection is transmitted by direct contact with the blood, body fluids and tissues of infected
animals or people. Severely ill patients require intensive supportive care. During an outbreak,
those at higher risk of infection are health workers, family members and others in close contact
with sick people and deceased patients.
cases in peri-urban and rural settings makes this one of the most challenging Ebola outbreaks
ever.
Though risk of spread of this disease to countries outside Africa is currently assessed to be low,
there is an urgent need to strengthen national capacity for its early detection, prompt
management and rapid containment. WHO believes that countries with strong health systems can
quickly contain any imported cases using strict infection control measures.
While global focus is on Ebola, we must not forget that several other pathogens have threatened,
and shall continue to threaten, the world. Since the discovery of Ebola virus in 1976, more than 30 new
pathogens have been detected. SARS and influenza are two such pathogens which caused
pandemics in this millennium. Fortunately, both could be contained in a short period.
The International Health Regulations, IHR (2005), call upon countries to be transparent in
sharing information on diseases that may have the potential to move across countries to facilitate
an international response. IHR regulations also specify, among other capacities, surveillance,
response, laboratories, human resources, risk communication and preparedness for early detection
and prompt treatment.
The 2009 pandemic of influenza clearly demonstrated the importance of IHR (2005) as countries
shared information on disease spread in real time to enable the global community to mount a
coordinated response. Since the inception of IHR (2005), countries of WHO's South-East Asia
Region have been striving to strengthen their national capacities. Substantial progress has been
made. More work is yet to be done. Many countries have developed plans to achieve the desired
level of competence before June 2016. To supplement the national efforts and address the gaps,
WHO has established several networks of institutions of excellence and collaborating centres.
In the ongoing Ebola outbreak more than 100 WHO staff are deployed in the affected countries
to support national health authorities. Hundreds of global experts have also been mobilized. An
accelerated response is being implemented through a comprehensive plan in West Africa. WHO
has sought international financial aid of USD 101 million to effectively implement this plan.
No infectious disease can be controlled unless communities are informed and empowered to
protect themselves. Countries must provide accurate and relevant information to the public
including measures to reduce the risk of exposure.
Ebola virus spreads through contact with body fluids of the patient. Avoiding this contact
prevents transmission of infection. In communities and health care facilities, knowledge of
simple preventive measures including hand hygiene and standard infection control precautions
would be crucial to the national public health response.
WHO does not recommend imposing travel bans to or from the affected countries. A ban on
travel could have serious economic and social effects on these countries. A core principle of IHR
is the need to balance public health concerns without harming international travel and trade. The
risk of infection for travellers is very low since person-to-person transmission results only from
direct contact with the body fluids or secretions of an infected patient. People are infectious only
once they show symptoms. Sick people are advised not to travel and to seek medical advice
immediately if Ebola is suspected. All countries should be alert and have the capacity to manage
travellers from Ebola-infected areas who have unexplained febrile illness.
Preparedness, vigilance and community awareness will be crucial to success in our fight against
a complex public health emergency like Ebola. It will take effective national efforts to support an
internationally coordination response
from eradication;
through certification/containment;
The recent developments in polio eradication have led to some serious thinking, especially
regarding the choice of vaccines to be used in the polio endgame strategy to ensure effective
risk management in maintaining and sustaining the eradication. In moving forward with the
Polio end-game strategy, a number of issues need to be systematically addressed, such as
policy to support the implementation of the strategy, research and development required for
ensuring rational planning, assurance of continuous vaccine supply, ensuring operational
management efficiency, efficient surveillance and validation systems, and among other things,
for the post-polio eradication, an attempt needs to be made to integrate polio eradication into the
national immunization programme.
A high coverage of routine immunization is critically needed to ensure sustainability of polio
eradication in the long term, and for the national immunization services to be integrated into
general health services to ensure sustained, long-term immunization services in the most cost-efficient
manner. While focusing on such integration, an attempt should be made to ensure
continued effectiveness of AFP (acute flaccid paralysis) surveillance. All in all, attention should also be paid to
improvement in hygiene and sanitation in the community.
Ladies and gentlemen, a lot of efforts and resources have been put into the polio eradication
programme. Experiences in the development and management of this programme should be used
to further strengthen the national immunization programme. We should utilize funds received
from GAVI-HSS in a big way so as to strengthen the health system infrastructure that supports
the national immunization services. Also, such strengthening will help ensure the sustainability
of polio eradication in the long term.
5. maintain these measures until the following criteria have been met: (i) at least 6 months
have passed without new exportations and (ii) there is documentation of full application
of high quality eradication activities in all infected and high risk areas; in the absence of
such documentation these measures should be maintained until at least 12 months have
passed without new exportations.
"Once a State has met the criteria to be assessed as no longer exporting wild poliovirus, it should
continue to be considered as an infected State until such time as it has met the criteria to be
removed from that category," added the WHO statement.
Polio usually strikes children under five and is usually spread via infected water. There is no
specific treatment or cure, but several vaccines exist.
Experts are particularly concerned the virus continues to pop up in countries previously free of
the disease, such as Syria, Somalia and Iraq where civil war or unrest complicates efforts to
contain the virus.
Some critics say the rapid spread of polio could unravel the nearly three-decade effort to
eradicate it.
Leprosy
Leprosy is an infectious disease that causes severe, disfiguring skin sores and nerve damage in
the arms and legs. The disease has been around since ancient times, often surrounded by
terrifying, negative stigmas and tales of leprosy patients being shunned as outcasts. Outbreaks of
leprosy have affected, and panicked, people on every continent. The oldest civilizations of China,
Egypt, and India feared leprosy was an incurable, mutilating, and contagious disease.
Leprosy, also known as Hansen's disease (HD), is a chronic infection caused by the bacteria
Mycobacterium leprae[1] and Mycobacterium lepromatosis.[2] Initially infections are without
symptoms and typically remain this way for 5 to as long as 20 years. [1] Symptoms that develop
include granulomas of the nerves, respiratory tract, skin, and eyes.[1] This may result in a lack of
ability to feel pain and thus loss of parts of extremities due to repeated injuries.[3] Weakness and
poor eyesight may also be present.[3]
There are two main types of disease based on the number of bacteria present: paucibacillary and
multibacillary.[3] The two types are differentiated by the number of poorly pigmented numb skin
patches present, with paucibacillary having five or fewer and multibacillary having more than
five.[3] The diagnosis is confirmed by finding acid-fast bacilli in a biopsy of the skin or via
detecting the DNA by polymerase chain reaction.[3] It occurs more commonly among those living
in poverty and is believed to be transmitted by respiratory droplets.[3] It is not very contagious.[3]
Leprosy is curable with treatment.[1] Treatment for paucibacillary leprosy is with the medications
dapsone and rifampicin for 6 months.[3] Treatment for multibacillary leprosy consists of
rifampicin, dapsone, and clofazimine for 12 months.[3] These treatments are provided for free by
the World Health Organization.[1] A number of other antibiotics may also be used.[3] Globally in
2012 the number of chronic cases of leprosy was 189,000 and the number of new cases was
230,000.[1] The number of chronic cases has decreased from some 5.2 million in the 1980s.[1][4][5]
Most new cases occur in 16 countries, with India accounting for more than half.[1][3] In the past 20
years, 16 million people worldwide have been cured of leprosy.[1]
Leprosy has affected humanity for thousands of years.[3] The disease takes its name from the
Latin word lepra, which means "scaly", while the term "Hansen's disease" is named after the
physician Gerhard Armauer Hansen.[3] Separating people in leper colonies still occurs in
countries like India, where there are more than a thousand;[6] China, where there are hundreds;[7]
and in the continent of Africa.[8] However, most colonies have closed.[8] Leprosy has been
associated with social stigma for much of history,[1] which remains a barrier to self-reporting and
early treatment. World Leprosy Day was started in 1954 to draw awareness to those affected by
leprosy.
Prevention
Medications can decrease the risk that people who live with someone with leprosy will acquire
the disease, and likely also the risk for contacts outside the home.[54] There are, however,
concerns about resistance, cost, and disclosure of a person's infection status when following up
contacts. The WHO therefore recommends that people who live in the same household be
examined for leprosy and treated only if symptoms are present.[54]
The Bacillus Calmette–Guérin (BCG) vaccine offers a variable amount of protection against
leprosy in addition to tuberculosis.[55] It appears to be 26 to 41% effective (based on controlled
trials) and about 60% effective based on observational studies with two doses possibly working
better than one.[56][57] Development of a more effective vaccine is ongoing as of 2011.[54]
Treatment
A number of leprostatic agents are available for treatment. For paucibacillary (PB or tuberculoid)
cases, treatment with daily dapsone and monthly rifampicin for six months is recommended.[3]
For multibacillary (MB or lepromatous) cases, treatment with daily dapsone and clofazimine
along with monthly rifampicin for twelve months is recommended.[3]
Multi-drug therapy (MDT) remains highly effective, and people are no longer infectious after the
first monthly dose.[23] It is safe and easy to use under field conditions due to its presentation in
calendar blister packs.[23] Relapse rates remain low, and there is no known resistance to the
combined drugs.[23]
Survival rate
Overall survival
Patients with a certain disease (for example, colorectal cancer) can die directly from that disease
or from an unrelated cause (for example, a car accident). When the precise cause of death is not
specified, this is called the overall survival rate or observed survival rate. Doctors often use
mean overall survival rates to estimate the patient's prognosis. This is often expressed over
standard time periods, such as one, five, and ten years. For example, prostate cancer generally
has a much higher one-year overall survival rate than more aggressive cancers.
Net survival rate
When someone is interested in how survival is affected by the disease, there is also the net
survival rate, which filters out the effect of mortality from other causes than the disease. The
two main ways to calculate net survival are relative survival and cause-specific survival or
disease-specific survival.
Relative survival has the advantage that it does not depend on accuracy of the reported cause of
death; cause specific survival has the advantage that it does not depend on the ability to find a
similar population of people without the disease.
Relative survival
Relative survival is calculated by dividing the overall survival after diagnosis of a disease by the
survival as observed in a similar population that was not diagnosed with that disease. A similar
population is composed of individuals with at least age and gender similar to those diagnosed
with the disease.
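The ratio described above can be sketched in a few lines of Python. The survival proportions below are illustrative numbers, not data from any real registry:

```python
# Sketch: relative survival as the ratio of observed survival in the diagnosed
# cohort to expected survival in a similar (age- and sex-matched) population
# without the disease. The proportions used here are made up for illustration.

def relative_survival(observed_survival, expected_survival):
    """Observed survival of patients divided by expected survival of a
    comparable disease-free population."""
    return observed_survival / expected_survival

# Example: 72% of patients are alive 5 years after diagnosis, while 90% of a
# matched general population would be expected to survive the same period.
print(round(relative_survival(0.72, 0.90), 2))  # 0.8
```

A relative survival of 0.8 here means patients experienced 80% of the survival that a comparable disease-free population would have experienced.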
Cause-specific survival and disease-specific survival
Cause-specific survival is calculated by treating deaths from other causes than the disease as
withdrawals from the population that don't lower survival, comparable to patients who are not
observed any longer, e.g. due to reaching the end of the study period.
Median survival
Median survival is also commonly used in regards to survival rates, meaning the amount of time
at which 50% of the patients have died and 50% have survived.
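As a minimal sketch, median survival can be read off directly from a set of observed survival times, assuming complete follow-up (every patient's time of death is known, with no censoring). The survival times below are invented for illustration:

```python
import statistics

# Sketch: median survival is the time by which half of the patients have died.
# Assumes complete follow-up (no censoring); real analyses typically use
# Kaplan-Meier estimates instead. Times below are illustrative only.
survival_months = [3, 7, 9, 14, 20, 26, 31]
print(statistics.median(survival_months))  # 14
```

So for this hypothetical cohort, half of the patients had died by 14 months.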
Five-year survival
Five-year survival rate measures survival at 5 years after diagnosis.
The five-year survival rate is a type of survival rate for estimating the prognosis of a particular
disease, normally calculated from the point of diagnosis. Lead time bias due to earlier diagnosis
can affect interpretation of the five-year survival rate.
There are absolute and relative survival rates; the latter are more useful and commonly used.
Uses
Five-year survival rates can be used to compare the effectiveness of treatments. Use of 5-year
survival statistics is more useful in aggressive diseases that have a shorter life expectancy
following diagnosis (such as lung cancer) and less useful in cases with a long life expectancy
such as prostate cancer.
Improvements in rates are sometimes attributed to improvements in diagnosis, rather than
improvements in prognosis.
To compare treatments (independent of diagnostics) it may be better to consider survival from
reaching a certain stage of the disease or its treatment.
Analysis performed against the Surveillance, Epidemiology, and End Results database (SEER)
facilitates calculation of Five-year survival rates.
Disability
The Convention on the Rights of Persons with Disabilities (CRPD) has been ratified by more
than 140 countries. It reinforces the rights of people with disabilities.
Impairments may be present from birth, but the majority are acquired during a person's lifetime.
Definitions of disability vary from country to country.
Disability is an umbrella term covering impairments, activity limitations, and participation
restrictions. An impairment is a problem in body function or structure; an activity limitation is a
difficulty encountered by an individual in executing a task or action; and a participation
restriction is a problem experienced by an individual in involvement in life situations. Disability
is thus a complex phenomenon, reflecting an interaction between features of a person's body and
features of the society in which he or she lives.[1]
An individual may also qualify as disabled if they have had an impairment in the past or are seen
as disabled based on a personal or group standard or norm. Such impairments may include
physical, sensory, and cognitive or developmental disabilities. Mental disorders (also known as
psychiatric or psychosocial disability) and various types of chronic disease may also qualify as
disabilities.
Some advocates object to describing certain conditions (notably deafness and autism) as
"disabilities", arguing that it is more appropriate to consider them developmental differences that
have been unfairly stigmatized by society.[2][3] However, other advocates argue that disability is a
result of exclusion from mainstream society and not any inherent impairment.[4][5]
Types of disability
The term "disability" broadly describes an impairment in a person's ability to function, caused by
changes in various subsystems of the body, or to mental health. The degree of disability may
range from mild to moderate, severe, or profound.[6] A person may also have multiple disabilities.
Conditions causing disability are classified by the medical community as inherited, congenital,
acquired, or of unknown origin.[7]
Physical disability
Any impairment which limits the physical function of limbs or fine or gross motor ability is
a physical impairment, not necessarily a physical disability. The social model of disability
defines physical disability as manifest when an impairment meets a non-universal design or
program, e.g. a person who cannot climb stairs may have a physical impairment of the knees
when putting stress on them from an elevated position, such as when climbing or descending
stairs. If an elevator were provided, or a building had its services on the first floor, this
impairment would not become a disability. Other physical disabilities include impairments which
limit other facets of daily living, such as severe sleep apnea.
[Image: A man with an above-the-knee amputation exercises while wearing a prosthetic leg.]
Sensory disability
Sensory disability is impairment of one of the senses. The term is used primarily to refer to
vision and hearing impairment, but other senses can be impaired.
Vision impairment
Vision impairment (or "visual impairment") is vision loss to such a degree as to
qualify as an additional support need through a significant limitation of visual capability
resulting from either disease, trauma, or congenital or degenerative conditions that cannot be
corrected by conventional means, such as refractive correction, medication, or surgery.[8][9][10]
This functional loss of vision is typically defined to manifest with
1. best corrected visual acuity of less than 20/60, or a significant central field
defect;
2. a significant peripheral field defect, including a homonymous or heteronymous
bilateral visual field defect or generalized contraction or constriction of the
field; or
3. reduced peak contrast sensitivity in combination with either of the above
conditions.[8][11]
Hearing impairment
Hearing impairment, being hard of hearing, or deafness refers to conditions in which individuals are
fully or partially unable to detect or perceive at least some frequencies of sound which can
typically be heard by most people. Mild hearing loss may sometimes not be considered a
disability.
Complete loss of the sense of taste is known as ageusia, while dysgeusia is a persistent abnormal
sense of taste.
Somatosensory impairment
Insensitivity to stimuli such as touch, heat, cold, and pain are often an adjunct to a more general
physical impairment involving neural pathways and is very commonly associated with paralysis
(in which the motor neural circuits are also affected).
Balance disorder
A balance disorder is a disturbance that causes an individual to feel unsteady, for example when
standing or walking. It may be accompanied by symptoms of being giddy, woozy, or have a
sensation of movement, spinning, or floating. Balance is the result of several body systems
working together. The eyes (visual system), ears (vestibular system) and the body's sense of
where it is in space (proprioception) need to be intact. The brain, which compiles this
information, needs to be functioning effectively.
Intellectual disability
Intellectual disability is a broad concept that ranges from mental retardation to cognitive deficits
too mild or too specific (as in specific learning disability) to qualify as mental retardation.
Intellectual disabilities may appear at any age. Mental retardation is a subtype of intellectual
disability, and the term intellectual disability is now preferred by many advocates in most
English-speaking countries.
Disability-adjusted life years out of 100,000 lost due to any cause in 2004.
The disability-adjusted life year (DALY) is a measure of overall disease burden, expressed as
the number of years lost due to ill-health, disability or early death.
Originally developed by Harvard University for the World Bank in 1990, the method was
subsequently adopted by the World Health Organization in 1996 as part of the Ad hoc Committee
on Health Research's "Investing in Health Research & Development" report. The DALY is becoming
increasingly common in the field of public health and health impact assessment (HIA). It
"extends the concept of potential years of life lost due to premature death...to include equivalent
years of 'healthy' life lost by virtue of being in states of poor health or disability." In so doing,
mortality and morbidity are combined into a single, common metric.
Traditionally, health liabilities were expressed using one measure: (expected or average number
of) 'Years of Life Lost' (YLL). This measure does not take the impact of disability into account,
which can be expressed by: 'Years Lived with Disability' (YLD). DALYs are calculated by taking
the sum of these two components. In a formula:
DALY = YLL + YLD.
The DALY relies on an acceptance that the most appropriate measure of the effects of chronic
illness is time, both time lost due to premature death and time spent disabled by disease. One
DALY, therefore, is equal to one year of healthy life lost. Japanese life expectancy statistics are
used as the standard for measuring premature death, as the Japanese have the longest life
expectancies.
Looking at the burden of disease via DALYs can reveal surprising things about a population's
health. For example, the 1990 WHO report indicated that 5 of the 10 leading causes of
disability were psychiatric conditions. Psychiatric and neurologic conditions account for 28% of
all years lived with disability, but only 1.4% of all deaths and 1.1% of years of life lost. Thus,
psychiatric disorders, while traditionally not regarded as a major epidemiological problem, are
shown by consideration of disability years to have a huge impact on populations.
Social weighting
The disability-adjusted life year is a type of health adjusted life year (HALY) that attempts to
quantify the burden of disease or disability in populations. They are similar to quality-adjusted
life year (QALY) measures, but rather than attach health-related quality of life (HRQL)
estimates to health states, DALYs assign HRQLs to specific diseases and disabilities. The
methodology was originally developed by the World Bank, but has since been greatly modified
and is not an economic measure. However, unique among disease measures, HALYs, including
DALYs and QALYs, are especially useful in guiding the allocation of health resources as they
provide a common denominator, allowing for the expression of utility in terms of DALYs/dollar,
or QALY/dollar.[5] For example, in Gambia, provision of the pneumococcal conjugate
vaccination costs $670 per DALY saved.[6]
Some studies use DALYs calculated to place greater value on a year lived as a young adult. This
formula produces average values around age 10 and age 55, a peak around age 25, and lowest
values among very young children and very old people.[7]
A crucial distinction among DALY studies is the use of "social weighting", in which the value of
each year of life depends on age. There are two components to this differential accounting of
time: age weighting and time discounting. Age weighting is based on the theory of human capital.
Commonly, years lived as a young adult are valued more highly than years spent as a young
child or older adult, as these are years of peak productivity. Age weighting draws considerable
criticism from those who object to valuing young adults at the expense of children and the old.
Some criticize, while others rationalize, this as reflecting society's interest in productivity and
receiving a return on its investment in raising children. This age-weighting system means that
somebody disabled at 30 years of age, for ten years, would be measured as having a higher loss
of DALYs (a greater burden of disease) than somebody disabled by the same disease or injury at
the age of sixty for 15 years. This age-weighting function is by no means a universal methodology
in HALY studies, but is common when using DALYs. Cost-effectiveness studies using QALYs, for
example, do not discount time at different ages differently.[5] It is important to note that this
age-weighting function applies to the calculation of DALYs lost due to disability. Years lost to
premature death are determined by the age at death and life expectancy.
The global burden of disease (GBD) 2001–2002 study counted disability-adjusted life years
equally for all ages, but the GBD 1990 and GBD 2004 studies used the age-weighting formula[8][9]

W(x) = 0.1658 · x · e^(−0.04x)

where x is the age at which the year is lived and W(x) is the value assigned to it relative to an
average value of 1. This age-weighting function is not the same as the disability weight (DW),
which is determined by disease or disability and does not vary with age.
Tables have been created for thousands of diseases and disabilities, ranging from Alzheimer's
disease to loss of a finger, with the disability weight meant to indicate the level of disability
that results from the specific condition.
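The GBD age-weighting function W(x) = 0.1658 · x · e^(−0.04x) can be checked numerically. As a small sketch (the ages chosen are only for illustration):

```python
import math

# Sketch of the GBD age-weighting function W(x) = 0.1658 * x * e^(-0.04 x),
# which values a year of life differently depending on the age x at which it
# is lived, relative to an average value of about 1.

def age_weight(x):
    return 0.1658 * x * math.exp(-0.04 * x)

# Values are near 1 around ages 10 and 55, with a peak around age 25,
# consistent with the shape described in the text.
for age in (10, 25, 55):
    print(age, round(age_weight(age), 2))
```

Setting the derivative of W(x) to zero gives the peak at x = 1/0.04 = 25, matching the observation that years lived as a young adult receive the highest weight.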
At the population level, the burden of disease as measured by DALYs is calculated by DALY =
YLL + YLD, where YLL is years of life lost and YLD is years lived with disability. In turn,
population YLD is determined by the number of years disabled, weighted by the level of disability
caused by a disability or disease, using the formula YLD = I x DW x L. In this formula I =
number of incident cases in the population, DW = disability weight of the specific condition, and
L = average duration of the case until remission or death (years). There is also a prevalence (as
opposed to incidence) based calculation for YLD. Premature death is calculated by YLL = N x L,
where N = number of deaths due to the condition and L = standard life expectancy at age of death
(life expectancy minus age at death).[10]
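The population-level calculation above can be sketched directly. All of the disease parameters below are made-up illustrative numbers, not actual GBD disability weights:

```python
# Sketch of the population-level DALY calculation: DALY = YLL + YLD,
# with YLD = I x DW x L and YLL = N x L. Parameter values are hypothetical.

def yld(incident_cases, disability_weight, avg_duration_years):
    # Years Lived with Disability: YLD = I x DW x L
    return incident_cases * disability_weight * avg_duration_years

def yll(deaths, std_life_expectancy_at_death):
    # Years of Life Lost: YLL = N x L
    return deaths * std_life_expectancy_at_death

def daly(yll_value, yld_value):
    # DALY = YLL + YLD
    return yll_value + yld_value

# Hypothetical condition: 1,000 new cases (DW = 0.2, lasting 5 years on
# average) plus 50 deaths at ages with 30 years of remaining life expectancy.
total = daly(yll(50, 30), yld(1000, 0.2, 5))
print(total)  # 2500.0
```

This simple version omits the age weighting and time discounting discussed above, which in the GBD 1990 and 2004 studies would modify each year's contribution.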
In these studies future years were also discounted at a 3% rate to account for future health care
losses. Time discounting, which is distinct from the age weight function, describes preferences in
time as used in economic models.[11]
The effects of the interplay between life expectancy and years lost, discounting, and social
weighting are complex, depending on the severity and duration of illness. For example, the
parameters used in the GBD 1990 study generally give greater weight to deaths at any year prior
to age 39 than afterward, with the death of a newborn weighted at 33 DALYs and the death of
someone aged 5 to 20 weighted at approximately 36 DALYs.
The Human Development Index (HDI) is a composite statistic of life expectancy, education,
and income indices used to rank countries into four tiers of human development. It was created
by Pakistani economist Mahbub ul Haq and Indian economist Amartya Sen in 1990, and was
published by the United Nations Development Programme.
The 2010 Human Development Report introduced an Inequality-adjusted Human Development
Index (IHDI). While the simple HDI remains useful, the report stated that "the IHDI is the actual level of
human development (accounting for inequality)" and "the HDI can be viewed as an index of
'potential' human development (or the maximum IHDI that could be achieved if there were no
inequality)".
Years of potential life lost (YPLL) or potential years of life lost (PYLL), is an estimate of the
average years a person would have lived if he or she had not died prematurely. It is, therefore, a
measure of premature mortality. As a method, it is an alternative to death rates that gives more
weight to deaths that occur among younger people. Another alternative is to consider the effects
of both disability and premature death using disability adjusted life years.
Calculation
To calculate the years of potential life lost, the analyst has to set an upper reference age. The
reference age should correspond roughly to the life expectancy of the population under study. In
the developed world, this is commonly set at age 75, but it is essentially arbitrary. Thus, PYLL
should be written with respect to the reference age used in the calculation: e.g., PYLL [75].
PYLL can be calculated using individual level data or using age grouped data.
Briefly, for the individual method, each person's PYLL is calculated by subtracting the person's
age at death from the reference age. If a person is older than the reference age when he or she
dies, that person's PYLL is set to zero (i.e., there are no "negative" PYLLs). In effect, only those
who die before the reference age are included in the calculation. Some examples:
1. Reference age = 75; Age at death = 60; PYLL[75] = 75 - 60 = 15
2. Reference age = 75; Age at death = 6 months; PYLL[75] = 75 - 0.5 = 74.5
3. Reference age = 75; Age at death = 80; PYLL[75] = 0 (age at death greater than reference
age)
To calculate the PYLL for a particular population in a particular year, the analyst sums the
individual PYLLs for all individuals in that population who died in that year. This can be done
for all-cause mortality or for cause-specific mortality.
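The individual-level method above can be sketched in a few lines; the ages at death used below are simply the three worked examples from the text, combined into one small population:

```python
# Sketch of the individual-level PYLL calculation: each death before the
# reference age contributes (reference_age - age_at_death) years, and deaths
# at or beyond the reference age contribute zero (no "negative" PYLLs).

def pyll(ages_at_death, reference_age=75):
    return sum(max(reference_age - age, 0) for age in ages_at_death)

# The three worked examples: deaths at 60, at 6 months, and at 80.
print(pyll([60, 0.5, 80]))  # 15 + 74.5 + 0 = 89.5
```

Summing over everyone who died in a given year yields the population PYLL, whether computed for all-cause or cause-specific mortality.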
Significance
In the developed world, mortality counts and rates tend to emphasize the most common causes of
death in older people, because the risk of death increases with age. Because PYLL gives more
weight to deaths among younger individuals, it is the favoured metric among those who wish to
draw attention to those causes of death that are more common in younger people. Some
researchers say that this measurement should be considered by governments when they decide
how best to divide up scarce resources for research.
For example, in most of the developed world, heart disease and cancer are the leading causes of
death, as measured by the number (or rate) of deaths. For this reason, heart disease and cancer
tend to get a lot of attention (and research funding). However, one might argue that everyone has
to die of something eventually, and so public health efforts should be more explicitly directed at
preventing premature death. When PYLL is used as an explicit measure of premature death,
injuries and infectious diseases become more important. While the most common cause of death
among people aged 5 to 40 in the developed world is injury and poisoning, relatively few young
people die, so the principal causes of lost years remain cardiovascular disease and cancer.
Person-years of potential life lost in the United States in 2006

Cause of premature death        Person-years lost
Cancer                          8,628,000
Heart disease and strokes       8,760,000
Accidents and other injuries    5,873,000
All other causes                13,649,000
Epidemiological transition
Diagram showing sharp birth rate and death rate decreases between Time 1 and
Time 4, the congruent increase in population caused by delayed birth rate
decreases, and the subsequent re-leveling of population growth by Time 5.
Theory
Omran divided the epidemiological transition of mortality into three phases, in the last of which
chronic diseases replace infection as the primary cause of death.[4] These phases are:
1. The Age of Pestilence and Famine: Where mortality is high and fluctuating,
precluding sustained population growth, with low and variable life
expectancy, vacillating between 20 and 40 years.
2. The Age of Receding Pandemics: Where mortality progressively declines, with
the rate of decline accelerating as epidemic peaks decrease in frequency.
Average life expectancy increases steadily from about 30 to 50 years.
Population growth is sustained and begins to be exponential.
3. The Age of Degenerative and Man-Made Diseases: Mortality continues to
decline and eventually approaches stability at a relatively low level.
The epidemiological transition occurs as a country undergoes the process of modernization from
developing nation to developed nation status. The developments of modern healthcare, and
medicine like antibiotics, drastically reduces infant mortality rates and extends average life
expectancy which, coupled with subsequent declines in fertility rates, reflects a transition to
chronic and degenerative diseases as more important causes of death.
History
In general human history, Omran's first phase occurs when human population sustains cyclic,
low-growth, and mostly linear, up-and-down patterns associated with wars, famine, epidemic
outbreaks, as well as small golden ages and localized periods of "prosperity". In early
pre-agricultural history, infant mortality rates were high and average life expectancy low. Today,
life expectancy in third-world countries remains relatively low, as in many Sub-Saharan African
nations, where it typically does not exceed 60 years of age.[5]
The second phase involves advancements in medicine and the development of a healthcare system.
One treatment breakthrough of note was the discovery of penicillin in the mid 20th century
which led to widespread and dramatic declines in death rates from previously serious diseases
such as syphilis. Population growth rates surged in the 1950s, 1960s and 1970s, to 1.8% per year
and higher, with the world gaining 2 billion people between 1950 and the 1980s alone.
Omran's third phase occurs when human birth rates drastically decline from highly positive
replacement numbers to stable replacement rates. In several European nations replacement rates
have even become negative.[6] As this transition generally represents the net effect of individual
choices on family size (and the ability to implement those choices), it is more complicated.
Omran gives three possible factors tending to encourage reduced fertility rates:[3]
1. Biophysiologic factors, associated with reduced infant mortality and the
expectation of longer life in parents,
2. Socioeconomic factors, associated with childhood survival and the economic
perceptions of large family size, and
3. Psychologic or emotional factors, where society as a whole changes its
rationale and opinion on family size and parental energies are redirected to
qualitative aspects of child-raising.
This transition may also be associated with the sociological adaptations associated with
demographic movements to urban areas, and a shift from agriculture and labor based production
output to technological and service-sector-based economies.
Regardless, chronic and degenerative diseases, and accidents and injuries, became more
important causes of death. This shift in demographic and disease profiles is currently under way
in most developing nations; however, every country is unique, with its transition speed depending
on a myriad of geographical and socio-political factors.
Controversy
Many question whether an epidemiological transition really took place during the twentieth
century. The transition during this time describes the replacement of infectious diseases by
chronic diseases, a replacement attributed to multiple factors such as antibiotics and improved
public sanitation. Even though these factors undeniably affected society, for example by
increasing lifespan, many believe that the apparent rise of chronic disease may be an illusion. It
is debated whether there was an actual increase in chronic diseases, or whether new techniques
for diagnosing and managing diseases that had previously gone undiagnosed and untreated
merely gave the appearance of an emergence of new chronic illnesses. Multiple factors made
chronic diseases more visible to health care professionals, such as the increased use of hospitals
as treatment centers and improved statistical evaluation. This led to the question, "Was an
epidemiological transition really taking place in the twentieth century?"
Nutrition transition
Nutrition transition is the shift in dietary consumption and energy expenditure that coincides
with economic, demographic, and epidemiological changes. Specifically the term is used for the
recent transition of developing countries from traditional diets high in cereal and fiber to more
Western pattern diets high in sugars, fat, and animal-source food.
Demographic transition (DT) refers to the transition from high birth and death rates to low
birth and death rates as a country develops from a pre-industrial to an industrialized economic
system. This is typically demonstrated through a demographic transition model (DTM). The
theory is based on an interpretation of demographic history developed in 1929 by the American
demographer Warren Thompson (1887–1973).[1] Thompson observed changes, or transitions, in
birth and death rates in industrialized societies over the previous 200 years. Most developed
countries are in stage 3 or 4 of the model; the majority of developing countries have reached
stage 2 or stage 3. The major (relative) exceptions are some poor countries, mainly in
sub-Saharan Africa, and some Middle Eastern countries that are poor or affected by government
policy or civil strife, notably Pakistan, Palestinian Territories, Yemen and Afghanistan.[2]
Although this model predicts ever decreasing fertility rates, recent data show that beyond a
certain level of development fertility rates increase again.[3]
A correlation matching the demographic transition has been established; however, it is not
certain whether industrialization and higher incomes lead to lower population or if lower
populations lead to industrialization and higher incomes.[4] In countries that are now developed
this demographic transition began in the 18th century and continues today. In less developed
countries, this demographic transition started later and is still at an earlier stage.[5]
In stage one, pre-industrial society, death rates and birth rates are high and roughly in
balance. All human populations are believed to have had this balance until the late 18th
century, when this balance ended in Western Europe.[6] In fact, growth rates were less
than 0.05% at least since the Agricultural Revolution over 10,000 years ago.[6] Birth and
death rates both tend to be very high in this stage.[6] Because both rates are approximately
in balance, population growth is typically very slow in stage one.[6]
In stage two, that of a developing country, the death rates drop rapidly due to
improvements in food supply and sanitation, which increase life spans and reduce
disease. The improvements specific to food supply typically include selective breeding
and crop rotation and farming techniques.[6] Other improvements generally include access
to technology, basic healthcare, and education. For example, numerous improvements in
public health reduce mortality, especially childhood mortality.[6] Prior to the mid-20th
century, these improvements in public health were primarily in the areas of food
handling, water supply, sewage, and personal hygiene.[6] One of the variables often cited
is the increase in female literacy combined with public health education programs which
emerged in the late 19th and early 20th centuries.[6] In Europe, the death rate decline
started in the late 18th century in northwestern Europe and spread to the south and east
over approximately the next 100 years.[6] Without a corresponding fall in birth rates this
produces an imbalance, and the countries in this stage experience a large increase in
population.
In stage three, birth rates fall due to access to contraception, increases in wages,
urbanization, a reduction in subsistence agriculture, an increase in the status and
education of women, a reduction in the value of children's work, an increase in parental
investment in the education of children and other social changes. Population growth
begins to level off. The birth rate decline in developed countries started in the late 19th
century in northern Europe.[6] While improvements in contraception do play a role in birth
rate decline, contraceptives were not generally available nor widely used in the 19th century and
as a result likely did not play a significant role in the decline then.[6] Birth rate decline is
also caused by a transition in values, not just by the availability of contraceptives.[6]
During stage four there are both low birth rates and low death rates. Birth rates may drop
to well below replacement level as has happened in countries like Germany, Italy, and
Japan, leading to a shrinking population, a threat to many industries that rely on
population growth. As the large group born during stage two ages, it creates an economic
burden on the shrinking working population. Death rates may remain consistently low or
increase slightly due to increases in lifestyle diseases due to low exercise levels and high
obesity and an aging population in developed countries. By the late 20th century, birth
rates and death rates in developed countries leveled off at lower rates.[5]
As with all models, this is an idealized picture of population change in these countries. The
model is a generalization that applies to these countries as a group and may not accurately
describe all individual cases. The extent to which it applies to less-developed societies today
remains to be seen. Many countries such as China, Brazil and Thailand have passed through the
Demographic Transition Model (DTM) very quickly due to fast social and economic change.
Some countries, particularly African countries, appear to be stalled in the second stage due to
stagnant development and the effect of AIDS.
Stage One
In pre-industrial society, death rates and birth rates were both high and fluctuated rapidly
according to natural events, such as drought and disease, to produce a relatively constant and
young population. Family planning and contraception were virtually nonexistent; therefore, birth
rates were essentially only limited by the ability of women to bear children. Emigration
depressed death rates in some special cases (for example, Europe and particularly the Eastern
United States during the 19th century), but, overall, death rates tended to match birth rates, often
exceeding 40 per 1000 per year. Children contributed to the economy of the household from an
early age by carrying water, firewood, and messages, caring for younger siblings, sweeping,
washing dishes, preparing food, and working in the fields.[7] Raising a child cost little more than
feeding him or her; there were no education or entertainment expenses. Thus, the total cost of
raising children barely exceeded their contribution to the household. In addition, as they became
adults they became a major input to the family business, mainly farming, and were the primary
form of insurance for adults in old age. In India, an adult son was all that prevented a widow
from falling into destitution. While death rates remained high there was no question as to the
need for children, even if the means to prevent them had existed.[8]
During this stage, the society evolves in accordance with the Malthusian paradigm, with population
essentially determined by the food supply. Any fluctuations in food supply (either positive, for
example, due to technology improvements, or negative, due to droughts and pest invasions) tend
to translate directly into population fluctuations. Famines resulting in significant mortality are
frequent. Overall, the population dynamics during stage one are highly reminiscent of those
commonly observed in animals.
Stage Two
First, improvements in the food supply brought about by higher yields in agricultural
practices and better transportation prevent death due to starvation and lack of water.
Agricultural improvements included crop rotation, selective breeding, and seed drill
technology.
Second, significant improvements in public health reduce mortality, particularly in
childhood. These are not so much medical breakthroughs (Europe passed through stage
two before the advances of the mid-20th century, although there was significant medical
progress in the 19th century, such as the development of vaccination) as they are
improvements in water supply, sewerage, food handling, and general personal hygiene
following from growing scientific knowledge of the causes of disease and the improved
education and social status of mothers.
A consequence of the decline in mortality in Stage Two is an increasingly rapid rise in population
growth (a "population explosion") as the gap between deaths and births grows wider. Note that
this growth is not due to an increase in fertility (or birth rates) but to a decline in deaths. This
change in population occurred in north-western Europe during the 19th century due to the
Industrial Revolution. During the second half of the 20th century less-developed countries
entered Stage Two, creating the worldwide population explosion that has demographers
concerned today. In this stage of the demographic transition, countries are vulnerable to
becoming failed states in the absence of progressive governments.
Stage Three
Stage Three moves the population towards stability through a decline in the birth rate.[11] Several
factors contribute to this eventual decline, although some of them remain speculative:
In rural areas, continued decline in childhood death means that at some point parents
realize they no longer need so many children to be born to ensure a comfortable old age.
As childhood death continues to fall and incomes increase parents can become
increasingly confident that fewer children will suffice to help in family business and care
for them in old age.
Increasing urbanization changes the traditional values placed upon fertility and the value
of children in rural society. Urban living also raises the cost of dependent children to a
family. A recent theory suggests that urbanization also contributes to reducing the birth
rate because it disrupts optimal mating patterns. A 2008 study in Iceland found that the
most fecund marriages are between distant cousins. Genetic incompatibilities inherent in
more distant outbreeding make reproduction harder.[12]
In both rural and urban areas, the cost of children to parents is exacerbated by the
introduction of compulsory education acts and the increased need to educate children so
they can take up a respected position in society. Children are increasingly prohibited
under law from working outside the household and make an increasingly limited
contribution to the household, as school children are increasingly exempted from the
expectation of making a significant contribution to domestic work. Even in equatorial
Africa, children now need to be clothed, and may even require school uniforms. Parents
begin to consider it a duty to buy children books and toys. Partly due to education and
access to family planning, people begin to reassess their need for children and their
ability to raise them.[8]
A major factor in reducing birth rates in stage 3 countries such as Malaysia is the availability of
family planning facilities.
Increasing female literacy and employment lowers the uncritical acceptance of
childbearing and motherhood as measures of the status of women. Working women have
less time to raise children; this is particularly an issue where fathers traditionally make
little or no contribution to child-raising, such as southern Europe or Japan. Valuation of
women beyond childbearing and motherhood becomes important.
Improvements in contraceptive technology are now a major factor. Fertility decline is
caused as much by changes in values about children and sex as by the availability of
contraceptives and knowledge of how to use them.
The resulting changes in the age structure of the population include a reduction in the youth
dependency ratio and eventually population aging. The population structure becomes less
triangular and more like an elongated balloon. During the period between the decline in youth
dependency and rise in old age dependency there is a demographic window of opportunity that
can potentially produce economic growth through an increase in the ratio of working age to
dependent population; the demographic dividend.
However, unless factors such as those listed above are allowed to work, a society's birth rates
may not drop to a low level in due time, which means that the society cannot proceed to Stage
Four and is locked in what is called a demographic trap.
Countries that have experienced a fertility decline of over 40% from their pre-transition levels
include: Costa Rica, El Salvador, Panama, Jamaica, Mexico, Colombia, Ecuador, Guyana,
Philippines, Indonesia, Malaysia, Sri Lanka, Turkey, Azerbaijan, Turkmenistan, Uzbekistan,
Egypt, Tunisia, Algeria, Morocco, Lebanon, South Africa, India, Saudi Arabia, and many Pacific
islands.
Countries that have experienced a fertility decline of 25-40% include: Honduras, Guatemala,
Nicaragua, Paraguay, Bolivia, Vietnam, Myanmar, Bangladesh, Tajikistan, Jordan, Qatar,
Albania, United Arab Emirates, Zimbabwe, and Botswana.
Countries that have experienced a fertility decline of 10-25% include: Haiti, Papua New Guinea,
Nepal, Pakistan, Syria, Iraq, Libya, Sudan, Kenya, Ghana and Senegal.[10]
Stage Four
This occurs where birth and death rates are both low, leading to a total population which is high
and stable. Death rates are low for a number of reasons, primarily lower rates of diseases and
higher production of food. The birth rate is low because people have more opportunities to
choose if they want children; this is made possible by improvements in contraception or women
gaining more independence and work opportunities.[13] Some theorists[who?] consider that there are
only four stages and that the population of a country will remain at this level. The DTM is only a
suggestion about the future population levels of a country, not a prediction.
Countries that are at this stage (Total Fertility Rate of less than 2.5 in 1997) include: United
States, Canada, Argentina, Australia, New Zealand, most of Europe, Bahamas, Puerto Rico,
Trinidad and Tobago, Brazil, Sri Lanka, South Korea, Singapore, Iran, China, Turkey, Thailand
and Mauritius.[10]
In the current century, most developed countries have increased fertility. From the point of view
of evolutionary biology, richer people having fewer children is unexpected, as natural selection
would be expected to favor individuals who are willing and able to convert plentiful resources
into plentiful fertile descendants.
Use
The QALY is often used in cost-utility analysis to calculate the ratio of cost to QALYs saved for
a particular health care intervention. This is then used to allocate healthcare resources, with an
intervention with a lower cost to QALY saved (incremental cost effectiveness) ratio ("ICER")
being preferred over an intervention with a higher ratio.[citation needed]
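The cost-utility comparison described above can be sketched in a few lines. All of the figures below are illustrative, not drawn from any real evaluation:

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical: a new drug costs $50,000 and yields 4.5 QALYs;
# standard care costs $20,000 and yields 3.0 QALYs.
ratio = icer(50_000, 4.5, 20_000, 3.0)
print(ratio)  # 20000.0 dollars per QALY gained
```

Under this logic the intervention with the lower ratio would be preferred when allocating resources.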
Calculation
The QALY is a measure of the value of health outcomes. Since health is a function of length of
life and quality of life, the QALY was developed as an attempt to combine the value of these
attributes into a single index number. The basic idea underlying the QALY is simple: it assumes
that a year of life lived in perfect health is worth 1 QALY (1 Year of Life × 1 Utility value = 1
QALY) and that a year of life lived in a state of less than this perfect health is worth less than 1.
In order to determine the exact QALY value, it is sufficient to multiply the utility value
associated with a given state of health by the years lived in that state. QALYs are therefore
expressed in terms of "years lived in perfect health": half a year lived in perfect health is
equivalent to 0.5 QALYs (0.5 years × 1 Utility), the same as 1 year of life lived in a situation
with utility 0.5 (e.g. bedridden) (1 year × 0.5 Utility). QALYs can then be incorporated with
medical costs to arrive at a final common denominator of cost/QALY. This parameter can be
used to develop a cost-effectiveness analysis of any treatment.
Meaning
The concept of the QALY is credited to work by Klarman[4] and later Fanshel and Bush[5] and
Torrance [6] who suggested the idea of length of life adjusted by indices of functionality or health.
[7]
It was officially named the QALY in print in an article by Zeckhauser and Shepard.[8] It was
later promoted through medical technology assessment conducted by the US Congress Office of
Technology Assessment.
Then, in 1980, Pliskin proposed a justification of the construction of the QALY indicator using
the multiattribute utility theory: if a set of conditions pertaining to an agent's preferences on life
years and quality of life is verified, then it is possible to express the agent's preferences about
pairs (number of life years, health state) by an interval (Neumannian) utility function. This utility
function would be equal to the product of an interval utility function on life years and an
interval utility function on health state. Because of these theoretical assumptions, the
meaning and usefulness of the QALY is debated.[9][10][11] Perfect health is hard, if not impossible,
to define. Some argue that there are health states worse than being dead, and that therefore there
should be negative values possible on the health spectrum (indeed, some health economists have
incorporated negative values into calculations). Determining the level of health depends on
measures that some argue place disproportionate importance on physical pain or disability over
mental health.[12] The effects of a patient's health on the quality of life of others (e.g. caregivers or
family) do not figure into these calculations.
Sullivan's Index
Sullivan's index is a method to compute life expectancy free of disability.[1] Health expectancy
calculated by Sullivan's method is the number of remaining years, at a particular age, that an
individual can expect to live in a healthy state.[2] It is computed by subtracting the probable
duration of bed disability and inability to perform major activities from the life expectancy. The
data for calculation are obtained from population surveys and period life tables. Sullivan's
index collects mortality and disability data separately, and these data are often readily
available. The Sullivan health expectancy reflects the current health of a real population adjusted
for mortality levels and independent of age structure.[3]
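In its simplest form, the subtraction described above can be written out directly. The figures here are purely illustrative, not real survey estimates:

```python
def sullivan_health_expectancy(life_expectancy, bed_disability_years,
                               activity_limitation_years):
    """Sullivan's method: remaining years expected to be lived in a healthy state,
    obtained by subtracting probable years of disability from life expectancy."""
    return life_expectancy - bed_disability_years - activity_limitation_years

# Hypothetical: life expectancy 78.0 years, 3.5 years of bed disability,
# 4.5 years of inability to perform major activities.
print(sullivan_health_expectancy(78.0, 3.5, 4.5))  # 70.0
```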
Potential Years of Life Lost (PYLL)
This indicator gives more importance to causes of death that occurred at younger
ages than to those that occurred at older ages.
The upper age limit of 75 is used to approximate the life expectancy of Canadians for
both sexes combined. For example, an individual in good health is expected to live up
to age 75 in Canada.
Deaths occurring in individuals age 75 or older are NOT included in the calculation.
Infant deaths, deaths among infants under 1 year of age, are included in the calculation
due to their very small numbers. Other methods exclude these deaths since they are
often due to causes that have different etiology from deaths at later ages.
PYLL can be calculated in two ways. The Core Indicators for Public Health in Ontario uses
Method A.
Method A (Individual):
The PYLL due to death is calculated for each person who died before age 75. For example, a
person who died at age 20 would contribute 55 potential years of life lost. Deaths occurring in
individuals age 75 or older are NOT included in the calculation. Potential years of life lost
correspond to the sum of the PYLL contributed for each individual. The rate is obtained by
dividing total potential years of life lost by the total population less than 75 years of age.
Method of Calculation:

Age at Death (Individual)    PYLL (75 - Age at Death)
6 months                     75 - 0.5 = 74.5
55                           75 - 55  = 20
15                           75 - 15  = 60
85 *                         excluded
60                           75 - 60  = 15
SUM of PYLL                  169.5

Note: * refers to deaths that DO NOT contribute to PYLL as deaths occurred to individuals 75
years of age or older.
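Method A can be sketched in a few lines; the ages below are the worked example from the table:

```python
REFERENCE_AGE = 75

def pyll_individual(ages_at_death, reference_age=REFERENCE_AGE):
    """Method A: sum (reference_age - age) over all deaths before the reference age.
    Deaths at or above the reference age contribute nothing."""
    return sum(reference_age - age for age in ages_at_death
               if age < reference_age)

# Ages at death: 6 months (0.5), 55, 15, 85 (excluded), 60.
total = pyll_individual([0.5, 55, 15, 85, 60])
print(total)  # 169.5
```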
Method B (Age Group):
The PYLL due to death is calculated for each age group (&lt;1, 1-4, 5-9, ..., 70-74) by
multiplying the number of deaths by the difference between age 75 and the mean age at death
in each age group. Potential years of life lost correspond to the sum of the products obtained
for each age group. The rate is obtained by dividing total potential years of life lost by the
total population under 75 years old.
Method of Calculation:

Age      # of Deaths (1)   Mean Age at Death (2)   75 - Mean Age at Death (3)   PYLL (1) x (3)
<1       4                 0.5                     74.5                         298.0
1-4      28                3.0                     72.0                         2,016.0
5-9      52                7.5                     67.5                         3,510.0
10-14    64                12.5                    62.5                         4,000.0
15-19    315               17.5                    57.5                         18,112.5
20-24    410               22.5                    52.5                         21,525.0
25-29    308               27.5                    47.5                         14,630.0
30-34    243               32.5                    42.5                         10,327.5
35-39    171               37.5                    37.5                         6,412.5
40-44    131               42.5                    32.5                         4,257.5
45-49    116               47.5                    27.5                         3,190.0
50-54    85                52.5                    22.5                         1,912.5
55-59    85                57.5                    17.5                         1,487.5
60-64    86                62.5                    12.5                         1,075.0
65-69    64                67.5                    7.5                          480.0
70-74    70                72.5                    2.5                          175.0
SUM of PYLL                                                                     93,409.0
(1) Calculate the mean age for each age group (column 2) and subtract from the selected age,
75 (column 3)
(2) Calculate the potential years of life lost for each age group by multiplying the number of
deaths (column 1) by the remaining years of life lost (column 3)
(3) Calculate the PYLL rate by dividing the sum of the potential years of life lost by age
group (93,409) by the total population for the ages selected (12,975,615).
Rate per 1,000 persons
= Total PYLL divided by Population under age 75
= 93,409.0/12,975,615
= 7.2 per 1,000
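Method B and the rate calculation can be reproduced directly from the worked table above (deaths and mean age at death per age group):

```python
REFERENCE_AGE = 75
POPULATION_UNDER_75 = 12_975_615

# (deaths, mean age at death) for each age group, from the table above.
age_groups = [
    (4, 0.5), (28, 3.0), (52, 7.5), (64, 12.5), (315, 17.5), (410, 22.5),
    (308, 27.5), (243, 32.5), (171, 37.5), (131, 42.5), (116, 47.5),
    (85, 52.5), (85, 57.5), (86, 62.5), (64, 67.5), (70, 72.5),
]

# Method B: deaths x (75 - mean age at death), summed over age groups.
total_pyll = sum(deaths * (REFERENCE_AGE - mean_age)
                 for deaths, mean_age in age_groups)
rate_per_1000 = total_pyll / POPULATION_UNDER_75 * 1_000

print(total_pyll)               # 93409.0
print(round(rate_per_1000, 1))  # 7.2
```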
Time trend
In a time-trend analysis, comparisons are made between groups to help draw conclusions about
the effect of an exposure on different populations. Observations are recorded for each group at
equal time intervals, for example monthly. Examples of measurements include prevalence of
disease, levels of pollution, or mean temperature in a region.
Uses of time-trend analysis
Trends in factors such as rates of disease and death, as well as behaviours such as smoking are
often used by public health professionals to assist in healthcare needs assessments, service
planning, and policy development. Examining data over time also makes it possible to predict
future frequencies and rates of occurrence.
Studies of time trends may focus on rates of disease or death, or on health-related
behaviours such as smoking. Such studies usually rely on routine data sources, which may
have been collected for other purposes
Advantages
Conceptually, a meta-analysis uses a statistical approach to combine the results from multiple
studies in an effort to increase power (over individual studies), improve estimates of the size of
the effect and/or to resolve uncertainty when reports disagree. Basically, it produces a weighted
average of the included study results and this approach has several advantages:
Inconsistency of results across studies can be quantified and analyzed. For instance, does
inconsistency arise from sampling error, or are study results (partially) influenced by
between-study heterogeneity?
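The "weighted average of the included study results" mentioned above is, in the simplest fixed-effect case, an inverse-variance weighted mean. A minimal sketch with three hypothetical studies:

```python
def fixed_effect_pool(effects, variances):
    """Inverse-variance weighted average: the basic fixed-effect pooled estimate.
    Larger studies (smaller variance) receive more weight."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Three hypothetical studies: effect sizes and their sampling variances.
effect, var = fixed_effect_pool([0.30, 0.10, 0.20], [0.01, 0.04, 0.02])
print(round(effect, 3))  # 0.243
```

Note the pooled estimate sits closest to the most precise study (0.30, variance 0.01), which carries the largest weight.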
Pitfalls
A meta-analysis of several small studies does not predict the results of a single large study.[11]
Some have argued that a weakness of the method is that sources of bias are not controlled by the
method: a good meta-analysis of badly designed studies will still result in bad statistics.[12] This
would mean that only methodologically sound studies should be included in a meta-analysis, a
practice called 'best evidence synthesis'.[12] Other meta-analysts would include weaker studies,
and add a study-level predictor variable that reflects the methodological quality of the studies to
examine the effect of study quality on the effect size.[13] However, others have argued that a
better approach is to preserve information about the variance in the study sample, casting as wide
a net as possible, and that methodological selection criteria introduce unwanted subjectivity,
defeating the purpose of the approach.[14]
A funnel plot expected without the file drawer problem. The largest studies converge
on a null result, while smaller studies show more random variability.
A funnel plot expected with the file drawer problem. The largest studies still cluster
around the null result, but the bias against publishing negative studies has caused
the literature as a whole to appear unjustifiably favourable to the hypothesis.
Another potential pitfall is the reliance on the available corpus of published studies, which may
create exaggerated outcomes due to publication bias, as studies which show negative results or
insignificant results are less likely to be published. For example, one may have overlooked
dissertation studies or studies that have never been published. This is not easily solved, as one
cannot know how many studies have gone unreported.[15]
This file drawer problem results in the distribution of effect sizes that are biased, skewed or
completely cut off, creating a serious base rate fallacy, in which the significance of the published
studies is overestimated, as other studies were either not submitted for publication or were
rejected. This should be seriously considered when interpreting the outcomes of a meta-analysis.
[15][16]
The distribution of effect sizes can be visualized with a funnel plot which is a scatter plot of
sample size and effect sizes. In fact, for a certain effect level, the smaller the study, the higher is
the probability to find it by chance. At the same time, the higher the effect level, the lower is the
probability that a larger study can result in that positive result by chance. If many negative
studies were not published, the remaining positive studies give rise to a funnel plot in which effect
size is inversely proportional to sample size; in other words, the higher the effect size, the
smaller the sample size. An important part of the shown effect is then due to chance, which is not
balanced out in the plot because of the absence of unpublished negative data. In contrast, when most
studies were published, the effect shown has no reason to be biased by the study size, so a
symmetric funnel plot results. So, if no publication bias is present, one would expect that there is
no relation between sample size and effect size.[17] A negative relation between sample size and
effect size would imply that studies that found significant effects were more likely to be
published and/or to be submitted for publication. There are several procedures available that
attempt to correct for the file drawer problem, once identified, such as guessing at the cut off part
of the distribution of study effects.
Methods for detecting publication bias have been controversial as they typically have low power
for detection of bias, but also may create false positives under some circumstances.[18] For
instance small study effects, wherein methodological differences between smaller and larger
studies exist, may cause differences in effect sizes between studies that resemble publication
bias.[clarification needed] However, small study effects may be just as problematic for the interpretation
of meta-analyses, and the imperative is on meta-analytic authors to investigate potential sources
of bias. A Tandem Method for analyzing publication bias has been suggested for cutting down
false positive error problems.[19] This Tandem method consists of three stages. Firstly, one
calculates Orwin's fail-safe N, to check how many studies should be added in order to reduce the
test statistic to a trivial size. If this number of studies is larger than the number of studies used in
the meta-analysis, it is a sign that there is no publication bias, as in that case, one needs a lot of
studies to reduce the effect size. Secondly, one can do an Egger's regression test, which tests
whether the funnel plot is symmetrical. As mentioned before: a symmetrical funnel plot is a sign
that there is no publication bias, as the effect size and sample size are not dependent. Thirdly, one
can do the trim-and-fill method, which imputes data if the funnel plot is asymmetrical. Important
to note is that these are just a couple of methods that can be used, but several more exist.
Nevertheless, it is suggested that 25% of meta-analyses in the psychological sciences may have
publication bias.[19] However, low power problems likely remain at issue, and estimations of
publication bias may remain lower than the true amount.
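Egger's regression test, the second stage of the Tandem method above, can be sketched with plain least squares: regress the standardized effect (effect / SE) on precision (1 / SE); an intercept far from zero suggests funnel-plot asymmetry. The study data here are invented for illustration:

```python
# Hypothetical studies sharing the same true effect, with varying precision.
effects = [0.50, 0.50, 0.50]
std_errors = [0.10, 0.20, 0.30]

z = [e / se for e, se in zip(effects, std_errors)]      # standardized effects
precision = [1.0 / se for se in std_errors]             # predictor

# Ordinary least-squares fit of z on precision.
n = len(z)
mean_x = sum(precision) / n
mean_y = sum(z) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(precision, z)) \
        / sum((x - mean_x) ** 2 for x in precision)
intercept = mean_y - slope * mean_x

# With no small-study effect, the intercept is ~0 (a symmetric funnel plot)
# and the slope recovers the common effect size.
print(round(abs(intercept), 6), round(slope, 6))
```

A real analysis would also test whether the intercept differs significantly from zero; packages such as those used in published meta-analyses provide that test directly.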
Most discussions of publication bias focus on journal practices favoring publication of
statistically significant findings. However, questionable research practices, such as reworking
statistical models until significance is achieved, may also favor statistically significant findings
in support of researchers' hypotheses.[20][21] Questionable researcher practices aren't necessarily
sample size dependent, and as such are unlikely to be evident on a funnel plot and may go
undetected by most publication bias detection methods currently in use.
Other weaknesses are Simpson's paradox (two smaller studies may point in one direction, and the
combination study in the opposite direction) and subjectivity in the coding of an effect or
decisions about including or rejecting studies.[22] There are two different ways to measure effect:
correlation or standardized mean difference. The interpretation of effect size is arbitrary, and
there is no universally agreed upon way to weigh the risk. It has not been determined if the
statistically most accurate method for combining results is the fixed, random or quality effect
models.[citation needed]
Agenda-driven bias
The most severe fault in meta-analysis[23] often occurs when the person or persons doing the
meta-analysis have an economic, social, or political agenda such as the passage or defeat of
legislation. People with these types of agendas may be more likely to abuse meta-analysis due to
personal bias. For example, researchers favorable to the author's agenda are likely to have their
studies cherry-picked while those not favorable will be ignored or labeled as "not credible". In
addition, the favored authors may themselves be biased or paid to produce results that support
their overall political, social, or economic goals in ways such as selecting small favorable data
sets and not incorporating larger unfavorable data sets. The influence of such biases on the
results of a meta-analysis is possible because the methodology of meta-analysis is highly
malleable.[22]
A 2011 study done to disclose possible conflicts of interests in underlying research studies used
for medical meta-analyses reviewed 29 meta-analyses and found that conflicts of interests in the
studies underlying the meta-analyses were rarely disclosed. The 29 meta-analyses included 11
from general medicine journals, 15 from specialty medicine journals, and three from the
Cochrane Database of Systematic Reviews. The 29 meta-analyses reviewed a total of 509
randomized controlled trials (RCTs). Of these, 318 RCTs reported funding sources, with 219
(69%) receiving funding from industry[clarification needed]. Of the 509 RCTs, 132 reported author
conflict of interest disclosures, with 91 studies (69%) disclosing one or more authors having
industry financial ties. The information was, however, seldom reflected in the meta-analyses.
Only two (7%) reported RCT funding sources and none reported RCT author-industry ties. The
authors concluded that without acknowledgment of COI due to industry funding or author-industry
financial ties from RCTs included in meta-analyses, readers' understanding and appraisal of the
evidence from the meta-analysis may be compromised.[24]
Steps in a meta-analysis
1. Formulation of the problem
2. Search of literature
3. Selection of studies ('incorporation criteria')
4. Decide which dependent variables or summary measures are allowed. For instance: differences
(discrete data) or means (continuous data); a standardized summary measure such as Hedges' g
divides the difference in group means by the pooled standard deviation.
5. Selection of a meta-regression statistical model: e.g. simple regression, fixed-effect
meta-regression or random-effect meta-regression. Meta-regression is a tool used in meta-analysis to
examine the impact of moderator variables on study effect size using regression-based
techniques. Meta-regression is more effective at this task than are standard regression techniques.
For reporting guidelines, see the Preferred Reporting Items for Systematic Reviews and
Meta-Analyses (PRISMA) statement
Funnel plot
A funnel plot is a graph designed to check for the existence of publication bias in systematic
reviews and meta-analyses. In the absence of publication bias, it assumes that the largest studies
will be plotted near the average, and smaller studies will be spread evenly on both sides of the
average, creating a roughly funnel-shaped distribution. Deviation from this shape can indicate
publication bias
Quotation
Funnel plots, introduced by Light and Pillemer in 1984[1] and discussed in detail by Egger and
colleagues,[2][3] are useful adjuncts to meta-analyses. A funnel plot is a scatterplot of treatment
effect against a measure of study size. It is used primarily as a visual aid for detecting bias or
systematic heterogeneity. A symmetric inverted funnel shape arises from a well-behaved data
set, in which publication bias is unlikely. An asymmetric funnel indicates a relationship between
treatment effect and study size. This suggests the possibility of either publication bias or a
systematic difference between smaller and larger studies (small study effects). Asymmetry can
also arise from use of an inappropriate effect measure. Whatever the cause, an asymmetric funnel
plot leads to doubts over the appropriateness of a simple meta-analysis and suggests that there
needs to be investigation of possible causes.
A variety of choices of measures of study size is available, including total sample size, standard
error of the treatment effect, and inverse variance of the treatment effect (weight). Sterne and
Egger have compared these with others, and conclude that the standard error is to be
recommended.[3] When the standard error is used, straight lines may be drawn to define a region
within which 95% of points might lie in the absence of both heterogeneity and publication bias.[3]
In common with confidence interval plots, funnel plots are conventionally drawn with the
treatment effect measure on the horizontal axis, so that study size appears on the vertical axis,
breaking with the general rule. Since funnel plots are principally visual aids for detecting
asymmetry along the treatment effect axis, this makes them considerably easier to interpret.
Criticism
The funnel plot is not without problems. If high precision studies really are different from low
precision studies with respect to effect size (e.g., due to different populations examined) a funnel
plot may give a wrong impression of publication bias.[4] The appearance of the funnel plot can
change quite dramatically depending on the scale on the y-axis (whether it is the inverse
square error or the trial size).
Null result
In science, a null result is a result without the expected content: that is, the proposed result is
absent.[1] It is an experimental outcome which does not show an otherwise expected effect. This
does not imply a result of zero or nothing, simply a result that does not support the hypothesis.
The term is a translation of the scientific Latin nullus resultarum, meaning "no consequence".
In statistical hypothesis testing, a null result occurs when an experimental result is not
significantly different from what is to be expected under the null hypothesis. While some effect
may in fact be observed, its probability (under the null hypothesis) does not exceed the
significance level, i.e., the threshold set prior to testing for rejection of the null hypothesis. The
significance level varies, but is often set at 0.05 (5%).
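The decision rule described above can be made concrete with a two-sided z-test; the observed statistic here is invented for illustration:

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a z statistic under a standard normal null."""
    return math.erfc(abs(z) / math.sqrt(2))

ALPHA = 0.05   # the significance level set prior to testing
z = 1.2        # an illustrative, modest observed effect

p = two_sided_p(z)
print(round(p, 2), p > ALPHA)  # 0.23 True -> a null result at the 5% level
```

Note the observed effect is not zero; it simply fails to clear the significance threshold, which is exactly what "null result" means here.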
As an example in physics, the results of the Michelson-Morley experiment were of this type, as
the experiment did not detect the expected velocity relative to the postulated luminiferous
aether. This experiment's famous failed detection, commonly referred to as the null result,
contributed to the development of special relativity. Note that the experiment did in fact appear
to measure a non-zero "drift", but the value was far too small to account for the theoretically
expected results; it is generally thought to be inside the noise level of the experiment.
SYSTEMATIC REVIEW
A systematic review (also systematic literature review or structured
literature review, SLR) is a literature review focused on a research question that
tries to identify, appraise, select and synthesize all high quality research evidence
relevant to that question. Systematic reviews of high-quality randomized controlled
trials are crucial to evidence-based medicine.[1] An understanding of systematic
reviews and how to implement them in practice is becoming mandatory for all
professionals involved in the delivery of health care. Besides health interventions,
systematic reviews may concern clinical tests, public health interventions, social
interventions, adverse effects, and economic evaluations.[2][3] Systematic reviews
are not limited to medicine and are quite common in all other sciences where data
are collected, published in the literature, and an assessment of methodological
quality for a precisely defined subject would be helpful.
Characteristics
A systematic review aims to provide an exhaustive summary of current literature relevant to a
research question. The first step of a systematic review is a thorough search of the literature for
relevant papers. The Methodology section of the review will list the databases and citation
indexes searched, such as Web of Science, Embase, and PubMed, as well as any hand searched
individual journals. Next, the titles and the abstracts of the identified articles are checked against
pre-determined criteria for eligibility and relevance. This list will always depend on the research
problem. Each included study may be assigned an objective assessment of methodological
quality preferably using a method conforming to the Preferred Reporting Items for Systematic
Reviews and Meta-Analyses (PRISMA) statement (the current guideline)[5] or the high quality
standards of Cochrane collaboration.[6]
Systematic reviews often, but not always, use statistical techniques (meta-analysis) to combine
results of the eligible studies, or at least use scoring of the levels of evidence depending on the
methodology used. An additional rater may be consulted to resolve any scoring differences
between raters.[4] Systematic review is often applied in the biomedical or healthcare context, but
it can be applied in any field of research. Groups like the Campbell Collaboration are promoting
the use of systematic reviews in policy-making beyond just healthcare.
A systematic review uses an objective and transparent approach for research synthesis, with the
aim of minimizing bias. While many systematic reviews are based on an explicit quantitative
meta-analysis of available data, there are also qualitative reviews which adhere to the standards
for gathering, analyzing and reporting evidence. The EPPI-Centre has been influential in
developing methods for combining both qualitative and quantitative research in systematic
reviews.[7]
Recent developments in systematic reviews include realist reviews,[8] and the meta-narrative
approach.[9][10] These approaches try to overcome the problems of methodological and
epistemological heterogeneity in the diverse literatures existing on some subjects.
Cochrane Collaboration
The Cochrane Collaboration is a group of over 31,000 specialists in healthcare who
systematically review randomised trials of the effects of prevention, treatments and rehabilitation
as well as health systems interventions. When appropriate, they also include the results of other
types of research. Cochrane Reviews are published in The Cochrane Database of Systematic
Reviews section of The Cochrane Library. The 2010 impact factor for The Cochrane Database of
Systematic Reviews was 6.186, and it was ranked 10th in the Medicine, General & Internal
category.[13]
The Cochrane Collaboration provides a handbook for systematic reviewers of interventions
which "provides guidance to authors for the preparation of Cochrane Intervention reviews."[14]
The Cochrane Handbook outlines eight general steps for preparing a systematic review:[14]
1. Defining the review question(s) and developing criteria for including studies
2. Searching for studies
3. Selecting studies and collecting data
4. Assessing risk of bias in included studies
5. Analysing data and undertaking meta-analyses
6. Addressing reporting biases
7. Presenting results and "summary of findings" tables
8. Interpreting results and drawing conclusions
The Cochrane Handbook forms the basis of two sets of standards for the conduct and reporting
of Cochrane Intervention Reviews (MECIR: Methodological Expectations of Cochrane
Intervention Reviews).[15]
A 2003 study suggested that extending searches beyond major databases, perhaps into grey
literature, would increase the effectiveness of reviews.[18]
Systematic reviews are increasingly prevalent in other fields, such as international development
research.[19] Subsequently, a number of donors, most notably the UK Department for
International Development (DFID) and AusAid, are focusing more attention and resources on
testing the appropriateness of systematic reviews in assessing the impacts of development and
humanitarian interventions.[19]
One concern is that the methods used to conduct a systematic review are sometimes changed once
researchers see the available trials they are going to include.[20]
Galbraith plot
In statistics, a Galbraith plot (also known as Galbraith's radial plot or just radial plot) is one
way of displaying several estimates of the same quantity that have different standard errors.
Surveillance
Surveillance is the monitoring of the behavior, activities, or other changing information, usually
of people for the purpose of influencing, managing, directing, or protecting them.[2] This can
include observation from a distance by means of electronic equipment (such as CCTV cameras),
or interception of electronically transmitted information (such as Internet traffic or phone calls);
and it can include simple, relatively no- or low-technology methods such as human intelligence
agents and postal interception. The word surveillance comes from a French phrase for "watching
over" ("sur" means "from above" and "veiller" means "to watch"), and is in contrast to more
recent developments such as sousveillance.
Surveillance is very useful to governments and law enforcement to maintain social control,
recognize and monitor threats, and prevent/investigate criminal activity. With the advent of
programs such as the Total Information Awareness program and ADVISE, technologies such as
high speed surveillance computers and biometrics software, and laws such as the
Communications Assistance for Law Enforcement Act, governments now possess an
unprecedented ability to monitor the activities of their subjects.[6]
However, many civil rights and privacy groups, such as the Electronic Frontier Foundation and
American Civil Liberties Union, have expressed concern that by allowing continual increases in
government surveillance of citizens we will end up in a mass surveillance society, with
extremely limited, or non-existent political and/or personal freedoms. Fears such as this have led
to numerous lawsuits such as Hepting v. AT&T.
GDP
Gross domestic product (GDP) is defined by OECD as "an aggregate measure of
production equal to the sum of the gross values added of all resident institutional
units engaged in production (plus any taxes, and minus any subsidies, on products
not included in the value of their outputs)."
GDP estimates are commonly used to measure the economic performance of a whole country or
region, but can also measure the relative contribution of an industry sector. This is possible
because GDP is a measure of 'value added' rather than sales; it adds each firm's value added (the
value of its output minus the value of goods that are used up in producing it). For example, a
firm buys steel and adds value to it by producing a car; double counting would occur if GDP
added together the value of the steel and the value of the car.[3] Because it is based on value
added, GDP also increases when an enterprise reduces its use of materials or other resources
('intermediate consumption') to produce the same output.
The more familiar use of GDP estimates is to calculate the growth of the economy from year to
year (and recently from quarter to quarter). The pattern of GDP growth is held to indicate the
success or failure of economic policy and to determine whether an economy is 'in recession'.
History
The concept of GDP was first developed by Simon Kuznets for a US Congress report in 1934.[4]
In this report, Kuznets warned against its use as a measure of welfare (see below under
limitations and criticisms). After the Bretton Woods conference in 1944, GDP became the main
tool for measuring a country's economy.[5] At that time Gross National Product (GNP) was the
preferred estimate, which differed from GDP in that it measured production by a country's
citizens at home and abroad rather than its 'resident institutional units' (see OECD definition
above). The switch to GDP came in the 1990s.
The history of the concept of GDP should be distinguished from the history of changes in ways
of estimating it. The value added by firms is relatively easy to calculate from their accounts, but
the value added by the public sector, by financial industries, and by intangible asset creation is
more complex. These activities are increasingly important in developed economies, and the
international conventions governing their estimation and their inclusion or exclusion in GDP
regularly change in an attempt to keep up with industrial advances. In the words of one academic
economist "The actual number for GDP is therefore the product of a vast patchwork of statistics
and a complicated set of processes carried out on the raw data to fit them to the conceptual
framework."[6]
Angus Maddison calculated historical GDP figures going back to 1830 and before.
Production approach
This approach mirrors the OECD definition given above.
1. Estimate the gross value of domestic output from the various economic
activities;
2. Determine the intermediate consumption, i.e., the cost of material, supplies
and services used to produce final goods or services.
3. Deduct intermediate consumption from gross value to obtain the gross value
added.
For measuring output of domestic product, economic activities (i.e. industries) are classified into
various sectors. After classifying economic activities, the output of each sector is calculated by
either of the following two methods:
1. By multiplying the output of each sector by their respective market price and
adding them together
2. By collecting data on gross sales and inventories from the records of
companies and adding them together
The gross value of all sectors is then added to get the gross value added (GVA) at factor cost.
Subtracting each sector's intermediate consumption from gross output gives the GDP at factor
cost. Adding indirect tax minus subsidies in GDP at factor cost gives the "GDP at producer
prices".
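As an illustration of the steps above, here is a short Python sketch for a toy two-sector economy; the sector names and all amounts are invented for the example:

```python
# Hypothetical two-sector economy (all figures in billions, invented)
sectors = {
    "manufacturing": {"gross_output": 500.0, "intermediate_consumption": 300.0},
    "services":      {"gross_output": 400.0, "intermediate_consumption": 150.0},
}

# Gross value added = gross value of output minus intermediate consumption
gva_factor_cost = sum(s["gross_output"] - s["intermediate_consumption"]
                      for s in sectors.values())

# Adding indirect taxes minus subsidies gives GDP at producer prices
indirect_taxes, subsidies = 40.0, 10.0
gdp_producer_prices = gva_factor_cost + indirect_taxes - subsidies

print(gva_factor_cost)      # 450.0
print(gdp_producer_prices)  # 480.0
```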
Income approach
The second way of estimating GDP is to use "the sum of primary incomes distributed by resident
producer units".[2]
If GDP is calculated this way it is sometimes called gross domestic income (GDI), or GDP (I).
GDI should provide the same amount as the expenditure method described later. (By definition,
GDI = GDP. In practice, however, measurement errors will make the two figures slightly off
when reported by national statistical agencies.)
This method measures GDP by adding incomes that firms pay households for factors of
production they hire - wages for labour, interest for capital, rent for land and profits for
entrepreneurship.
The US "National Income and Expenditure Accounts" divide incomes into five categories:
1. Wages, salaries, and supplementary labour income
2. Corporate profits
3. Interest and miscellaneous investment income
4. Farmers' incomes
5. Income from non-farm unincorporated businesses
These five income components sum to net domestic income at factor cost.
Two adjustments must be made to get GDP:
1. Indirect taxes minus subsidies are added to get from factor cost to market
prices.
2. Depreciation (or capital consumption allowance) is added to get from net
domestic product to gross domestic product.
Total income can be subdivided according to various schemes, leading to various formulae for
GDP measured by the income approach. A common one is:
Nominal GDP Income Approach [2]
GDP = compensation of employees + gross operating surplus + gross mixed
income + taxes less subsidies on production and imports
GDP = COE + GOS + GMI + TP&M − SP&M, where TP&M is taxes on production and
imports and SP&M is subsidies on production and imports.
The sum of COE, GOS and GMI is called total factor income; it is the income of all of the
factors of production in society. It measures the value of GDP at factor (basic) prices. The
difference between basic prices and final prices (those used in the expenditure calculation) is the
total taxes and subsidies that the government has levied or paid on that production. So adding
taxes less subsidies on production and imports converts GDP at factor cost to GDP(I).
Total factor income is also sometimes expressed as:
Total factor income = employee compensation + corporate profits +
proprietor's income + rental income + net interest [9]
Yet another formula for GDP by the income method is:[citation needed]
GDP = R + I + P + SA + W
where R : rents
I : interests
P : profits
SA : statistical adjustments (corporate income taxes, dividends, undistributed corporate profits)
W : wages.
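A numeric sketch of the income approach, using invented figures, shows how total factor income plus taxes less subsidies on production and imports yields GDP(I):

```python
# Hypothetical income-approach figures (billions); all values invented
coe = 550.0           # compensation of employees
gos = 250.0           # gross operating surplus
gmi = 120.0           # gross mixed income
taxes_p_m = 90.0      # taxes on production and imports
subsidies_p_m = 30.0  # subsidies on production and imports

total_factor_income = coe + gos + gmi     # GDP at factor (basic) prices
gdp_income = total_factor_income + taxes_p_m - subsidies_p_m

print(total_factor_income)  # 920.0
print(gdp_income)           # 980.0
```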
Expenditure approach
The third way to estimate GDP is to calculate the sum of the final uses of goods and services (all
uses except intermediate consumption) measured in purchasers' prices.[2]
In economics, most things produced are produced for sale and then sold. Therefore, measuring
the total expenditure of money used to buy things is a way of measuring production. This is
known as the expenditure method of calculating GDP. Note that if you knit yourself a sweater, it
is production but does not get counted as GDP because it is never sold. Sweater-knitting is a
small part of the economy, but if one counts some major activities such as child-rearing
(generally unpaid) as production, GDP ceases to be an accurate indicator of production.
Similarly, if there is a long term shift from non-market provision of services (for example
cooking, cleaning, child rearing, do-it yourself repairs) to market provision of services, then this
trend toward increased market provision of services may mask a dramatic decrease in actual
domestic production, resulting in overly optimistic and inflated reported GDP. This is
particularly a problem for economies which have shifted from production economies to service
economies.
A fully equivalent definition is that GDP (Y) is the sum of final consumption expenditure
(FCE), gross capital formation (GCF), and net exports (X − M):
Y = FCE + GCF + (X − M)
FCE can then be further broken down by three sectors (households, governments and non-profit
institutions serving households) and GCF by five sectors (non-financial corporations, financial
corporations, households, governments and non-profit institutions serving households). The
advantage of this second definition is that expenditure is systematically broken down, firstly, by
type of final use (final consumption or capital formation) and, secondly, by sectors making the
expenditure, whereas the first definition partly follows a mixed delimitation concept by type of
final use and sector.
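With invented figures, the identity Y = FCE + GCF + (X − M) reduces to a one-line calculation:

```python
# Hypothetical expenditure-approach figures (billions), invented for illustration
fce = 700.0                      # final consumption expenditure
gcf = 200.0                      # gross capital formation
exports, imports = 150.0, 120.0  # X and M

gdp_expenditure = fce + gcf + (exports - imports)
print(gdp_expenditure)  # 930.0
```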
Note that C (consumption), I (investment), and G (government spending) in the familiar
expenditure identity GDP = C + I + G + (X − M) are expenditures on final goods and services;
expenditures on intermediate goods and services do not count. (Intermediate goods and services
are those used by businesses to produce other goods and services within the accounting year.[10])
According to the U.S. Bureau of Economic Analysis, which is responsible for calculating the
national accounts in the United States, "In general, the source data for the expenditures
components are considered more reliable than those for the income components [see income
method, below]
GROSS NATIONAL INCOME (GNI)
The Gross national income (GNI) is the total domestic and foreign output claimed
by residents of a country, consisting of gross domestic product (GDP) plus factor
incomes earned by foreign residents, minus income earned in the domestic
economy by nonresidents.
Gross national product (GNP) is the market value of all the products and services produced in
one year by labour and property supplied by the citizens of a country. Unlike Gross Domestic
Product (GDP), which defines production based on the geographical location of production, GNP
allocates production based on location of ownership.
GNP does not distinguish between qualitative improvements in the state of the technical arts
(e.g., increasing computer processing speeds), and quantitative increases in goods (e.g., number
of computers produced), and considers both to be forms of "economic growth".[1]
When a country's capital or labour resources are employed outside its borders, or
when a foreign firm is operating in its territory, GDP and GNP can produce different
measures of total output. In 2009 for instance, the United States estimated its GDP
at $14.119 trillion, and its GNP at $14.265 trillion.
Meta analysis
In statistics, a meta-analysis refers to methods that focus on contrasting and combining
results from different studies, in the hope of identifying patterns among study results,
sources of disagreement among those results, or other interesting relationships that
may come to light in the context of multiple studies. [1] In its simplest form, meta-analysis
is normally done by identification of a common measure of effect size. A weighted
average of that common measure is the output of a meta-analysis. The weighting is
related to sample sizes within the individual studies. More generally there are other
differences between the studies that need to be allowed for, but the general aim of a
meta-analysis is to more powerfully estimate the true effect size as opposed to a less
precise effect size derived in a single study under a given single set of assumptions and
conditions. A meta-analysis therefore gives a thorough summary of several studies that
have been done on the same topic, and provides the reader with extensive information
on whether an effect exists and what size that effect has.
Meta analysis can be thought of as "conducting research about research."
Meta-analyses are often, but not always, important components of a systematic
review procedure. For instance, a meta-analysis may be conducted on several clinical
trials of a medical treatment, in an effort to obtain a better understanding of how well the
treatment works. Here it is convenient to follow the terminology used by the Cochrane
Collaboration,[2] and use "meta-analysis" to refer to statistical methods of combining
evidence, leaving other aspects of 'research synthesis' or 'evidence synthesis', such as
combining information from qualitative studies, for the more general context
of systematic reviews.
Meta-analysis forms part of a framework called estimation statistics which relies
on effect sizes, confidence intervals and precision planning to guide data analysis, and
is an alternative to null hypothesis significance testing.
Basically, it produces a weighted average of the included study results and this
approach has several advantages:
The precision and accuracy of estimates can be improved as more data is used.
This, in turn, may increase the statistical power to detect an effect.
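As a sketch of this weighting, assuming a fixed-effect model and invented effect sizes and standard errors, the pooled estimate is an inverse-variance weighted average:

```python
# Fixed-effect (inverse-variance) meta-analysis sketch; the three effect
# sizes and standard errors below are invented for illustration.
effects = [0.30, 0.15, 0.25]
ses = [0.10, 0.05, 0.08]

weights = [1 / se**2 for se in ses]            # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5          # smaller than any single SE

ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(round(pooled, 3))      # ~0.197
print(round(pooled_se, 3))   # more precise than any individual study
```

Because the pooled standard error shrinks as studies are added, the confidence interval is tighter than that of any single study, which is the gain in precision and power described above.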
Pitfalls
A meta-analysis of several small studies does not predict the results of a single large study.
[9]
Some have argued that a weakness of the method is that sources of bias are not controlled
by the method: a good meta-analysis of badly designed studies will still result in bad statistics.
[10]
This would mean that only methodologically sound studies should be included in a meta-analysis, a practice called 'best evidence synthesis'. [10] Other meta-analysts would include
weaker studies, and add a study-level predictor variable that reflects the methodological quality
of the studies to examine the effect of study quality on the effect size.[11] However, others have
argued that a better approach is to preserve information about the variance in the study sample,
casting as wide a net as possible, and that methodological selection criteria introduce unwanted
subjectivity, defeating the purpose of the approach.[12]
Decide whether unpublished studies are included to avoid publication bias (file
drawer problem).
4. Decide which dependent variables or summary measures are allowed, for instance
differences (for discrete data) or means (for continuous data). Hedges' g is a popular
summary measure for continuous data, standardized to eliminate scale differences:
g = (μ1 − μ2) / s, in which μ1 and μ2 are the treatment and control means and s² is the
pooled variance.
Meta-analysis combines the results of several studies.
What is meta-analysis?
Meta-analysis is the use of statistical methods to combine results
of individual studies. This allows us to make the best use of all the
information we have gathered in our systematic review by increasing
the power of the analysis. By statistically combining the results of
similar studies we can improve the precision of our estimates of
treatment effect, and assess whether treatment effects are similar in
similar situations. The decision about whether or not the results of
individual studies are similar enough to be combined in a meta-analysis is essential to the validity of the result, and will be covered
in the next module on heterogeneity. In this module we will look at
the process of combining studies and outline the various methods
available.
There are many approaches to meta-analysis. We have discussed
already that meta-analysis is not simply a matter of adding up
numbers of participants across studies (although unfortunately some
non-Cochrane reviews do this). This is the 'pooling participants' or
'treat-as-one-trial' method and we will discuss it in a little more
detail now.
Trial 1            Retained   Total   Risk
Daycare               19        36    0.528
Control               13        19    0.684
Risk difference                      −0.16

Trial 2            Retained   Total   Risk
Daycare                6        58    0.1034
Control                7        65    0.1077
Risk difference                      −0.004

What would happen if we pooled all the children as if they were part
of a single trial?

Pooled results     Retained   Total   Risk
Daycare               25        94    0.266
Control               20        84    0.238
Risk difference                      +0.03   WRONG!

We don't add up patients across trials.
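The numbers above can be checked in a few lines; this Python sketch reproduces both the per-trial risk differences and the sign flip produced by naively pooling patients:

```python
# The two trials from the table above: (retained, total) per arm
trials = [
    {"daycare": (19, 36), "control": (13, 19)},
    {"daycare": (6, 58),  "control": (7, 65)},
]

def risk(events, total):
    return events / total

# Correct: compute a risk difference within each trial
rds = [risk(*t["daycare"]) - risk(*t["control"]) for t in trials]
print([round(rd, 3) for rd in rds])        # both negative: daycare does better

# Wrong: pool patients across trials as if they were one big trial
d_events = sum(t["daycare"][0] for t in trials)
d_total = sum(t["daycare"][1] for t in trials)
c_events = sum(t["control"][0] for t in trials)
c_total = sum(t["control"][1] for t in trials)
pooled_rd = risk(d_events, d_total) - risk(c_events, c_total)
print(round(pooled_rd, 3))                 # positive: the sign has flipped!
```

The sign flips because the two trials have very different baseline risks and very different arm sizes, so pooling patients lets the between-trial difference masquerade as a treatment effect.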
Economic growth
Definition of 'Economic Growth'
An increase in the capacity of an economy to produce goods and services,
compared from one period of time to another. Economic growth can be
measured in nominal terms, which include inflation, or in real terms,
which are adjusted for inflation. For comparing one country's economic
growth to another, GDP or GNP per capita should be used as these take
into account population differences between countries.
Economic growth is the increase in the market value of the goods and services
produced by an economy over time. It is conventionally measured as the percent rate of
increase in real gross domestic product, or real GDP.[1] Of more importance is the
growth of the ratio of GDP to population (GDP per capita), which is also called per
capita income. An increase in per capita income is referred to as intensive growth. GDP
growth caused only by increases in population or territory is called extensive growth.[2]
Growth is usually calculated in real terms i.e., inflation-adjusted terms to eliminate
the distorting effect of inflation on the price of goods produced. In economics, "economic
growth" or "economic growth theory" typically refers to growth of potential output, i.e.,
production at "full employment".
As an area of study, economic growth is generally distinguished from development
economics. The former is primarily the study of how countries can advance their
economies. The latter is the study of the economic aspects of the development process
in low-income countries. See also Economic development.
Since economic growth is measured as the annual percent change of gross domestic
product (GDP), it has all the advantages and drawbacks of that measure. For example,
GDP only measures the market economy, which tends to overstate growth during the
change over from a farming economy with household production. [3] An adjustment was
made for food grown on and consumed on farms, but no correction was made for other
household production. Also, there is no allowance in GDP calculations for depletion of
natural resources.
Pros
1. Quality of life
Cons
1. Resource depletion
2. Environmental impact
3. Global warming
Inflation graphs
This view is associated with Milton Friedman, who made a large part of his academic
reputation by reviving, and giving evidence for, the role of money growth in causing inflation.
If the growth rate of real GDP increases and the growth rates of M and V are
held constant, the growth rate of the price level must fall. But the growth
rate of the price level is just another term for the inflation rate; therefore,
inflation must fall. An increase in the rate of economic growth means more
goods for money to chase, which puts downward pressure on the inflation
rate. If for example the money supply grows at 7% a year and velocity is
constant and if annual economic growth is 3%, inflation must be 4% (more
exactly, 3.9%). If, however, economic growth rises to 4%, inflation falls to 3%
(actually, 2.9%.)
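The arithmetic in this example can be written down directly; the helper function below is a sketch of the growth-rate form of the quantity equation (the function name is mine):

```python
# Growth-rate form of the quantity equation MV = PY:
# inflation ≈ money growth + velocity growth − real GDP growth.
def inflation(m_growth, v_growth, y_growth, exact=False):
    if not exact:
        return m_growth + v_growth - y_growth
    # Exact compounding version: (1+m)(1+v)/(1+y) − 1
    return (1 + m_growth) * (1 + v_growth) / (1 + y_growth) - 1

print(round(inflation(0.07, 0.0, 0.03), 3))              # 0.04
print(round(inflation(0.07, 0.0, 0.03, exact=True), 3))  # 0.039
print(round(inflation(0.07, 0.0, 0.04, exact=True), 3))  # 0.029
```

The approximate form gives the 4% in the text, and the exact compounding form gives the "more exactly, 3.9%" and "actually, 2.9%" figures.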
The April numbers for the index of industrial production (IIP), released on Thursday,
brought some cheer on the growth front. The IIP grew by 3.4 per cent, its highest in
a long time. April, of course, was a month in which the entire country was deep in
electioneering. Therefore, some sort of stimulus from all the campaign spending
might have been reasonable to expect. The biggest beneficiary of this was the
category of "electrical machinery", which grew by over 66 per cent year on year,
reflecting all those campaign rallies, with their generators and audio equipment.
The other significant contributor to the growth in the overall index was electricity,
which grew by almost 12 per cent year on year, significantly higher than its growth
during 2013-14. Typically, a growth acceleration that relies heavily on one or two
sectoral surges does not have much staying power. It would require an across-the-board show of resurgence to allow people to conclude that a sustainable recovery
was under way. That is clearly not happening yet. However, these numbers do
reinforce the perception that things are not getting worse as far as growth is
concerned.
Likewise, there was some room for relief on the inflation front. The consumer price
index, or CPI, numbers for May 2014 showed headline inflation declining slightly,
from 8.6 per cent in April to 8.3 per cent in May. The Central Statistical Office is now
separately reporting a sub-index labelled consumer food price index, or CFPI, which
provides some convenience to observers. The index itself, though, offers little cheer.
It came down modestly between April and May, largely explaining the decline in the
headline rate, but is still significantly above nine per cent. At a time when there are
concerns about the performance of the monsoon and the impact of that on food
prices, these numbers should be a major cause of worry for the government. Milk,
eggs, fish and meat, vegetables and fruit contributed to the persistence of food
inflation. But cereals are also kicking in, as they have been for the past couple of
years, and the government must use its large stocks of rice and wheat quickly to
dampen at least this source of food inflation. It would be unconscionable not to do
so when risks of a resurgence of inflation are high. The larger point on inflation,
though, is how stubborn the rate is despite sluggish growth and high interest rates.
The limitations of monetary policy are being repeatedly underscored.
Against this backdrop, the government's prioritisation of its fight against inflation is
an extremely important development. It has to move quickly from intent to action
on a variety of reforms, from procurement policy to subsidies and to investment in
rural infrastructure. Many of these will generate benefits only over the medium
term. So those expecting a growth stimulus from the Reserve Bank of India any time
soon are bound to be in for a disappointment. Even so, room for optimism should
come from the fact that this government does have the capacity to design and
execute long-term strategies with complete credibility. The simple equation that it
needs to keep in mind is that inflation will not subside unless food prices moderate
and growth will not recover unless inflation subsides.
And clarified: Beta-carotene is a vitamin in carrots and many other vegetables, but you can
also buy it in pure form as pills. There is reason to believe that beta-carotene might help to
prevent lung cancer in cigarette smokers. How do you think you can find out whether
beta-carotene will have this effect?
Suppose you have two neighbors, both heavy smokers of the same age, both males.
The neighbor who doesn't eat many vegetables gets lung cancer, but the neighbor who eats
a lot of vegetables and is fond of carrots doesn't. Do you think this provides good evidence
that beta-carotene prevents lung cancer?
There is laughter in the room, so they don't believe in n=1 experiments/case reports (still,
how many people think smoking does not necessarily do any harm because their
chain-smoking father reached his nineties in good health?).
I show them the following slide with the lowest box only.
O.k. What about this study? I've a group of lung cancer patients, who smoke(d)
heavily. I ask them to fill in a questionnaire about their eating habits in the past and take a
blood sample, and I do the same with a similar group of smokers without cancer (controls).
Analysis shows that smokers developing lung cancer eat much less beta-carotene
containing vegetables and have lower blood levels of beta-carotene than the smokers not
developing cancer. Does this mean that beta-carotene is preventing lung cancer?
Humming in the audience, till one man says: perhaps some people don't remember exactly
what they eat, and then several people object that it is just an association and you do not
yet know whether beta-carotene really causes this. Right! I show the box case-control
studies.
Then consider this study design. I follow a large cohort of healthy heavy smokers
and look at their eating habits (including use of supplements) and take regular blood
samples. After a long follow-up some heavy smokers develop lung cancer whereas others
don't. Now it turns out that the group that did not develop lung cancer had significantly
more beta-carotene in their blood and ate larger amounts of beta-carotene containing food.
What do you think about that then?
Now the room is a bit quiet, there is some hesitation. Then someone says: well it is more
convincing and finally the chair says: but it may still not be the carrots, but something else
in their food or they may just have other healthy living habits (including eating carrots).
Cohort-study appears on the slide (What a perfect audience!)
O.k., you're not convinced that these study designs give conclusive evidence. How could we then establish that beta-carotene lowers the risk of lung cancer in heavy smokers? Suppose you really wanted to know, how do you set up such a study?
Grinning. Someone says: by giving half of the smokers beta-carotene and the other half nothing. Or a placebo, someone else says. Right! Randomized Controlled Trial is on top of the slide. And there is not much room left for another box, so we are there. I only add that the best way to do it is to do it double blinded.
Then I reveal that all this research has really been done. There have been numerous observational studies (case-control as well as cohort studies) showing a consistent negative correlation between the intake of beta-carotene and the development of lung cancer in heavy smokers. The same has been shown for vitamin E.
Knowing that, I ask the audience: Would you as a heavy smoker participate in a trial where you are randomly assigned to one of the following groups: 1. beta-carotene, 2. vitamin E, 3. both or 4. neither vitamin (placebo)?
The recruitment fails. Some people say they don't believe in supplements; others say that it would be far more effective if smokers quit smoking (laughter). Just 2 individuals said they would at least consider it. But they thought there was a snag in it, and they were right. Such studies have been done, and did not give the expected positive results.
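The four-arm assignment the audience was asked about can be sketched in a few lines of code. This is an illustrative sketch only: the participant IDs, the `randomize` helper and the seed are invented here, and real trials use block randomization to keep the arms balanced as recruitment proceeds.

```python
import random

# The four arms of the 2x2 factorial prevention trial described above.
ARMS = ["beta-carotene", "vitamin E", "both", "placebo"]

def randomize(participants, seed=None):
    """Assign each participant to one of the four arms at random.

    A hypothetical helper for illustration; a seed makes the
    assignment reproducible.
    """
    rng = random.Random(seed)
    return {person: rng.choice(ARMS) for person in participants}

# Hypothetical participant IDs; a real trial of this kind enrolled
# tens of thousands of heavy smokers.
assignment = randomize(["P001", "P002", "P003", "P004"], seed=42)
for person, arm in sorted(assignment.items()):
    print(person, "->", arm)
```

The point of the random assignment is that, with enough participants, the four groups differ systematically in nothing but the supplement they receive, so any difference in outcomes can be attributed to the supplement.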
In the first large RCT (appr. 30,000 male smokers!), the ATBC Cancer Prevention Study, beta-carotene actually increased the incidence of lung cancer by 18 percent and overall mortality by 8 percent (although harmful effects faded after men stopped taking the pills). Similar results were obtained in the CARET study, but not in a third RCT, the Physicians' Health Study, the only difference being that the latter trial was performed with both smokers and non-smokers.
It is now generally thought that cigarette smoke causes beta-carotene to break down into detrimental products, a process that can be halted by other anti-oxidants (normally present in food). Whether vitamins act positively (anti-oxidant) or negatively (pro-oxidant) depends very much on the dose, the situation, and whether there is a shortage of such nutrients or not.
I found that this way of explaining study designs to well-educated laymen was very effective and fun!
The take-home message is that no matter how reproducibly observational studies seem to indicate a certain effect, better evidence is obtained by randomized controlled trials. It also shows that scientists should be very prudent in translating observational findings directly into particular lifestyle advice.
On the other hand, I wonder whether all hypotheses have to be tested in a costly RCT (the costs for the ATBC trial were $46 million). Shouldn't there be very, very solid grounds to start a prevention study with dietary supplements in healthy individuals? Aren't there any dangers? Personally I think we should be very restrictive about these chemopreventive studies. Until now most chemopreventive studies have not met the high expectations, anyway.
And what about coenzyme-Q and komkommerslank (a Dutch cucumber-based slimming product)? Besides that I do not expect the evidence to be convincing, tiredness can obviously best be combated by rest, and I already eat enough cucumbers. ;)
To be continued
countries have been linked to the nutrition transition to the Western diet. [4]
An important advancement in the understanding of risk-modifying factors for cancer was
made by examining maps of cancer mortality rates. The map of colon cancer mortality
rates in the United States was used by the brothers Cedric and Frank C. Garland to
propose the hypothesis that solar ultraviolet B (UVB) radiation, through vitamin D
production, reduced the risk of cancer (the UVB-vitamin D-cancer hypothesis). [5] Since
then many ecological studies have been performed relating higher incidence or mortality rates of over 20 types of cancer to lower solar UVB doses. [6]
Links between diet and Alzheimer's disease have been studied using both geographical and temporal ecological studies. The first paper linking diet to risk of Alzheimer's disease was a multicountry ecological study published in 1997. [7] It used prevalence of Alzheimer's disease in 11 countries along with dietary supply factors, finding that total fat and total energy (caloric) supply were strongly correlated with prevalence, while fish and cereals/grains were inversely correlated (i.e., protective). Diet is now considered an important risk-modifying factor for Alzheimer's disease. [8] Recently it was reported that the rapid rise of Alzheimer's disease in Japan between 1985 and 2007 was likely due to the nutrition transition from the traditional Japanese diet to the Western diet. [9]
Another example of the use of temporal ecological studies relates to influenza. John
Cannell and associates hypothesized that the seasonality of influenza was largely
driven by seasonal variations in solar UVB doses and calcidiol levels.[10] A randomized
controlled trial involving Japanese school children found that taking 1000 IU per day
vitamin D3 reduced the risk of type A influenza by two-thirds. [11]
Ecological studies are particularly useful for generating hypotheses since they can use
existing data sets and rapidly test the hypothesis. The advantages of the ecological
studies include the large number of people that can be included in the study and the
large number of risk-modifying factors that can be examined.
The term ecological fallacy means that the findings for the groups may not apply to
individuals in the group. However, this term also applies to observational studies
and randomized controlled trials. All epidemiological studies include some people who
have health outcomes related to the risk-modifying factors studied and some who do
not. For example, genetic differences affect how people respond to pharmaceutical
drugs. Thus, concern about the ecological fallacy should not be used to disparage
ecological studies. The more important consideration is that ecological studies should
include as many known risk-modifying factors for any outcome as possible, adding
others if warranted. Then the results should be evaluated by other methods, using, for
example, Hill's criteria for causality in a biological system.
The ecological fallacy may occur when conclusions about individuals are drawn from analyses
conducted on grouped data. The nature of this type of analysis tends to overestimate the
degree of association between variables.
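The group-versus-individual mismatch described above can be made concrete with a toy calculation. Everything here is invented for illustration: three groups in which the individual-level association is negative, while the association between the group averages is positive.

```python
# Toy illustration of the ecological fallacy. All numbers are invented.

def covariance(xs, ys):
    """Population covariance; its sign gives the direction of association."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

# Three groups of (x, y) individuals. Within each group y falls as x
# rises, but groups with a higher average x also have a higher average y.
groups = [
    [(1, 10), (2, 9), (3, 8)],
    [(4, 14), (5, 13), (6, 12)],
    [(7, 18), (8, 17), (9, 16)],
]

# Individual-level association inside every group: negative.
for g in groups:
    xs, ys = zip(*g)
    assert covariance(xs, ys) < 0

# Group-level (ecological) association between the group means: positive.
mean_x = [sum(x for x, _ in g) / len(g) for g in groups]
mean_y = [sum(y for _, y in g) / len(g) for g in groups]
assert covariance(mean_x, mean_y) > 0
```

A study that only saw the three group averages would conclude the association is positive, the opposite of what holds for every individual in every group.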
Survival rate
Life table
In actuarial science and demography, a life table (also called a mortality
table or actuarial table) is a table which shows, for each age, what the probability is
that a person of that age will die before his or her next birthday ("probability of death").
From this starting point, a number of inferences can be derived.
Life tables are also used extensively in biology and epidemiology. The concept is also of
importance in product life cycle management.
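The core life-table quantity, the probability of death at each age, can be sketched as follows. The survivor counts (the l(x) column) are invented for illustration; the formula q(x) = (l(x) - l(x+1)) / l(x) is the standard one.

```python
# Sketch of the basic life-table quantity: given the number of
# survivors l(x) at each exact age x, the probability of dying before
# the next birthday is q(x) = (l(x) - l(x+1)) / l(x).
# The survivor counts below are invented for illustration.

survivors = {0: 100000, 1: 99300, 2: 99250, 3: 99220}

def prob_of_death(lx, age):
    """q(x): the chance that a person alive at `age` dies before age + 1."""
    return (lx[age] - lx[age + 1]) / lx[age]

for age in range(3):
    print(f"q({age}) = {prob_of_death(survivors, age):.5f}")
```

From the q(x) column, actuaries derive the other life-table quantities, such as expected remaining years of life at each age.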
For those in the age range covered by the chart, the "5 yr" curve indicates the group that will reach beyond the life expectancy. This curve represents the need for support that covers longevity requirements.
The "20 yr" and "25 yr" curves indicate the continuing decline of the life expectancy value as age increases. The differences between the curves are very pronounced starting around the age of 50 to 55 and ought to be used for planning based upon expectation models.
The "10 yr" and "15 yr" curves can be thought of as the trajectory followed by the life expectancy curve for those along the median, which indicates that the age of 90 is not out of the question.
A "life table" is a kind of bookkeeping system that ecologists often use to keep track of stage-specific mortality in the populations they study. It is an especially useful tool in pest management: if the life stage with the highest natural mortality can be identified and targeted, pest populations can often be suppressed without any other control methods.
To create a life table, an ecologist follows the life history of many individuals in a population, keeping track of how many offspring each female produces, when each one dies, and what caused its death.
Suppose, for example, that an average female moth lays 200 eggs, half of the eggs are eaten by predators, 90% of the larvae will die from parasitization, and three-fifths of the pupae will freeze to death in the winter. (These numbers are invented for illustration; real life tables are based on a large database of observations.) A life table can be created from the above data.
The starting point is 200 eggs laid per female. This number represents the maximum biotic potential of the species (i.e. the greatest number of offspring that could be produced in one generation under ideal conditions).
The first line of the life table lists the main cause(s) of death, the number dying, and the percent mortality during the egg stage. In this example, an average of only 100 individuals survive the egg stage and become larvae. The second line of the table lists the mortality experience of these 100 larvae: only 10 of them survive to become pupae (90% mortality of the larvae). The third line of the table lists the mortality experience of the 10 pupae -- three-fifths die of freezing.
If we assume a 1:1 sex ratio, then there are 2 males and 2 females to start the next generation.
If there is no mortality of these females, they will each lay an average of 200 eggs to start the next generation. That is 400 eggs in the next generation, versus the 200 laid by the one original female -- this population is DOUBLING in size each generation!!
In ecology, the symbol "R" (capital R) is known as the replacement rate. It is the ratio:

        Number of daughters
  R = -------------------------
        Number of mothers
If the value of "R" is less than 1, the population is decreasing -- if this situation
persists for any length of time the population becomes extinct.
If the value of "R" is greater than 1, the population is increasing -- if this situation
persists for any length of time the population will grow beyond the environment's
carrying capacity. (Uncontrolled population growth is usually a sign of a disturbed
habitat, an introduced species, or some other type of human intervention.)
If the value of "R" is equal to 1, the population is stable -- most natural populations
are very close to this value.
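The moth example above can be checked in a few lines. The counts (200 eggs, 100 surviving the egg stage, 90% larval mortality, three-fifths pupal mortality, a 1:1 sex ratio) are taken from the worked example in the text:

```python
# Replacement rate R = (number of daughters) / (number of mothers)
# for the moth life table worked through above.

eggs = 200                          # laid by the one original female
larvae = eggs // 2                  # 100 survive the egg stage
pupae = larvae // 10                # 90% larval mortality -> 10 pupae
adults = pupae - (3 * pupae // 5)   # three-fifths freeze -> 4 adults
daughters = adults // 2             # 1:1 sex ratio -> 2 females

R = daughters / 1                   # one original mother
print("R =", R)                     # R = 2.0: the population doubles
```

With R = 2 the population doubles each generation, matching the "DOUBLING" conclusion drawn from the table.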
Practice Problem:
A typical female of the bubble gum maggot (Bubblicious blowhardi Meyer) lays 250
eggs.
Of the survivors, 64 die as larvae due to habitat destruction (gum is cleared away
by the janitorial staff) and 87 die as pupae because the gum gets too hard.
Construct a life table for this species and calculate a value for "R", the replacement
rate (assume a 1:1 sex ratio).
Is this population increasing, decreasing, or remaining stable?
Forest plots date back to at least the 1970s. One plot is shown in a 1985 book about
meta-analysis.[2]:252 The first use in print of the word "forest plot" may be in an abstract
for a poster at the Pittsburgh (USA) meeting of the Society for Clinical Trials in May
1996.[3] An informative investigation on the origin of the notion "forest plot" was
published in 2001.[4] The name refers to the forest of lines produced. In September
1990, Richard Peto joked that the plot was named after a breast cancer researcher
called Pat Forrest and as a result the name has sometimes been spelt "forrest plot".[4]
Effective Human Resource Management is the Center for Effective Organizations' (CEO) sixth
report of a fifteen-year study of HR management in today's organizations. The only long-term
analysis of its kind, this book compares the findings from CEO's earlier studies to new data
collected in 2010. Edward E. Lawler III and John W. Boudreau measure how HR management
is changing, paying particular attention to what creates a successful HR function: one that
contributes to a strategic partnership and overall organizational effectiveness. Moreover, the
book identifies best practices in areas such as the design of the HR organization and HR metrics.
It clearly points out how the HR function can and should change to meet the future demands of
a global and dynamic labor market.
For the first time, the study features comparisons between U.S.-based firms and companies in
China, Canada, Australia, the United Kingdom, and other European countries. With this new
analysis, organizations can measure their HR organization against a worldwide sample,
assessing their positioning in the global marketplace, while creating an international standard
for HR management.
Policy?
1. Politics: (1) The basic principles by which a government is guided.
(2) The declared objectives that a government or party seeks to achieve and preserve in the interest of
national community. See also public policy.
2. Insurance: The formal contract issued by an insurer that contains terms and conditions of the
insurance cover and serves as its legal evidence.
3. Management: The set of basic principles and associated guidelines, formulated and enforced by the
governing body of an organization, to direct and limit its actions in pursuit of long-term goals. See
also corporate policy.
have a notably high subjective element, and that has a material impact on the financial
statements.
Micro-planning
Micro Planning: A tool to empower people
Micro-planning is a comprehensive planning approach wherein the community prepares development plans itself, considering the priority needs of the village. Inclusion and participation of all sections of the community is central to micro-planning.
Macro-planning
Macro Planning and Policy Division (MPPD) is responsible for setting macroeconomic policies and
strategies in consultation with key agencies, such as the Reserve Bank of Fiji (RBF) and Ministry of
Finance. The Division analyzes and forecasts movements in macroeconomic indicators and accounts,
including Gross Domestic Product (GDP), Exports and Imports, and the Balance of Payments (BOP).
Macroeconomic forecasting involves making assessments on production data in the various sectors of the
economy for compilation of quarterly forecasts of the National Accounts.
The Division is also involved in undertaking assessments and research on macroeconomic indicators, internal and external shocks, and structural reform measures, which include areas such as investment, the labour market, goods markets, trade, public enterprises, and the public service.
The Macro Policy and Planning Division:
Produces macroeconomic forecasts of Gross Domestic Product, Exports, Imports and Balance of
Payments on a quarterly basis;
Undertakes research on topical issues and provides pre-budget macroeconomic analyses and advice.
In a sense, macro planning is not writing lesson plans for specific lessons but rather familiarizing
with the context in which language teaching is taking place. Macro planning involves the
following:
1) Knowing about the course: The teacher should get to know which language areas and language
skills should be taught or practised in the course, what materials and teaching aids are available,
and what methods and techniques can be used.
2) Knowing about the institution: The teacher should get to know the institution's arrangements
regarding time, length, frequency of lessons, physical conditions of classrooms, and exam
requirements.
3) Knowing about the learners: The teacher should acquire information about the students' age range, sex ratio, social background, motivation, attitudes, interests, learning needs and other individual factors.
4) Knowing about the syllabus: The teacher should be clear about the purposes, requirements and
targets specified in the syllabus.
Much of macro planning is done prior to the commencement of a course. However, macro
planning is a job that never really ends until the end of the course.
Macro planning provides general guidance for language teachers. However, most teachers have
more confidence if they have a kind of written plan for each lesson they teach. All teachers have
different personalities and different teaching strategies, so it is very likely their lesson plans
would differ from each other. However, there are certain guidelines that we can follow and certain
elements that we can incorporate in our plans to help us create purposeful, interesting and
motivating lessons for our learners.
Definition of P&P program
A policies and procedures (P&P) program refers to the context in which an organization formally plans, designs,
implements, manages, and uses P&P communication in support of performance-based learning and on-going
reference.
Description of components
The five components of a formal P&P program are described below:
The information plan or architecture which identifies the coverage and organization of subject
matter and related topics to be included
The documentation approach which designates how P&P content will be designed and
presented, including the documentation methods, techniques, formats, and styles
The P&P expertise necessary for planning, designing, developing, coordinating, implementing,
and publishing P&P content, as well as the expertise needed for managing the program and the
content development projects
The designated technologies for developing, publishing, storing, accessing, and managing
content, as well as for monitoring content usage.
Implementing components
Each organization is typically at a different maturity stage in its P&P investment. Therefore, before establishing or
enhancing a current P&P program, it is important to obtain an objective assessment of the organizational maturity,
including where your P&P program is now and where it needs to be in the future. Once the maturity level is
established, it is then necessary to develop a strategic P&P program plan. The strategic plan will enable your
organization to achieve the necessary level of maturity for each component and ensure that your organization will
maximize the value of its P&P investment.
Conclusion
Organizations with informal P&P programs do not usually reap the benefits that formal P&P programs provide. An
effective P&P program must include five components. It is essential to have an objective P&P program assessment to
determine the existing P&P maturity grade and where it should be. The P&P strategic plan is the basis for achieving a
higher level of performance in your P&P program.
The following information is provided as a template to assist learners in drafting a policy. However, it must be remembered that policies are written to address specific issues, and therefore the structure and components of a policy will differ considerably according to the need. A policy document may be many pages, or it may be a single page with just a few simple statements.
The following template is drawn from an Information Bulletin "Policy and Planning" by Sport and Recreation Victoria. It suggests that there are nine components. The example given at the right of the table should not be construed as a complete policy.
Component (with a brief example at the right of the original table)
1. A statement of what the organisation seeks to achieve
2. Underpinning principles
3. Broad service objectives which explain the areas in which the organisation operates
4. Strategies to achieve each objective
5. Specific actions to be taken
6. Desired outcomes of specific actions (e.g. a reduction in injuries)
7. Performance indicators
8. Management of services delivery on a day-to-day basis
9. A review program
Health financing systems are critical for reaching universal health coverage. Health financing levers to move closer to universal health coverage lie in three interrelated areas: raising revenue, pooling of funds, and purchasing of services.
Healthcare Financing
The Need
More than 120 million people in Pakistan do not have health coverage. This pushes the poor into
debt and an inevitable medical-poverty trap. Two-thirds of households surveyed over the last three
years reported that they were affected by one or more health problems and went into debt to
finance the cost. Many who cannot afford treatment, particularly women, forego medical treatment
altogether.
The Solution
To fill this vacuum in healthcare financing, the American Pakistan Foundation has partnered with
Heartfile Health Financing to support their groundbreaking work in healthcare reform and health
financing for the poor in Pakistan.
Heartfile is an innovative program that utilizes a custom-made technology platform to transfer funds
for treatment costs of the poor. The system, founded by Dr. Sania Nishtar, is highly transparent and
effective by providing a direct connection between the donor, healthcare facility, and beneficiary
patient.
Success Stories
At the age of 15, Majid was the only breadwinner of his family. After being hit by a tractor he was out of a job, with a starving family and no money for an operation. Through Heartfile he was able to get the treatment he needed and stay out of debt.
Majid
The Process
Heartfile is contacted via text or email when a person of dire financial need is admitted into one of a
list of preregistered hospitals.
Within 24 hours a volunteer is mobilized to see the patient, assess poverty status and the eligibility
by running their identity card information through the national database authority.
Once eligibility is established, the patient is sent funds within 72 hours through a cash transfer to
their service provider.
Donors to Heartfile have full control over their donation through a web database that allows them to
decide where they want their funds to go. They are connected to the people they support through a
personal donation page that allows them to see exactly how their funds were used.
Hill's Criteria
Hill's Criteria are presented here as they have been applied in epidemiological research, followed by examples which illustrate how they would be applied to research in the social and behavioral sciences.
1.
Temporal Relationship:
Exposure always precedes the
outcome. If factor "A" is believed to
cause a disease, then it is clear that
factor "A" must necessarily always
precede the occurrence of the disease.
This is the only absolutely essential
criterion. This criterion negates the
validity of all functional explanations
used in the social sciences, including
the functionalist explanations that
dominated British social anthropology
for so many years and the ecological
functionalism that pervades much
American cultural ecology.
2.
Strength:
3.
Dose-Response Relationship:
An increasing amount of exposure
increases the risk. If a dose-response
relationship is present, it is strong
evidence for a causal relationship.
However, as with specificity (see
below), the absence of a dose-response relationship does not rule
out a causal relationship. A threshold
may exist above which a relationship
may develop. At the same time, if a
specific factor is the cause of a
disease, the incidence of the disease
should decline when exposure to the
factor is reduced or eliminated. An
anthropological example of this would
be the relationship between
population growth and agricultural
intensification. If population growth is
a cause of agricultural intensification,
then an increase in the size of a
population within a given area should
result in a commensurate increase in
the amount of energy and resources
invested in agricultural production.
Conversely, when a population
4.
Consistency:
The association is consistent when
results are replicated in studies in
different settings using different
methods. That is, if a relationship is
causal, we would expect to find it
consistently in different studies and
among different populations. This is
why numerous experiments have to be
done before meaningful statements
can be made about the causal
relationship between two or more
factors. For example, it required
thousands of highly technical studies
of the relationship between cigarette
smoking and cancer before a definitive
conclusion could be made that
cigarette smoking increases the risk of
(but does not cause) cancer. Similarly,
it would require numerous studies of
the difference between male and
female performance of specific
behaviors by a number of different
researchers and under a variety of
different circumstances before a
5.
Plausibility:
The association agrees with currently
accepted understanding of
pathological processes. In other
words, there needs to be some
theoretical basis for positing an
association between a vector and
disease, or one social phenomenon
and another. One may, by chance,
discover a correlation between the
price of bananas and the election of
dog catchers in a particular
community, but there is not likely to be
any logical connection between the
two phenomena. On the other hand,
the discovery of a correlation between
population growth and the incidence
of warfare among Yanomamo villages
would fit well with ecological theories
of conflict under conditions of
increasing competition over
resources. At the same time, research
that disagrees with established theory
is not necessarily false; it may, in fact,
force a reconsideration of accepted
beliefs and principles.
6.
Alternate Explanations:
In judging whether a reported
association is causal, it is necessary
to determine the extent to which
researchers have taken other possible
explanations into account and have
effectively ruled out such alternate
explanations. In other words, it is
7.
Experiment:
The condition can be altered
(prevented or ameliorated) by an
appropriate experimental regimen.
8.
Specificity:
This is established when a single
putative cause produces a specific
effect. This is considered by some to
be the weakest of all the criteria. The
diseases attributed to cigarette
smoking, for example, do not meet
this criteria. When specificity of an
association is found, it provides
additional support for a causal
relationship. However, absence of
specificity in no way negates a causal
relationship. Because outcomes (be
they the spread of a disease, the
incidence of a specific human social
behavior or changes in global
temperature) are likely to have
multiple factors influencing them, it is
highly unlikely that we will find a one-to-one cause-effect relationship
between two phenomena. Causality is
most often multiple. Therefore, it is
necessary to examine specific causal
relationships within a
larger systemic perspective.
9.
Coherence:
The association should be compatible
with existing theory and knowledge.
In other words, it is necessary to
evaluate claims of causality within the
context of the current state of
knowledge within a given field and in
related fields. What do we have to
sacrifice about what we currently
know in order to accept a particular
claim of causality? What, for example,
do we have to reject regarding our
current knowledge in geography,
physics, biology and anthropology in
order to accept the Creationist claim
that the world was created as
described in the Bible a few thousand
years ago? Similarly, how consistent
are racist and sexist theories of
intelligence with our current
understanding of how genes work and
how they are inherited from one
generation to the next? However, as
with the issue of plausibility, research
that disagrees with established theory
and knowledge is not automatically
false. It may, in fact, force a
reconsideration of accepted beliefs
and principles. All currently accepted
theories, including Evolution,
Relativity and non-Malthusian
population ecology, were at one time
new ideas that challenged orthodoxy.
Thomas Kuhn has referred to such
changes in accepted theories
as "Paradigm Shifts".
The Bradford Hill criteria, otherwise known as Hill's criteria for causation, are a
group of minimal conditions necessary to provide adequate evidence of a causal
relationship between an incidence and a consequence, established by
the English epidemiologist Sir Austin Bradford Hill (1897-1991) in 1965.
The list of the criteria is as follows:
1.
Strength: A small association does not mean that there is not a causal effect,
though the larger the association, the more likely that it is causal. [1]
2.
Consistency: Consistent findings observed by different persons in different places with different samples strengthen the likelihood of an effect. [1]
3.
Specificity: Causation is likely if there is a very specific population at a specific site and disease with no other likely explanation. [1]
4.
Temporality: The effect has to occur after the cause (and if there is an expected
delay between the cause and expected effect, then the effect must occur after
that delay).[1]
5.
Biological gradient: Greater exposure should generally lead to greater incidence of the effect. [1]
6.
Plausibility: A plausible mechanism between cause and effect is helpful (but Hill
noted that knowledge of the mechanism is limited by current knowledge). [1]
7.
Coherence: Coherence between epidemiological and laboratory findings increases the likelihood of an effect. [1]
8.
Experiment: Occasionally it is possible to appeal to experimental evidence. [1]
9.
Analogy: The effect of similar factors may be considered. [1]
Dioxins are found in some soils, sediments and food, especially dairy products, meat, fish and shellfish. Very low levels are found in plants, water and air.
Extensive stores of PCB-based waste industrial oils, many with high levels of PCDFs, exist
throughout the world. Long-term storage and improper disposal of this material may
result in dioxin release into the environment and the contamination of human and animal
food supplies. PCB-based waste is not easily disposed of without contamination of the
environment and human populations. Such material needs to be treated as hazardous
waste and is best destroyed by high temperature incineration in specialised facilities.
Dioxin contamination incidents
Many countries monitor their food supply for dioxins. This has led to early detection of
contamination and has often prevented impact on a larger scale. In many instances
dioxin contamination is introduced via contaminated animal feed, e.g. incidents of
increased dioxin levels in milk or animal feed were traced back to clay, fat or citrus pulp
pellets used in the production of the animal feed.
Some dioxin contamination events have been more significant, with broader implications
in many countries.
In late 2008, Ireland recalled many tons of pork meat and pork products when up to 200
times the safe limit of dioxins were detected in samples of pork. This led to one of the
largest food recalls related to a chemical contamination. Risk assessments performed by
Ireland indicated no public health concern. The contamination was also traced back to
contaminated feed.
In 1999, high levels of dioxins were found in poultry and eggs from Belgium.
Subsequently, dioxin-contaminated animal-based food (poultry, eggs, pork), were
detected in several other countries. The cause was traced to animal feed contaminated
with illegally disposed PCB-based waste industrial oil.
Large amounts of dioxins were released in a serious accident at a chemical factory in Seveso, Italy, in 1976. A cloud of toxic chemicals, including 2,3,7,8-tetrachlorodibenzo-p-dioxin, or TCDD, was released into the air and eventually contaminated an area of 15 square kilometres where 37 000 people lived.
Extensive studies in the affected population are continuing to determine the long-term
human health effects from this incident.
TCDD has also been extensively studied for health effects linked to its presence as a
contaminant in some batches of the herbicide Agent Orange, which was used as a
defoliant during the Vietnam War. A link to certain types of cancers and also to diabetes is
still being investigated.
Although all countries can be affected, most contamination cases have been reported in
industrialized countries where adequate food contamination monitoring, greater
awareness of the hazard and better regulatory controls are available for the detection of
dioxin problems.
A few cases of intentional human poisoning have also been reported. The most notable
incident is the 2004 case of Viktor Yushchenko, President of Ukraine, whose face was
disfigured by chloracne.
Effects of dioxins on human health
Short-term exposure of humans to high levels of dioxins may result in skin lesions, such
as chloracne and patchy darkening of the skin, and altered liver function. Long-term
exposure is linked to impairment of the immune system, the developing nervous system,
the endocrine system and reproductive functions.
Chronic exposure of animals to dioxins has resulted in several types of cancer. TCDD was evaluated by the WHO's International Agency for Research on Cancer (IARC) in 1997 and 2012. Based on animal data and on human epidemiology data, TCDD was classified by IARC as a "known human carcinogen". However, TCDD does not affect genetic material and there is a level of exposure below which cancer risk would be negligible.
Due to the omnipresence of dioxins, all people have background exposure and a certain
level of dioxins in the body, leading to the so-called body burden. Current normal
background exposure is not expected to affect human health on average. However, due
to the high toxic potential of this class of compounds, efforts need to be undertaken to
reduce current background exposure.
Sensitive groups
The developing fetus is most sensitive to dioxin exposure. Newborns, with their rapidly
developing organ systems, may also be more vulnerable to certain effects. Some people
or groups of people may be exposed to higher levels of dioxins because of their diet (e.g.,
high consumers of fish in certain parts of the world) or their occupation (e.g., workers in
the pulp and paper industry, in incineration plants and at hazardous waste sites).
Prevention and control of dioxin exposure
Proper incineration of contaminated material is the best available method of preventing
and controlling exposure to dioxins. It can also destroy PCB-based waste oils. The
incineration process requires high temperatures, over 850 °C. For the destruction of large
amounts of contaminated material, even higher temperatures (1000 °C or more) are
required.
Prevention or reduction of human exposure is best done via source-directed measures,
i.e. strict control of industrial processes to reduce formation of dioxins as much as
possible. This is the responsibility of national governments. The Codex Alimentarius
Commission adopted a Code of Practice for Source Directed Measures to Reduce
Contamination of Foods with Chemicals (CAC/RCP 49-2001) in 2001. Later in 2006 a Code
of Practice for the Prevention and Reduction of Dioxin and Dioxin-like PCB Contamination
in Food and Feeds (CAC/RCP 62-2006) was adopted.
More than 90% of human exposure to dioxins is through the food supply, mainly meat
and dairy products, fish and shellfish. Therefore, protecting the food supply is critical. One
approach includes source-directed measures to reduce dioxin emissions. Secondary
contamination of the food supply needs to be avoided throughout the food-chain. Good
controls and practices during primary production, processing, distribution and sale are all
essential in the production of safe food.
As indicated through the examples listed above, contaminated animal feed is often the
root-cause of food contamination.
Food and feed contamination monitoring systems must be in place to ensure that
tolerance levels are not exceeded. It is the role of national governments to monitor the
safety of food supply and to take action to protect public health. When contamination is
suspected, countries should have contingency plans to identify, detain and dispose of
contaminated feed and food. The affected population should be examined in terms of
exposure (e.g. measuring the contaminants in blood or human milk) and effects (e.g.
clinical surveillance to detect signs of ill health).
What should consumers do to reduce their risk of exposure?
Trimming fat from meat and consuming low fat dairy products may decrease the
exposure to dioxin compounds. Also, a balanced diet (including adequate amounts of
fruits, vegetables and cereals) will help to avoid excessive exposure from a single source.
This is a long-term strategy to reduce body burdens and is probably most relevant for
girls and young women to reduce exposure of the developing fetus and when
breastfeeding infants later on in life. However, the possibility for consumers to reduce
their own exposure is somewhat limited.
What does it take to identify and measure dioxins in the environment and food?
The quantitative chemical analysis of dioxins requires sophisticated methods that are
available only in a limited number of laboratories around the world. The analysis costs are
very high and vary according to the type of sample, but range from over US$ 1000 for the
analysis of a single biological sample to several thousand US dollars for the
comprehensive assessment of release from a waste incinerator.
Increasingly, biological (cell- or antibody-based) screening methods are being developed,
and the use of such methods for food and feed samples is increasingly being validated.
Such screening methods allow more analyses at a lower cost; in the case of a positive
screening test, confirmation of the result must be carried out by more complex chemical
analysis.
WHO activities related to dioxins
Reducing dioxin exposure is an important public health goal for disease reduction. To
provide guidance on acceptable levels of exposure, WHO has held a series of expert
meetings to determine a tolerable intake of dioxins.
In the latest expert meeting, held in 2001, the Joint FAO/WHO Expert Committee on Food
Additives (JECFA) performed an updated comprehensive risk assessment of PCDDs,
PCDFs, and dioxin-like PCBs.
In order to assess long- or short-term risks to health from these substances, total or
average intake should be assessed over months, and the tolerable intake should be
assessed over a period of at least one month. The experts established a provisional
tolerable monthly intake (PTMI) of 70 picograms per kilogram of body weight per
month. This level is the amount of dioxins that can be ingested over a lifetime without
detectable health effects.
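As a quick arithmetic sketch of what the PTMI means in practice, the snippet below converts the monthly figure into a per-day amount for an assumed body weight. The 60 kg weight and the 30-day month are illustrative assumptions, not part of the WHO figure.

```python
# Sketch: converting the PTMI (70 pg per kg of body weight per month)
# into an equivalent daily intake. The 60 kg body weight and 30-day
# month below are assumptions for illustration only.
PTMI_PG_PER_KG_PER_MONTH = 70  # provisional tolerable monthly intake

def tolerable_daily_intake(body_weight_kg, days_per_month=30):
    """Total tolerable intake per day, in picograms, for a given body weight."""
    monthly_total = PTMI_PG_PER_KG_PER_MONTH * body_weight_kg
    return monthly_total / days_per_month

print(tolerable_daily_intake(60))  # 4200 pg/month over 30 days -> 140.0 pg/day
```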
WHO, in collaboration with the Food and Agriculture Organization (FAO), through the
Codex Alimentarius Commission, has established a Code of Practice for the Prevention
and Reduction of Dioxin and Dioxin-like PCB Contamination in Foods and Feed. This
document gives guidance to national and regional authorities on preventive measures.
WHO is also responsible for the Global Environment Monitoring Systems Food
Contamination Monitoring and Assessment Programme. Commonly known as GEMS/Food,
the programme provides information on levels and trends of contaminants in food
through its network of participating laboratories in over 50 countries around the world.
Dioxins are included in this monitoring programme.
WHO also conducted periodic studies on levels of dioxins in human milk. These studies
provide an assessment of human exposure to dioxins from all sources. Recent exposure
data indicate that measures introduced to control dioxin release in a number of
developed countries have resulted in a substantial reduction in exposure over the past
two decades.
WHO is continuing these studies now in collaboration with the United Nations
Environmental Programme (UNEP), in the context of the Stockholm Convention, an
international agreement to reduce emissions of certain persistent organic pollutants
(POPs), including dioxins. A number of actions are being considered to reduce the
production of dioxins during incineration and manufacturing processes. WHO and UNEP
are now undertaking global breast milk surveys, including in many developing countries,
to monitor trends in dioxin contamination across the globe and the effectiveness of
measures implemented under the Stockholm Convention.
Dioxins occur as a complex mixture in the environment and in food. In order to assess the
potential risk of the whole mixture, the concept of toxic equivalence has been applied to
this group of contaminants.
During the last 15 years, WHO, through the International Programme on Chemical Safety
(IPCS), has established and regularly re-evaluated toxic equivalency factors (TEFs) for
dioxins and related compounds through expert consultations. WHO-TEF values have been
established which apply to humans, mammals, birds and fish.
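The toxic-equivalence idea can be sketched numerically: each congener's measured concentration is weighted by its toxic equivalency factor (TEF) relative to TCDD, and the weighted values are summed into a single TCDD-equivalent (TEQ) figure for the mixture. The congener names, TEF values, and concentrations below are illustrative placeholders, not the official WHO-TEF table.

```python
# Sketch of a toxic-equivalence (TEQ) calculation: each congener's
# concentration is weighted by its TEF relative to TCDD (TEF = 1 by
# definition). TEF values and concentrations here are illustrative
# placeholders; real assessments must use the current WHO-TEF tables.
def total_teq(concentrations, tefs):
    """Sum of concentration_i * TEF_i over all measured congeners."""
    return sum(concentrations[c] * tefs[c] for c in concentrations)

tefs = {"TCDD": 1.0, "PeCDD": 1.0, "OCDD": 0.0003}   # assumed values
sample = {"TCDD": 0.5, "PeCDD": 1.2, "OCDD": 40.0}   # pg/g, hypothetical
print(total_teq(sample, tefs))  # 0.5 + 1.2 + 0.012 = 1.712 pg TEQ/g
```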
Poisson distribution
In probability theory and statistics, the Poisson distribution (French pronunciation [pwasɔ̃]; in
English usually /ˈpwɑːsɒn/), named after French mathematician Siméon Denis Poisson, is a discrete
probability distribution that expresses the probability of a given number of events occurring in a fixed
interval of time and/or space if these events occur with a known average rate and independently of
the time since the last event.[1] The Poisson distribution can also be used for the number of events in
other specified intervals such as distance, area or volume.
For instance, an individual keeping track of the amount of mail they receive each day may notice that
they receive an average of 4 letters per day. If receiving any particular piece of mail doesn't
affect the arrival times of future pieces of mail, i.e., if pieces of mail from a wide range of sources
arrive independently of one another, then a reasonable assumption is that the number of pieces of
mail received per day obeys a Poisson distribution.[2] Other examples that may follow a Poisson distribution: the
number of phone calls received by a call center per hour, the number of decay events per second
from a radioactive source, or the number of taxis passing a particular street corner per hour.
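The mail example can be checked directly with the Poisson probability mass function, P(k) = λ^k e^(−λ) / k!. This is a minimal sketch using only the standard library:

```python
import math

# Probability of receiving exactly k letters in a day when mail arrives
# at an average rate of lambda = 4 per day (the example above).
def poisson_pmf(k, lam):
    return lam ** k * math.exp(-lam) / math.factorial(k)

lam = 4  # average letters per day
print(round(poisson_pmf(4, lam), 4))                          # chance of exactly 4 letters
print(round(sum(poisson_pmf(k, lam) for k in range(11)), 4))  # P(at most 10 letters)
```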
In statistics, graphs of uniform distributions all have this flat characteristic, in which the top and sides
are parallel to the x and y axes. Here's another graph showing the probability distribution when
rolling a fair die, meaning each side has an equal chance, or probability, of turning up. Because there
are six sides to a die, there are six possible outcomes, with each outcome having a probability of
1/6 (16.7%).
Description
Allows you to create a ROC curve and a complete sensitivity/specificity report. The ROC curve is a fundamental
tool for diagnostic test evaluation.
In a ROC curve the true positive rate (Sensitivity) is plotted as a function of the false positive rate (100-Specificity)
for different cut-off points of a parameter. Each point on the ROC curve represents a sensitivity/specificity pair
corresponding to a particular decision threshold. The area under the ROC curve (AUC) is a measure of how well
a parameter can distinguish between two diagnostic groups (diseased/normal).
Theory summary
The diagnostic performance of a test, or the accuracy of a test to discriminate diseased cases from normal cases,
is evaluated using Receiver Operating Characteristic (ROC) curve analysis (Metz, 1978; Zweig & Campbell,
1993). ROC curves can also be used to compare the diagnostic performance of two or more laboratory or
diagnostic tests (Griner et al., 1981).
When you consider the results of a particular test in two populations, one population with a disease, the other
population without the disease, you will rarely observe a perfect separation between the two groups. Indeed, the
distribution of the test results will overlap, as shown in the following figure.
For every possible cut-off point or criterion value you select to discriminate between the two populations, there
will be some cases with the disease correctly classified as positive (TP = True Positive fraction), but some cases
with the disease will be classified negative (FN = False Negative fraction). On the other hand, some cases
without the disease will be correctly classified as negative (TN = True Negative fraction), but some cases without
the disease will be classified as positive (FP = False Positive fraction).
In a Receiver Operating Characteristic (ROC) curve the true positive rate (Sensitivity) is plotted as a function of
the false positive rate (100-Specificity) for different cut-off points. Each point on the ROC curve represents a
sensitivity/specificity pair corresponding to a particular decision threshold. A test with perfect discrimination
(no overlap in the two distributions) has a ROC curve that passes through the upper left corner (100%
sensitivity, 100% specificity). Therefore the closer the ROC curve is to the upper left corner, the higher the
overall accuracy of the test (Zweig & Campbell, 1993).
This type of graph is called a Receiver Operating Characteristic curve (or ROC
curve.) It is a plot of the true positive rate against the false positive rate for the
different possible cutpoints of a diagnostic test.
An ROC curve demonstrates several things:
1. It shows the tradeoff between sensitivity and specificity (any increase in
sensitivity will be accompanied by a decrease in specificity).
2. The closer the curve follows the left-hand border and then the top border of the
ROC space, the more accurate the test.
3. The closer the curve comes to the 45-degree diagonal of the ROC space, the
less accurate the test.
4. The slope of the tangent line at a cutpoint gives the likelihood ratio (LR) for
that value of the test. You can check this out on the graph above. Recall that the
LR for T4 < 5 is 52. This corresponds to the far left, steep portion of the curve.
The LR for T4 > 9 is 0.2. This corresponds to the far right, nearly horizontal
portion of the curve.
5. The area under the curve is a measure of test accuracy.
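The way a ROC curve is traced, computing the sensitivity/specificity pair at every possible cutpoint, can be sketched as follows. The test scores and disease labels are made-up illustration data, not from any study quoted here.

```python
# Sketch: tracing a ROC curve by sweeping a decision threshold over test
# values. A case is called "positive" when its score >= threshold.
def roc_points(scores, labels):
    """Return (FPR, TPR) pairs, one per distinct threshold, in threshold order."""
    pos = sum(labels)            # number of diseased cases
    neg = len(labels) - pos      # number of healthy cases
    points = []
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    return points

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3]   # hypothetical test values
labels = [1,   1,   0,   1,   0,    1,   0,   0]     # 1 = diseased, 0 = normal
for fpr, tpr in roc_points(scores, labels):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```

Each printed pair is one point on the curve; lowering the threshold moves the point up and to the right, which is exactly the sensitivity/specificity trade-off described in item 1 above.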
Sensitivity (with optional 95% Confidence Interval): Probability that a test result will be positive when the
disease is present (true positive rate).
Specificity (with optional 95% Confidence Interval): Probability that a test result will be negative when
the disease is not present (true negative rate).
Positive likelihood ratio (with optional 95% Confidence Interval): Ratio between the probability of a
positive test result given the presence of the disease and the probability of a positive test result given the
absence of the disease.
Negative likelihood ratio (with optional 95% Confidence Interval): Ratio between the probability of a
negative test result given the presence of the disease and the probability of a negative test result given
the absence of the disease.
Positive predictive value (with optional 95% Confidence Interval): Probability that the disease is
present when the test is positive.
Negative predictive value (with optional 95% Confidence Interval): Probability that the disease is not
present when the test is negative.
Cost*: The average cost resulting from the use of the diagnostic test at that decision level. Note that the
cost reported here excludes the "overhead cost", i.e. the cost of doing the test, which is constant at all
decision levels.
A key question needed to interpret the results of a clinical trial is whether the
measured effect size is clinically important. Three commonly used measures of effect
size are relative risk reduction (RRR), absolute risk reduction (ARR), and
the number needed to treat (NNT) to prevent one bad outcome. These terms are
defined below. The material in this section is adapted from Evidence-based medicine:
How to practice and teach EBM by DL Sackett, WS Richardson, W Rosenberg and
RB Haynes. 1997, New York: Churchill Livingston.
Consider the data from the Diabetes Control and Complications Trial (DCCT; Ann
Intern Med 1995;122:561-8). Neuropathy occurred in 9.6% of the usual care group
and in 2.8% of the intensively treated group. These rates are sometimes referred to
as risks by epidemiologists. For our purposes, risk can be thought of as the rate of
some outcome.
Relative risk reduction
Relative risk reduction measures how much the risk is reduced in the experimental group
compared to a control group. For example, if 60% of the control group died and 30%
of the treated group died, the treatment would have a relative risk reduction of 0.5 or
50% (the rate of death in the treated group is half of that in the control group).
The formula for computing relative risk reduction is: (CER - EER)/CER. CER is the
control group event rate and EER is the experimental group event rate. Using the
DCCT data, this would work out to (0.096 - 0.028)/0.096 = 0.71 or 71%. This means
that neuropathy was reduced by 71% in the intensive treatment group compared with
the usual care group.
One problem with the relative risk measure is that without knowing the level of risk in
the control group, one cannot assess the effect size in the treatment group. Treatments
with very large relative risk reductions may have a small effect in conditions where
the control group has a very low bad outcome rate. On the other hand, modest relative
risk reductions can assume major clinical importance if the baseline (control) rate of
bad outcomes is large.
Absolute risk reduction
Absolute risk reduction is just the absolute difference in outcome rates between the
control and treatment groups: CER - EER. The absolute risk reduction does not
involve an explicit comparison to the control group as in the relative risk reduction
and thus does not confound the effect size with the baseline risk. However, it is a less
intuitive measure to interpret.
For the DCCT data, the absolute risk reduction for neuropathy would be (0.096 − 0.028) = 0.068
or 6.8%. This means that for every 100 patients enrolled in the
intensive treatment group, about seven bad outcomes would be averted.
Number needed to treat
The number needed to treat is basically another way to express the absolute risk
reduction. It is just 1/ARR and can be thought of as the number of patients that would
need to be treated to prevent one additional bad outcome. For the DCCT data, NNT =
1/.068 = 14.7. Thus, for every 15 patients treated with intensive therapy, one case of
neuropathy would be prevented.
The NNT concept has been gaining in popularity because of its simplicity to compute
and its ease of interpretation. NNT data are especially useful in comparing the results of
multiple clinical trials, in which the relative effectiveness of the treatments is readily
apparent. For example, the NNT to prevent stroke by treating patients with very high
blood pressures (DBP 115-129) is only 3 but rises to 128 for patients with less severe
hypertension (DBP 90-109).
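The three effect-size measures can be computed together. The sketch below uses the DCCT rates quoted above (CER = 0.096 for usual care, EER = 0.028 for intensive treatment):

```python
import math

# The three effect-size measures from a control event rate (CER) and an
# experimental event rate (EER), as defined in the text above.
def effect_sizes(cer, eer):
    arr = cer - eer            # absolute risk reduction
    rrr = arr / cer            # relative risk reduction
    nnt = math.ceil(1 / arr)   # number needed to treat (rounded up)
    return rrr, arr, nnt

rrr, arr, nnt = effect_sizes(0.096, 0.028)
print(f"RRR = {rrr:.0%}, ARR = {arr:.1%}, NNT = {nnt}")  # RRR = 71%, ARR = 6.8%, NNT = 15
```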
Consider the use of the ANA (antinuclear antibody) test in the diagnosis of SLE
(systemic lupus erythematosus). In a rheumatology practice, the prevalence of SLE in
patients on whom an ANA test was done was 2.88%. The sensitivity of the ANA for
SLE is 98% and the specificity is 93%. Suppose a patient of this rheumatologist has a
positive ANA. What is the probability of SLE?
Traditional Method
The traditional way to solve this problem would be to draw a two by two table and fill
it in with a hypothetical population of, say, 100000 patients. Knowing the prevalence
of SLE is 2.88%, the column totals of patients with and without SLE can be easily
computed as shown:
               SLE       No SLE    Total
Positive ANA   TP        FP
Negative ANA   FN        TN
Total          2880      97120     100000
Multiplying the sensitivity (0.98) by the number with SLE (2880) yields the number
of true positives (2822). Multiplying the specificity (0.93) by the number without SLE
(97120) yields the number of true negatives (90322).
               SLE       No SLE
Positive ANA   2822      FP
Negative ANA   FN        90322
The rest of the table entries are filled in by simple addition and subtraction:
               SLE       No SLE    Total
Positive ANA   2822      6798      9620
Negative ANA   58        90322     90380
Total          2880      97120     100000
We can now answer the question of posttest probability given a positive test as
2822/9620 = 0.293.
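The traditional method is easy to mechanize: fill in the two-by-two table from the prevalence, sensitivity, and specificity, then read off the posttest probability.

```python
# Sketch: filling in the two-by-two table for a hypothetical population
# of 100 000 patients, using the ANA/SLE numbers from the text.
def two_by_two(prevalence, sensitivity, specificity, n=100_000):
    diseased = round(n * prevalence)       # 2880 with SLE
    healthy = n - diseased                 # 97120 without SLE
    tp = round(sensitivity * diseased)     # 2822 true positives
    tn = round(specificity * healthy)      # 90322 true negatives
    fn = diseased - tp                     # 58 false negatives
    fp = healthy - tn                      # 6798 false positives
    return tp, fp, fn, tn

tp, fp, fn, tn = two_by_two(0.0288, 0.98, 0.93)
posttest = tp / (tp + fp)                  # P(SLE | positive ANA)
print(tp, fp, fn, tn, round(posttest, 3)) # 2822 6798 58 90322 0.293
```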
Likelihood ratio method
The likelihood ratio of a positive ANA test is 14 and the likelihood ratio of a negative
ANA test is 0.02. These numbers, as with the sensitivity and specificity, are obtained
from the literature -- they are properties of the diagnostic test. From the likelihood
ratio form of Bayes' theorem above, we can see that multiplying the pretest odds by 14
will give the posttest odds. But wait: 0.0288 × 14 = 0.40. This is not the answer we
got using the traditional method.
The source of the discrepancy is that likelihood ratios are multiplied by the
pretest odds, not the pretest probability. We must first convert the pretest probability
of 0.0288 to odds. The formula is:
Odds = Probability / (1 - Probability)
Thus, pretest odds = 0.0288 / 0.9712. This is about equal to 0.03 to 1.
We can now apply the likelihood ratio for a positive ANA to compute the posttest
odds: 0.03 x 14 = 0.42 to 1. We still do not have the answer we got above because we
now have to convert the odds back to a probability. The formula is:
Probability = Odds / (1 + Odds)
Posttest probability = 0.42 / 1.42 = 0.296 -- essentially the same answer as with the
traditional method.
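The whole likelihood-ratio method (probability to odds, multiply by the LR, odds back to probability) fits in a few lines:

```python
# Sketch of the likelihood-ratio method, using the ANA example above:
# pretest probability 0.0288, LR+ of 14.
def prob_to_odds(p):
    return p / (1 - p)

def odds_to_prob(o):
    return o / (1 + o)

def posttest_probability(pretest_prob, lr):
    return odds_to_prob(prob_to_odds(pretest_prob) * lr)

print(round(posttest_probability(0.0288, 14), 3))  # ~0.293, matching the table method
```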
You can work through any likelihood ratio problem in the same sequence: convert the
prior probability to odds, apply the likelihood ratio, and convert back. Try changing the
prior probability or likelihood ratio values and recomputing the posttest probability. Once
you understand the difference between odds and probability, using likelihood ratios is
much easier than working through two by two tables.
               SLE       No SLE    Total
Positive ANA   2822      6798      9620
Negative ANA   58        90322     90380
Total          2880      97120     100000
Going back to the original definition of likelihood ratio, we can compute the
probability of a positive ANA test in patients with SLE: (2822 / 2880) or 0.98. We can
also compute the probability of a positive ANA test in patients without SLE: (6798 /
97120) or 0.07. The likelihood ratio for a positive ANA is then 0.98 / 0.07 or 14.
Using an analogous approach, you should be able to compute the likelihood ratio for a
negative ANA (0.02). In more general terms:
LR+ = Sensitivity / (1 - Specificity)
LR- = (1 - Sensitivity) / (Specificity)
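Applied to the ANA example (sensitivity 0.98, specificity 0.93), these two formulas reproduce the likelihood ratios quoted earlier:

```python
# The two likelihood-ratio formulas from the text, applied to the ANA
# example (sensitivity 0.98, specificity 0.93).
def likelihood_ratios(sensitivity, specificity):
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

lr_pos, lr_neg = likelihood_ratios(0.98, 0.93)
print(round(lr_pos), round(lr_neg, 2))  # 14 0.02
```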
T4 value     Hypothyroid   Euthyroid
5 or less    18            1
5.1 - 7      7             17
7.1 - 9      4             36
9 or more    3             39
Totals:      32            93
Notice that these authors found considerable overlap in T4 values among the
hypothyroid and euthyroid patients. Further, the lower the T4 value, the more likely
the patients are to be hypothyroid. We can compute likelihood ratios for each of the
four groupings of test results by recalling the definition of a likelihood ratio:
T4 value     Hypothyroid   Euthyroid   Likelihood ratio
5 or less    18            1           52
5.1 - 7      7             17          1.2
7.1 - 9      4             36          0.3
9 or more    3             39          0.2
Totals:      32            93
Notice that the likelihood ratios give you an intuitive feel for how a given test result
affects the likelihood of disease. Likelihood ratios greater than one increase the
likelihood; those less than one decrease the likelihood. Values near one indicate a
result that does not substantially change disease likelihood. As an exercise, compute
the posttest probability of hypothyroidism for a patient with a 0.1 pretest
probability given each of the possible results shown above.
Example 3: Patients with Suspected Pulmonary Embolism
Likelihood ratios also work well for tests with multiple qualitative results such as a
ventilation perfusion (V/Q) scan which can be interpreted as normal, low probability,
intermediate probability, and high probability of pulmonary embolism. For example,
the PIOPED Study (JAMA 1990;263:2753-2759) compared the V/Q scan with
angiography and reported the following data:
Scan Category                            Sensitivity, %   Specificity, %
High probability                         41               97
High or intermediate probability         82               52
High, intermediate, or low probability   98               10
Now suppose you have a patient with a 30% pretest probability of pulmonary
embolism who has an intermediate probability V/Q scan. What is the posttest
probability of disease? Try computing the likelihood ratio for a high or
intermediate probability scan from the sensitivity and specificity data, then
use it to work through the posttest probability of disease.
This result, however, is not the best use of the available data because it lumps the high
probability and intermediate probability scans together so that a sensitivity and
specificity can be reported. The paper also lists the raw data by individual test
category. From these data (shown below in the two left columns), you should be able
to compute the likelihood ratio for each test result. This is shown below in the right
column.
Scan Category              P.E. present   P.E. absent   Likelihood ratio
High probability           102            14            13.9
Intermediate probability   105            217           0.93
Low probability            39             199           0.37
Normal or near normal      5              50            0.19
Total                      251            480
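Each likelihood ratio in the right-hand column is the fraction of PE-present patients with that scan result divided by the fraction of PE-absent patients with that result. A sketch from the raw PIOPED counts:

```python
# Per-category likelihood ratios from the PIOPED raw counts:
# LR = (fraction of PE-present patients with that result) /
#      (fraction of PE-absent patients with that result)
counts = {                      # (PE present, PE absent)
    "High probability":         (102, 14),
    "Intermediate probability": (105, 217),
    "Low probability":          (39, 199),
    "Normal or near normal":    (5, 50),
}
total_present = sum(p for p, a in counts.values())   # 251
total_absent = sum(a for p, a in counts.values())    # 480

for category, (present, absent) in counts.items():
    lr = (present / total_present) / (absent / total_absent)
    print(f"{category}: LR = {lr:.2f}")
```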
Now we can compute the posttest probability for our patient with a 30% pretest
probability and an intermediate probability scan. Work though the calculations below:
This posttest probability is lower than the one previously obtained because we are now
using all of the information in the data we have available. The likelihood ratio approach
allows us to work with individual test results without having to choose an arbitrary
cutpoint by which to dichotomize the results into "positive" and "negative." Also notice,
again, the intuitive value of the likelihood ratio number. An intermediate probability scan has a
likelihood ratio very close to 1. This means that intermediate probability scans should
not appreciably change your pretest diagnostic suspicion.
ROC space
Several evaluation metrics can be derived from the contingency table. To draw a ROC curve, only the
true positive rate (TPR) and false positive rate (FPR) are needed (as functions of some classifier parameter).
The TPR defines how many correct positive results occur among all positive samples available during the test.
FPR, on the other hand, defines how many incorrect positive results occur among all negative samples
available during the test.
A ROC space is defined by FPR and TPR as x and y axes respectively, which depicts relative trade-offs
between true positives (benefits) and false positives (costs). Since TPR is equivalent to sensitivity and FPR is
equal to 1 − specificity, the ROC graph is sometimes called the sensitivity vs (1 − specificity) plot. Each
prediction result or instance of a confusion matrix represents one point in the ROC space.
The best possible prediction method would yield a point in the upper left corner or coordinate (0,1) of the ROC
space, representing 100% sensitivity (no false negatives) and 100% specificity (no false positives). The (0,1)
point is also called a perfect classification. A completely random guess would give a point along a diagonal line
(the so-called line of no-discrimination) from the left bottom to the top right corners (regardless of the positive
and negative base rates). An intuitive example of random guessing is a decision by flipping coins (heads or
tails). As the size of the sample increases, a random classifier's ROC point migrates towards (0.5,0.5).
The diagonal divides the ROC space. Points above the diagonal represent good classification results (better
than random), points below the line poor results (worse than random). Note that the output of a consistently
poor predictor could simply be inverted to obtain a good predictor.
Let us look into four prediction results from 100 positive and 100 negative instances:
A:
           Actual P   Actual N
Pred. P    TP = 63    FP = 28
Pred. N    FN = 37    TN = 72
Total      100        100
TPR = 0.63, FPR = 0.28, PPV = 0.69, F1 = 0.66, ACC = 0.68

B:
           Actual P   Actual N
Pred. P    TP = 77    FP = 77
Pred. N    FN = 23    TN = 23
Total      100        100
TPR = 0.77, FPR = 0.77, PPV = 0.50, F1 = 0.61, ACC = 0.50

C:
           Actual P   Actual N
Pred. P    TP = 24    FP = 88
Pred. N    FN = 76    TN = 12
Total      100        100
TPR = 0.24, FPR = 0.88, PPV = 0.21, F1 = 0.22, ACC = 0.18

C′ (C with its predictions reversed):
           Actual P   Actual N
Pred. P    TP = 76    FP = 12
Pred. N    FN = 24    TN = 88
Total      100        100
TPR = 0.76, FPR = 0.12, PPV = 0.86, F1 = 0.81, ACC = 0.82
Plots of the four results above in the ROC space are given in the figure. The result of method A clearly shows
the best predictive power among A, B, and C. The result of B lies on the random guess line (the diagonal line),
and it can be seen in the table that the accuracy of B is 50%. However, when C is mirrored across the center
point (0.5,0.5), the resulting method C′ is even better than A. This mirrored method simply reverses the
predictions of whatever method or test produced the C contingency table. Although the original C method has
negative predictive power, simply reversing its decisions leads to a new predictive method C′ which has
positive predictive power. When the C method predicts p or n, the C′ method would predict n or p, respectively.
In this manner, the C′ test would perform the best. The closer a result from a contingency table is to the upper
left corner, the better it predicts, but the distance from the random guess line in either direction is the best
indicator of how much predictive power a method has. If the result is below the line (i.e. the method is worse
than a random guess), all of the method's predictions must be reversed in order to utilize its power, thereby
moving the result above the random guess line.
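The five metrics quoted for each example classifier follow directly from its four confusion-matrix counts; a minimal sketch:

```python
# Deriving the quoted metrics from a classifier's confusion-matrix counts.
def metrics(tp, fp, fn, tn):
    tpr = tp / (tp + fn)                   # true positive rate (sensitivity)
    fpr = fp / (fp + tn)                   # false positive rate (1 - specificity)
    ppv = tp / (tp + fp)                   # positive predictive value (precision)
    f1 = 2 * tp / (2 * tp + fp + fn)       # harmonic mean of PPV and TPR
    acc = (tp + tn) / (tp + fp + fn + tn)  # overall accuracy
    return tpr, fpr, ppv, f1, acc

# Classifier A from the example: TP=63, FP=28, FN=37, TN=72
tpr, fpr, ppv, f1, acc = metrics(63, 28, 37, 72)
print(f"TPR={tpr:.2f} FPR={fpr:.2f} PPV={ppv:.2f} F1={f1:.2f} ACC={acc:.3f}")
```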
Equivalently, the ROC curve is a plot of the cumulative distribution function of the detection
probability on the y-axis versus the cumulative distribution function of the
false-alarm probability on the x-axis.
ROC analysis provides tools to select possibly optimal models and to discard suboptimal ones
independently from (and prior to specifying) the cost context or the class distribution. ROC analysis
is related in a direct and natural way to cost/benefit analysis of diagnostic decision making.
The ROC curve was first developed by electrical engineers and radar engineers during World War II
for detecting enemy objects in battlefields and was soon introduced to psychology to account for
perceptual detection of stimuli. ROC analysis since then has been used
in medicine, radiology, biometrics, and other areas for many decades and is increasingly used
in machine learning and data mining research.
The ROC is also known as a relative operating characteristic curve, because it is a comparison of
two operating characteristics (TPR and FPR) as the criterion changes. [1]
E-waste is the most rapidly growing segment of the municipal solid waste stream.
E-waste contains many valuable, recoverable materials such as aluminum, copper, gold, silver, plastics, and
ferrous metals. In order to conserve natural resources and the energy needed to produce new electronic
equipment from virgin resources, electronic equipment can be refurbished, reused, and recycled instead of being
landfilled.
E-waste also contains toxic and hazardous materials including mercury, lead, cadmium, beryllium, chromium, and
chemical flame retardants, which have the potential to leach into our soil and water.
Conserves natural resources. Recycling recovers valuable materials from old electronics that can be
used to make new products. As a result, we save energy, reduce pollution, reduce greenhouse gas
emissions, and save resources by extracting fewer raw materials from the earth.
Protects your surroundings. Safe recycling of outdated electronics promotes sound management of toxic
chemicals such as lead and mercury.
Helps others. Donating your used electronics benefits your community by passing on ready-to-use or
refurbished equipment to those who need it.
Creates jobs. eCycling creates jobs for professional recyclers and refurbishers and creates new markets
for the valuable components that are dismantled.
Saves landfill space. E-waste is a growing waste stream. By recycling these items, landfill space is
conserved.
1. Reuse of whole units: Reuse functioning electronic equipment by donating it to someone who can still
use it.
2. Repair/refurbishment/remanufacturing of units.
Do you know where your cell-phones and laptops go to die? If they are not properly recycled or
disposed of, they pose a real threat to the people living on this planet.
The global pile-up of e-waste is getting out of control. While there are various predictions from the UN
about the future size of the waste stream, there are hardly any suggested counter-measures to minimize
the load. On top of that, new-generation electronic devices like smartphones and laptops have
an average lifespan of less than two years. This means a lot of e-waste will haunt us soon. Hence, it is
important that every citizen in Philadelphia know a few facts about the problem. Ladies and
gentlemen, let's embrace the horror!
Where is it going?
Electronic waste disposal and management has grown into a globalized business because around
80% of this waste is shipped to third-world countries, where it is further processed before being
dumped into landfills. There is a reason why e-waste continues to have some value in
developing countries: it is sorted or burnt to extract and sell scrap metal.
Lead
PVC
Beryllium
Mercury
Definition
Hoarding (left), disassembling (center) and collecting (right) electronic waste in Bengaluru, India
"Electronic waste" may be defined as discarded computers, office electronic equipment,
entertainment device electronics, mobile phones, television sets, and refrigerators. This includes
used electronics which are destined for reuse, resale, salvage, recycling, or disposal. Others consider
re-usables (working and repairable electronics) and secondary scrap (copper, steel, plastic, etc.)
to be "commodities", and reserve the term "waste" for residue or material which is dumped by
the buyer rather than recycled, including residue from reuse and recycling operations. Because
loads of surplus electronics are frequently commingled (good, recyclable, and non-recyclable),
several public policy advocates apply the term "e-waste" broadly to all surplus electronics.
Cathode ray tubes (CRTs) are considered one of the hardest types to recycle.[2]
CRTs have a relatively high concentration of lead and phosphors (not to be confused with
phosphorus), both of which are necessary for the display. The United States Environmental
Protection Agency (EPA) includes discarded CRT monitors in its category of "hazardous
household waste"[3] but considers CRTs that have been set aside for testing to be commodities if
they are not discarded, speculatively accumulated, or left unprotected from weather and other
damage.
The EU and its member states operate a system via the European Waste Catalogue (EWC), a
European Council Directive, which is interpreted into member state law. In the UK (an EU
member state), this takes the form of the List of Wastes Directive. However, the list (and EWC)
gives only a broad definition (EWC Code 16 02 13*) of hazardous electronic wastes, requiring "waste
operators" to employ the Hazardous Waste Regulations (Annex 1A, Annex 1B) for a more refined
definition. Constituent materials in the waste also require assessment via the combination of
Annex II and Annex III, again allowing operators to further determine whether a waste is
hazardous.[4]
Debate continues over the distinction between "commodity" and "waste" electronics definitions.
Some exporters are accused of deliberately leaving difficult-to-recycle, obsolete, or non-repairable
equipment mixed in loads of working equipment (though this may also come through
ignorance, or to avoid more costly treatment processes). Protectionists may broaden the
definition of "waste" electronics in order to protect domestic markets from working secondary
equipment.
The high value of the computer recycling subset of electronic waste (working and reusable
laptops, desktops, and components like RAM) can help pay the cost of transportation for a larger
number of worthless pieces than can be achieved with display devices, which have less (or
negative) scrap value. A 2011 report, "Ghana E-Waste Country Assessment",[5] found that of
215,000 tons of electronics imported to Ghana, 30% were brand new and 70% were used. Of the
used product, the study concluded that 15% was not reused and was scrapped or discarded. This
contrasts with published but uncredited claims that 80% of the imports into Ghana were being
burned in primitive conditions.
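Taken at face value, the report's percentages translate into absolute tonnages with a quick back-of-the-envelope calculation (a minimal sketch; the 215,000-ton total and the 30/70/15 split are from the report cited above, while the variable names are my own):

```python
# Back-of-the-envelope breakdown of the Ghana import figures quoted above.
total_tons = 215_000              # electronics imported to Ghana (2011 report)
new_tons = total_tons * 0.30      # 30% brand new
used_tons = total_tons * 0.70     # 70% used
scrapped_tons = used_tons * 0.15  # 15% of the used imports not reused

print(f"new: {new_tons:,.0f} t")
print(f"used: {used_tons:,.0f} t")
print(f"scrapped: {scrapped_tons:,.0f} t "
      f"({scrapped_tons / total_tons:.1%} of all imports)")
```

On these numbers, only about a tenth of all imports (roughly 22,575 tons) ended up scrapped, which is the report's point of contrast with the uncredited 80% claims.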
Display units (CRT, LCD, LED monitors), processors (CPU, GPU, or APU chips), memory
(DRAM or SRAM), and audio components have different useful lives. Processors are most
frequently outdated (by software no longer being optimized) and are more likely to become
"e-waste", while display units are most often replaced while still working, without repair attempts, due to
changes in wealthy nations' appetites for new display technology.
An estimated 50 million tons of e-waste are produced each year.[1] The USA discards 30 million
computers each year, and 100 million phones are disposed of in Europe annually. The
Environmental Protection Agency estimates that only 15-20% of e-waste is recycled; the rest of
these electronics go directly into landfills and incinerators.[6][7]
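If the 15-20% recycling rate were applied to the 50-million-ton annual total above, the implied tonnages would be as follows. (This is purely an illustrative sketch: the EPA estimate is U.S.-centric, so applying it to the global total is an assumption, not a claim from the sources.)

```python
# Illustrative only: applies the EPA's 15-20% recycling-rate estimate to
# the 50-million-ton global annual e-waste total cited above.
total_tons = 50_000_000
rate_low, rate_high = 0.15, 0.20

recycled_low = total_tons * rate_low     # lower bound on recycled tonnage
recycled_high = total_tons * rate_high   # upper bound on recycled tonnage
unrecycled = total_tons - recycled_high  # remainder, even in the best case

print(f"recycled: {recycled_low / 1e6:.1f}-{recycled_high / 1e6:.1f} million t")
print(f"landfilled or incinerated: at least {unrecycled / 1e6:.0f} million t")
```

Even at the top of the estimated range, at least 40 million tons a year would be going to landfills and incinerators.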
According to a report by UNEP titled, "Recycling - from E-Waste to Resources," the amount of
e-waste being produced - including mobile phones and computers - could rise by as much as 500
percent over the next decade in some countries, such as India.[8] The United States is the world
leader in producing electronic waste, tossing away about 3 million tons each year.[9] China
already produces about 2.3 million tons (2010 estimate) domestically, second only to the United
States. And, despite having banned e-waste imports, China remains a major e-waste dumping
ground for developed countries.[9]
Electrical waste contains hazardous but also valuable and scarce materials. Up to 60 elements
can be found in complex electronics.
In the United States, an estimated 70% of heavy metals in landfills comes from discarded
electronics.[10][11]
While there is agreement that the number of discarded electronic devices is increasing, there is
considerable disagreement about the relative risk (compared to automobile scrap, for example),
and strong disagreement whether curtailing trade in used electronics will improve conditions, or
make them worse. According to an article in Motherboard, attempts to restrict the trade have
driven reputable companies out of the supply chain, with unintended consequences.[12]
4.5-volt, D, C, AA, AAA, AAAA, A23, 9-volt, CR2032, and LR44 cells are all recyclable in
most countries.
One theory is that increased regulation of electronic waste and concern over the environmental
harm in mature economies creates an economic disincentive to remove residues prior to export.
Critics of trade in used electronics maintain that it is still too easy for brokers calling themselves
recyclers to export unscreened electronic waste to developing countries, such as China,[13] India
and parts of Africa, thus avoiding the expense of removing items like bad cathode ray tubes (the
processing of which is expensive and difficult). The developing countries have become toxic
dump yards of e-waste. Proponents of international trade point to the success of fair trade
programs in other industries, where cooperation has led to the creation of sustainable jobs and can
bring affordable technology to countries where repair and reuse rates are higher.
Defenders of the trade in used electronics say that extraction of metals from virgin mining has
been shifted to developing countries. Recycling of copper, silver, gold, and other materials from
discarded electronic devices is considered better for the environment than mining. They also
state that repair and reuse of computers and televisions has become a "lost art" in wealthier
nations, and that refurbishing has traditionally been a path to development.
South Korea, Taiwan, and southern China have all excelled in finding "retained value" in used goods,
and in some cases have set up billion-dollar industries in refurbishing used ink cartridges,
single-use cameras, and working CRTs. Refurbishing has traditionally been a threat to established
manufacturing, and simple protectionism explains some criticism of the trade. Works like "The
Waste Makers" by Vance Packard explain some of the criticism of exports of working product,
for example the ban on import of tested working Pentium 4 laptops to China, or the bans on
export of used surplus working electronics by Japan.
Opponents of surplus electronics exports argue that lower environmental and labor standards,
cheap labor, and the relatively high value of recovered raw materials lead to a transfer of
pollution-generating activities, such as smelting of copper wire. Electronic waste is sent to China,
Malaysia, India, Kenya, and various other countries for processing, sometimes illegally. Many
surplus laptops are routed to developing nations as "dumping grounds for e-waste".[14]
Because the United States has not ratified the Basel Convention or its Ban Amendment, and has
few domestic federal laws forbidding the export of toxic waste, the Basel Action Network
estimates that about 80% of the electronic waste directed to recycling in the U.S. does not get
recycled there at all, but is put on container ships and sent to countries such as China.[15][16][17][18]
This figure is disputed as an exaggeration by the EPA, the Institute of Scrap Recycling
Industries, and the World Reuse, Repair and Recycling Association.
The E-waste centre of Agbogbloshie, Ghana, where electronic waste is burnt and disassembled
with no safety or environmental considerations.
Guiyu in the Shantou region of China is a huge electronic waste processing area.[15][20][21] It is often
referred to as the e-waste capital of the world. The city employs over 150,000 e-waste workers
who work through 16-hour days disassembling old computers and recapturing whatever metals
and parts they can reuse or sell. The thousands of individual workshops employ laborers to snip
cables, pry chips from circuit boards, grind plastic computer cases into particles, and dip circuit
boards in acid baths to dissolve the lead, cadmium, and other toxic metals. Others work to strip
insulation from all wiring in an attempt to salvage tiny amounts of copper wire.[22] Uncontrolled
burning, disassembly, and disposal cause a variety of environmental problems, such as
groundwater contamination, atmospheric pollution, and water pollution (either by immediate
discharge or via surface runoff, especially near coastal areas), as well as health problems,
including occupational safety and health effects among those directly and indirectly involved,
due to the methods of processing the waste.
Only limited investigations have been carried out on the health effects of Guiyu's poisoned
environment. One of them was carried out by Professor Huo Xia, of the Shantou University
Medical College, an hour and a half's drive from Guiyu. She tested 165 children for
concentrations of lead in their blood. 82% of the Guiyu children had blood lead levels of more
than 100; anything above that figure is considered unsafe by international health experts. The
average reading for the group was 149.[23]
High levels of lead in young children's blood can impact IQ and the development of the central
nervous system. The highest concentrations of lead were found in the children of parents whose
workshop dealt with circuit boards and the lowest was among those who recycled plastic.[23]
Six of the many villages in Guiyu specialize in circuit-board disassembly, seven in plastics and
metals reprocessing, and two in wire and cable disassembly. About a year ago the environmental
group Greenpeace sampled dust, soil, river sediment, and groundwater in Guiyu where e-waste
recycling is done, and found soaring levels of toxic heavy metals and organic contaminants.[24]
Lai Yun, a campaigner for the group, found "over 10 poisonous metals, such as
lead, mercury and cadmium, in Guiyu town."
Guiyu is only one example of a digital dump; similar places can be found across the world, such
as in Asia and Africa. With the amount of e-waste growing rapidly each year, urgent solutions are
required. While waste continues to flow into digital dumps like Guiyu, there are measures that
can help reduce the flow of e-waste.[23]
A preventative step that major electronics firms should take is to remove the worst chemicals in
their products in order to make them safer and easier to recycle. It is important that all companies
take full responsibility for their products and, once they reach the end of their useful life, take
their goods back for re-use or safely recycle them.
Trade
Proponents of the trade say growth of internet access correlates more strongly with the trade than
poverty does. Haiti is poor and closer to the port of New York than southeast Asia, but far more
electronic waste is exported from New York to Asia than to Haiti. Thousands of men, women,
and children are employed in reuse, refurbishing, repair, and remanufacturing, industries in
decline in developed countries. Denying developing nations access to used
electronics may deny them sustainable employment, affordable products, and internet access, or
force them to deal with even less scrupulous suppliers. In a series of seven articles for The
Atlantic, Shanghai-based reporter Adam Minter describes many of these computer repair and
scrap separation activities as objectively sustainable.[25]
Opponents of the trade argue that developing countries utilize methods that are more harmful and
more wasteful. An expedient and prevalent method is simply to toss equipment onto an open fire,
in order to melt plastics and to burn away non-valuable metals. This releases carcinogens and
neurotoxins into the air, contributing to an acrid, lingering smog. These noxious fumes include
dioxins and furans.[26] Bonfire refuse can be disposed of quickly into drainage ditches or
waterways feeding the ocean or local water supplies.[18][27]
In June 2008, a container of electronic waste, en route from the Port of Oakland in the U.S. to
Sanshui District in mainland China, was intercepted in Hong Kong by Greenpeace.[28] Concern
over exports of electronic waste has also been raised in press reports in India,[29][30] Ghana,[31][32][33] Côte
d'Ivoire,[34] and Nigeria.[35]
Environmental impact
Old keyboards
The processes of dismantling and disposing of electronic waste in the third world lead to a
number of environmental impacts as illustrated in the graphic. Liquid and atmospheric releases
end up in bodies of water, groundwater, soil, and air, and therefore in land and sea animals (both
domesticated and wild), in crops eaten by both animals and humans, and in drinking water.[36]
One study of environmental effects in Guiyu, China found the following:
Airborne dioxins: one type found at 100 times levels previously measured.
Levels of carcinogens in duck ponds and rice paddies exceeded international standards
for agricultural areas, and cadmium, copper, nickel, and lead levels in rice paddies were
above international standards.
Heavy metals found in road dust: lead over 300 times that of a control village's road
dust, and copper over 100 times.[37]
Information security
E-waste presents a potential security threat to individuals and exporting countries. Hard drives
that are not properly erased before the computer is disposed of can be reopened, exposing
sensitive information. Credit card numbers, private financial data, account information, and
records of online transactions can be accessed by willing individuals. Organized criminals
in Ghana commonly search the drives for information to use in local scams.[39]
Government contracts have been discovered on hard drives found in Agbogbloshie. Multi-million
dollar agreements from United States security institutions such as the Defense
Intelligence Agency (DIA), the Transportation Security Administration and Homeland Security
have all resurfaced in Agbogbloshie.[39][40]
E-waste management
Recycling
Computer monitors are typically packed into low stacks on wooden pallets for recycling and then
shrink-wrapped.[26]
See also: Computer recycling
Today the electronic waste recycling business is, in all areas of the developed world, a large and
rapidly consolidating business. People tend to forget that properly disposing of or reusing
electronics can help prevent health problems, create jobs, and reduce greenhouse-gas emissions.[41]
Part of this evolution has involved greater diversion of electronic waste away from energy-intensive
downcycling processes (e.g., conventional recycling), where equipment is reverted to raw
material form. Such recycling is done by sorting, dismantling, and recovery of valuable materials.[42]
This diversion is achieved through reuse and refurbishing. The environmental and social
benefits of reuse include diminished demand for new products and virgin raw materials (with
their own environmental issues); avoidance of the large quantities of pure water and electricity
required for the associated manufacturing; less packaging per unit; availability of technology to
wider swaths of society due to greater affordability of products; and diminished use of landfills.
Audiovisual components, televisions, VCRs, stereo equipment, mobile phones, other handheld
devices, and computer components contain valuable elements and substances suitable for
reclamation, including lead, copper, and gold.
One of the major challenges is recycling the printed circuit boards from electronic waste. The
circuit boards contain precious metals such as gold, silver, and platinum, and base metals such as
copper, iron, and aluminum. One way e-waste is processed is by melting circuit boards, burning
cable sheathing to recover copper wire, and open-pit acid leaching to separate metals of
value.[43] The conventional method is mechanical shredding and separation, but the
recycling efficiency is low. Alternative methods such as cryogenic decomposition have been
studied for printed circuit board recycling,[44] and other methods are still under
investigation.
The U.S. Environmental Protection Agency encourages electronics recyclers to become certified
by demonstrating to an accredited, independent third-party auditor that they meet specific
standards for safely recycling and managing electronics; this works to ensure the highest
environmental standards are being maintained. Two certifications for electronics recyclers
currently exist and are endorsed by the EPA: Responsible Recyclers Practices (R2) and
e-Stewards. Customers are encouraged to choose certified electronics recyclers. Responsible
electronics recycling reduces environmental and human health impacts, increases the use of
reusable and refurbished equipment, and reduces energy use while conserving limited resources.
Certified companies meet strict environmental standards which maximize reuse and recycling,
minimize exposure to human health or the environment, ensure safe management of materials,
and require destruction of all data on used electronics. Certified electronics recyclers have
demonstrated through audits and other means that they continually meet these standards and
safely manage used electronics. Once certified, the recycler is held to the standard by continual
oversight from the independent accredited certifying body; a certification board in turn accredits
and oversees certifying bodies to ensure that they meet specific responsibilities and are competent
to audit and provide certification.[45]
Some U.S. retailers offer opportunities for consumer recycling of discarded electronic devices.[46][47]
In the US, the Consumer Electronics Association (CEA) urges consumers to properly dispose of
end-of-life electronics through its recycling locator at www.GreenerGadgets.org. This list only
includes manufacturer and retailer programs that use the strictest standards and third-party
certified recycling locations, to assure consumers that their products will be recycled
safely and responsibly. CEA research has found that 58 percent of consumers know where to take
their end-of-life electronics, and the electronics industry would very much like to see that level
of awareness increase. Consumer electronics manufacturers and retailers sponsor or operate more
than 5,000 recycling locations nationwide and have vowed to recycle one billion pounds
annually by 2016,[48] a sharp increase from the 300 million pounds the industry recycled in 2010.
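Growing from 300 million pounds in 2010 to one billion pounds annually by 2016 implies sustained growth of roughly 22% per year. A quick sketch of the standard compound-annual-growth-rate calculation (the 2010 and 2016 figures are from the text above; the six-year horizon is an assumption):

```python
# Compound annual growth rate (CAGR) needed to go from 300 million lb
# recycled in 2010 to the pledged 1 billion lb annually by 2016.
start_lb = 300e6   # pounds recycled in 2010
target_lb = 1e9    # pledged annual pounds by 2016
years = 6          # assumed horizon: 2010 -> 2016

cagr = (target_lb / start_lb) ** (1 / years) - 1
print(f"required growth: {cagr:.1%} per year")  # roughly 22% per year
```

In other words, meeting the pledge would require the industry to more than triple its recycled tonnage over six years.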
The Sustainable Materials Management Electronic Challenge was created by the United States
Environmental Protection Agency (EPA). Participants of the Challenge are manufacturers of
electronics and electronic retailers. These companies collect end-of-life (EOL) electronics at
various locations and send them to a certified, third-party recycler. Program participants are then
able to publicly promote and report 100% responsible recycling for their companies.[49]
The Electronics TakeBack Coalition[50] is a campaign aimed at protecting human health and
limiting environmental effects where electronics are being produced, used, and discarded. The
ETBC aims to place responsibility for disposal of technology products on electronic
manufacturers and brand owners, primarily through community promotions and legal
enforcement initiatives. It provides recommendations for consumer recycling and a list of
recyclers judged environmentally responsible.[51]
The Certified Electronics Recycler program[52] for electronic recyclers is a comprehensive,
integrated management system standard that incorporates key operational and continual
improvement elements for quality, environmental and health and safety (QEH&S) performance.
The grassroots Silicon Valley Toxics Coalition focuses on promoting human health and addresses
environmental justice problems resulting from toxins in technologies.
The World Reuse, Repair, and Recycling Association (wr3a.org) is an organization dedicated to
improving the quality of exported electronics, encouraging better recycling standards in
importing countries, and improving practices through "Fair Trade" principles.
Take Back My TV[53] is a project of The Electronics TakeBack Coalition and grades television
manufacturers to find out which are responsible and which are not.
The e-Waste Association of South Africa (eWASA)[54] has been instrumental in building a
network of e-waste recyclers and refurbishers in the country. It continues to drive the sustainable,
environmentally sound management of all e-waste in South Africa.
E-Cycling Central is a website from the Electronic Industry Alliance which allows you to search
for electronic recycling programs in your state. It lists different recyclers by state, helping users
find reuse, recycling, or donation programs across the country.[55]
Ewasteguide.info is a Switzerland-based website dedicated to improving the e-waste situation in
developing and transitioning countries. The site contains news, events, case studies, and more.[56]
StEP: Solving the E-Waste Problem This website of StEP, an initiative founded by various UN
organizations to develop strategies to solve the e-waste problem, follows its activities and
programs.[42][57]
Processing techniques
Benefits of recycling
Recycling raw materials from end-of-life electronics is the most effective solution to the growing
e-waste problem. Most electronic devices contain a variety of materials, including metals that
can be recovered for future uses. By dismantling and providing reuse possibilities, intact natural
resources are conserved and air and water pollution caused by hazardous disposal is avoided.
Additionally, recycling reduces the amount of greenhouse gas emissions caused by the
manufacturing of new products.[62]
Benefits of recycling are extended when responsible recycling methods are used. In the U.S.,
responsible recycling aims to minimize the dangers to human health and the environment that
disposed and dismantled electronics can create. Responsible recycling ensures best management
practices of the electronics being recycled, worker health and safety, and consideration for the
environment locally and abroad.[63]
Several sizes of button and coin cells, with two 9-volt batteries for size comparison. They are all
recycled in many countries since they contain lead, mercury and cadmium.
Some computer components can be reused in assembling new computer products, while others
are reduced to metals that can be reused in applications as varied as construction, flatware, and
jewelry.[61]
Substances found in large quantities include epoxy resins, fiberglass, PCBs, PVC (polyvinyl
chlorides), thermosetting plastics, lead, tin, copper, silicon, beryllium, carbon, iron and
aluminium.
Elements found in small amounts include cadmium, mercury, and thallium.[64]
Elements found in trace amounts include americium, antimony, arsenic, barium, bismuth, boron,
cobalt, europium, gallium, germanium, gold, indium, lithium, manganese, nickel, niobium,
palladium, platinum, rhodium, ruthenium, selenium, silver, tantalum, terbium, thorium, titanium,
vanadium, and yttrium.
Almost all electronics contain lead and tin (as solder) and copper (as wire and printed circuit
board tracks), though the use of lead-free solder is now spreading rapidly. The following are
common applications:
Hazardous
Sulphur: Found in lead-acid batteries. Health effects include liver damage, kidney
damage, heart damage, and eye and throat irritation. When released into the environment, it
can create sulphuric acid.
BFRs: Used as flame retardants in plastics in most electronics. Includes PBBs, PBDE,
DecaBDE, OctaBDE, PentaBDE. Health effects include impaired development of the
nervous system, thyroid problems, and liver problems. Environmental effects are similar in
animals and humans. PBBs were banned between 1973 and 1977. PCBs were banned
during the 1980s.
Lead: Solder, CRT monitor glass, lead-acid batteries, some formulations of PVC.[68] A
typical 15-inch cathode ray tube may contain 1.5 pounds of lead,[3] but other CRTs have
been estimated as having up to 8 pounds of lead.[26] Adverse effects of lead exposure
include impaired cognitive function, behavioral disturbances, attention deficits,
hyperactivity, conduct problems and lower IQ[66]
Beryllium oxide: Filler in some thermal interface materials such as thermal grease used
on heatsinks for CPUs and power transistors,[69] magnetrons, X-ray-transparent ceramic
windows, heat transfer fins in vacuum tubes, and gas lasers.
There is also evidence of cytotoxic and genotoxic effects of some chemicals, which have been
shown to inhibit cell proliferation, cause cell membrane lesions, cause DNA single-strand breaks,
and elevate reactive oxygen species (ROS) levels.[71]
DNA breaks can increase the likelihood of developing cancer (if the damage is to a tumor
suppressor gene).
DNA damage is a special problem in non-dividing or slowly dividing cells, where
unrepaired damage will tend to accumulate over time. In rapidly dividing cells, on the
other hand, unrepaired DNA damage that does not kill the cell by blocking replication
will tend to cause replication errors and thus mutation.
Elevated reactive oxygen species (ROS) levels can cause damage to cell structures
(oxidative stress).[71]
Generally non-hazardous
An iMac G4 that has been repurposed into a lamp (photographed next to a Mac Classic and a flip
phone).
Aluminium: nearly all electronic goods using more than a few watts of power (heatsinks),
electrolytic capacitors.
Copper: copper wire, printed circuit board tracks, component leads.
See also
Environment portal
Electronics portal
Digger gold
eDay
Green computing
Polychlorinated biphenyls
Retrocomputing
China RoHS
e-Stewards
Soesterberg Principles
Organizations
Empa
IFixit
General:
Waste
Computer recycling
E-Cycling
eDay
eDay is an annual New Zealand initiative, started by Computer Access New Zealand
(CANZ), aimed at raising awareness of the potential dangers associated with
electronic waste and at offering the opportunity for such waste to be disposed of in an
environmentally friendly fashion.
eDay was first held in Wellington in 2006 as a pilot sponsored by Dell; the event brought in 54
tonnes (119,000 lb) of old computers, mobile phones and other non-biodegradable electronic
material.[1] In 2007 the initiative was extended to cover 12 locations, making it a national
initiative,[2] and 946 tonnes (2,086,000 lb) were collected.[3]
eDay 2008 was held on October 4 and extended to 32 centres.[4] In 2009 an estimated 966 tonnes
(2,130,000 lb) were collected at 38 locations around the country.[5]
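The pound figures quoted alongside each eDay tonnage follow from the standard metric-tonne conversion (1 tonne ≈ 2,204.62 lb). A small sketch confirming they match after rounding to the nearest thousand pounds:

```python
# Check the tonne-to-pound conversions quoted in the eDay figures above.
LB_PER_TONNE = 2204.62  # pounds per metric tonne (standard conversion)

collected = [(54, 119_000), (946, 2_086_000), (966, 2_130_000)]
for tonnes, quoted_lb in collected:
    lb = tonnes * LB_PER_TONNE
    # the quoted figures are rounded to the nearest thousand pounds
    print(f"{tonnes} t = {lb:,.0f} lb (quoted: {quoted_lb:,} lb)")
```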
The initiative was started to minimise the amount of electronic waste being
disposed of in landfills. It was based on evidence from reports estimating that
there were 16 million electronic devices in use in New Zealand, that 1 million
new devices were being introduced every year, and that the majority of
these devices were being disposed of in landfills rather than recycled.[6][7] A
separate report found that half of New Zealand schools did not recycle outdated and
replaced equipment, opting instead to deposit it in landfills.[7][8] When such
equipment is disposed of in landfills, the harmful chemicals in it,
such as mercury, lead and cadmium, can contaminate groundwater and come into
contact with humans or animals; these toxins are capable of causing
serious health issues, such as nervous system and brain damage.[4][9] When the
equipment is recycled, the chemicals are disposed of safely and potentially valuable parts can be reused.
On the day, drive-thru collection points are established and volunteers operate each
centre. Businesses, schools and the public are encouraged to dispose of old
computer hardware, mobile phones and printer cartridges. As well as collecting
material, the initiative is also designed to increase awareness about the harmful
effects of electronic waste.
CANZ was awarded the New Zealand Ministry for the Environment 2008 Green
Ribbon Award for community action and involvement.
Computer recycling, electronic recycling or e-waste recycling is the recycling
of computers and any other electronic devices. Recycling is the complete
deconstruction of electronic devices, reducing the need to mine raw materials
by extracting them from old and obsolete electronics instead.
Recycling methods
Data erasure
Data remanence
Degaussing
Digger gold
Electronic waste
Polychlorinated biphenyls
Trashware
China RoHS
Organisations
Camara
Computers For Schools
eDay
Empower Up
Free Geek
The word dioxin can refer in a general way to compounds which have a dioxin core skeletal
structure with substituent molecular groups attached to it. For example, dibenzo-1,4-dioxin is a
compound whose structure consists of two benzo- groups fused onto a 1,4-dioxin ring.
Polychlorinated dibenzodioxins
Main article: polychlorinated dibenzodioxins
Because of their extreme importance as environmental pollutants, current scientific literature
uses the name dioxins commonly for simplification to denote the chlorinated derivatives of
dibenzo-1,4-dioxin, more precisely the polychlorinated dibenzodioxins (PCDDs), among which
2,3,7,8-tetrachlorodibenzodioxin (TCDD), a tetrachlorinated derivative, is the best known. The
polychlorinated dibenzodioxins, which can also be classified in the family of halogenated
organic compounds, have been shown to bioaccumulate in humans and wildlife due to their
lipophilic properties, and are known teratogens, mutagens, and carcinogens.
PCDDs are formed through combustion, chlorine bleaching and manufacturing processes.[3] The
combination of heat and chlorine creates dioxin.[3] Since chlorine is often a part of the Earth's
environment, natural ecological activity such as volcanic activity and forest fires can lead to the
formation of PCDDs.[3] Nevertheless, PCDDs are mostly produced by human activity.[3]
Famous PCDD exposure cases include Agent Orange sprayed over vegetation by the British
military in Malaya during the Malayan Emergency and the U.S. military in Vietnam during the
Vietnam War, the Seveso disaster, and the poisoning of Viktor Yushchenko.
Polychlorinated dibenzofurans are a class of compounds related to PCDDs which are often
included within the general term dioxins.
The Basel Convention on the Control of Transboundary Movements of Hazardous Wastes
and Their Disposal, usually known as the Basel Convention, is an international treaty that was
designed to reduce the movements of hazardous waste between nations, and specifically to
prevent transfer of hazardous waste from developed to less developed countries (LDCs). It does
not, however, address the movement of radioactive waste. The Convention is also intended to
minimize the amount and toxicity of wastes generated, to ensure their environmentally sound
management as closely as possible to the source of generation, and to assist LDCs in
environmentally sound management of the hazardous and other wastes they generate.
The Convention was opened for signature on 22 March 1989, and entered into force on 5 May
1992. As of January 2015, 182 states and the European Union are parties to the Convention.
Haiti and the United States have signed the Convention but not ratified it.
History
With the tightening of environmental laws (for example, RCRA) in developed nations in the
1970s, disposal costs for hazardous waste rose dramatically. At the same time, globalization of
shipping made transboundary movement of waste more accessible, and many LDCs were
desperate for foreign currency. Consequently, the trade in hazardous waste, particularly to LDCs,
grew rapidly.
One of the incidents which led to the creation of the Basel Convention was the Khian Sea waste
disposal incident, in which a ship carrying incinerator ash from the city of Philadelphia in the
United States dumped half of its load on a beach in Haiti before being forced away. It sailed for
many months, changing its name several times. Unable to unload the cargo in any port, the crew
was believed to have dumped much of it at sea.
Another is the 1988 Koko case in which 5 ships transported 8,000 barrels of hazardous waste
from Italy to the small town of Koko in Nigeria in exchange for $100 monthly rent which was
paid to a Nigerian for the use of his farmland.
These practices have been deemed "Toxic Colonialism" by many developing countries.
At its most recent meeting, 27 November - 1 December 2006, the Conference of the Parties of
the Basel Convention focused on issues of electronic waste and the dismantling of ships.
According to Maureen Walsh, only around 4% of hazardous wastes that come from OECD
countries are actually shipped across international borders.[3] These wastes include, among others,
chemical waste, radioactive waste, municipal solid waste, asbestos, incinerator ash, and old tires.
Of internationally shipped waste that comes from developed countries, more than half is shipped
for recovery and the remainder for final disposal.
Increased trade in recyclable materials has led to an increase in a market for used products such
as computers. This market is valued in billions of dollars. At issue is the distinction when used
computers stop being a "commodity" and become a "waste".
As of January 2015, there are 183 parties to the treaty, which includes 180 UN member states
plus the Cook Islands, the European Union, and the State of Palestine. The 13 UN member states
that are not party to the treaty are Angola, East Timor, Fiji, Grenada, Haiti, San Marino, Sierra
Leone, Solomon Islands, South Sudan, Tajikistan, Tuvalu, the United States, and Vanuatu.
Solving the E-waste Problem (StEP) is an international initiative, created to develop solutions
to address issues associated with Waste Electrical and Electronic Equipment (WEEE). Some of
the most eminent players in the fields of Production, Reuse and Recycling of Electrical and
Electronic Equipment (EEE), government agencies and NGOs as well as UN Organisations
count themselves among its members. StEP encourages the collaboration of all stakeholders
connected with e-waste, emphasising a holistic, scientific yet applicable approach to the
problem.
History
The volume of Waste Electrical and Electronic Equipment (WEEE) is increasing every day and
is becoming a serious environmental problem that has yet to be recognised by the greater
public. StEP was started to guarantee the neutrality required to give its analysis and
recommendations the necessary credibility. After a three-year start-up period, initiated by the
United Nations University (UNU), promotion team wetzlar and Hewlett-Packard, the StEP
Initiative had its official launch in March 2007.
TF1 Policy: The aim of this Task Force is to assess and analyse current governmental
approaches and regulations related to WEEE. Starting from this analysis, recommendations for
future regulatory activities shall be formulated.
TF2 ReDesign: This Task Force works on the design of EEE, focusing on the reduction of
negative consequences of electrical and electronic appliances throughout their entire life cycle.
The Task Force especially takes heed of the situation in developing countries.
TF3 ReUse: The focus of this Task Force lies in the development of sustainable, transmissible
principles and standards for the reuse of EEE.
TF4 ReCycle: The objective of this Task Force is to improve infrastructures, systems and
technologies to realize a sustainable recycling on a global level.
TF5 Capacity Building: The aim of this Task Force is to draw attention to the problems
connected to WEEE. This aim shall be achieved by making the results of the research of the Task
Forces and other stakeholders publicly available. In doing so, the Task Force relies on personal
networks, the internet, collaborative working tools etc.
Guiding Principles
"1. StEP's work is founded on scientific assessments and incorporates a comprehensive view of
the social, environmental and economic aspects of e-waste.
2. StEP conducts research on the entire life-cycle of electronic and electrical equipment and
their corresponding global supply, process and material flows.
3. StEP's research and pilot projects are meant to contribute to the solution of e-waste problems.
4. StEP condemns all illegal activities related to e-waste including illegal shipments and reuse/
recycling practices that are harmful to the environment and human health.
5. StEP seeks to foster safe and eco/energy-efficient reuse and recycling practices around the
globe in a socially responsible manner."
Household hazardous waste (HHW), sometimes called retail hazardous waste or "home
generated special materials", is post-consumer waste which qualifies as hazardous waste when
discarded. It includes household chemicals and other substances for which the owner no longer
has a use, such as consumer products sold for home care, personal care, automotive care, pest
control and other purposes. These products exhibit many of the same dangerous characteristics as
fully regulated hazardous waste due to their potential for reactivity, ignitability, corrosivity,
toxicity, or persistence. Examples include drain cleaners, oil paint, motor oil, antifreeze, fuel,
poisons, pesticides, herbicides and rodenticides, fluorescent lamps, lamp ballasts, smoke
detectors, medical waste, some types of cleaning chemicals, and consumer electronics (such as
televisions, computers, and cell phones).
Certain items such as batteries and fluorescent lamps can be returned to retail stores for disposal.
The Rechargeable Battery Recycling Corporation (RBRC) maintains a list of battery recycling
locations, and your local environmental organization should have a list of fluorescent lamp
recycling locations. The classification "household hazardous waste" has been used for decades
but does not accurately reflect the larger group of materials that, during the past several years,
have become known as "household hazardous wastes". These include items such as latex paint
and other non-hazardous household products that do not generally exhibit hazardous
characteristics but are routinely included in "household hazardous waste" disposal programs.
The term "home generated special materials" more accurately identifies this broader range of
items that public agencies are targeting as recyclable or that should not be disposed of in a
landfill.
In 1995, the Governing Council of the United Nations Environment Programme (UNEP) called
for global action to be taken on persistent organic pollutants (POPs), which it defined as
"chemical substances that persist in the
environment, bio-accumulate through the food web, and pose a risk of causing adverse effects to
human health and the environment".
Following this, the Intergovernmental Forum on Chemical Safety (IFCS) and the International
Programme on Chemical Safety (IPCS) prepared an assessment of the 12 worst offenders, known
as the dirty dozen.
The Intergovernmental Negotiating Committee (INC) met five times between June 1998 and
December 2000 to elaborate the convention, and delegates adopted the Stockholm Convention
on POPs at the Conference of the Plenipotentiaries convened on 22-23 May 2001 in Stockholm,
Sweden.
The negotiations for the Convention were completed on 23 May 2001 in Stockholm. The
convention entered into force on 17 May 2004 with ratification by an initial 128 parties and 151
signatories. Co-signatories agree to outlaw nine of the dirty dozen chemicals, limit the use of
DDT to malaria control, and curtail inadvertent production of dioxins and furans.
Parties to the convention have agreed to a process by which persistent toxic compounds can be
reviewed and added to the convention, if they meet certain criteria for persistence and
transboundary threat. The first set of new chemicals to be added to the Convention were agreed
at a conference in Geneva on 8 May 2009.
As of May 2013, there are 179 parties to the Convention, (178 states and the European Union).
Notable non-ratifying states include the United States, Israel, Malaysia, Italy and Iraq.
The Stockholm Convention was adopted into EU legislation in Regulation (EC) No 850/2004.
Abstract
On December 3, 1984, more than 40 tons of methyl isocyanate gas leaked from a pesticide plant
in Bhopal, India, immediately killing at least 3,800 people and causing significant morbidity and
premature death for many thousands more. The company involved in what became the worst
industrial accident in history immediately tried to dissociate itself from legal responsibility.
Eventually it reached a settlement with the Indian Government through mediation of that
country's Supreme Court and accepted moral responsibility. It paid $470 million in
compensation, a relatively small amount based on significant underestimations of the long-term
health consequences of exposure and of the number of people exposed. The disaster indicated
a need for enforceable international standards for environmental safety, preventative strategies
to avoid similar accidents, and industrial disaster preparedness.
Since the disaster, India has experienced rapid industrialization. While some positive changes in
government policy and behavior of a few industries have taken place, major threats to the
environment from rapid and poorly regulated industrial growth remain. Widespread
environmental degradation with significant adverse human health consequences continues to
occur throughout India.
December 2004 marked the twentieth anniversary of the massive toxic gas leak from Union
Carbide Corporation's chemical plant in Bhopal in the state of Madhya Pradesh, India that killed
more than 3,800 people. This review examines the health effects of exposure to the disaster, the
legal response, the lessons learned and whether or not these are put into practice in India in terms
of industrial development, environmental management and public health.
Later, the affected area was expanded to include 700,000 citizens. A government
affidavit in 2006 stated the leak caused 558,125 injuries including 38,478
temporary partial injuries and approximately 3,900 severely and permanently
disabling injuries.[7]
A cohort of 80,021 exposed people was registered, along with a control group, a
cohort of 15,931 people from areas not exposed to MIC. Nearly every year since
1986, they have answered the same questionnaire. It shows excess mortality and
excess morbidity in the exposed group. However, bias and confounding factors cannot
be excluded from the study. Because of migration and other factors, 75% of the
cohort is lost, as those who moved out are not followed.[5][21]
A number of clinical studies have been performed. The quality varies, but the different
reports support each other.[5] Studied and reported long-term health effects are:
Children's health: Peri- and neonatal death rates increased. Failure to grow,
intellectual impairment, etc.
Health care
The Government of India had focused primarily on increasing hospital-based
services for gas victims, and hospitals were built after the disaster. When UCC
wanted to sell its shares in UCIL, it was directed by the Supreme Court to finance a
500-bed hospital for the medical care of the survivors. Thus, Bhopal Memorial
Hospital and Research Centre (BMHRC) was inaugurated in 1998 and was obliged to
give free care to survivors for eight years. BMHRC was a 350-bed super-speciality
hospital where heart surgery and haemodialysis were done. However, there
was a dearth of gynaecology, obstetrics and paediatrics. Eight mini-units (outreach
health centres) were started, and free health care for gas victims was to be offered
till 2006.[5] The management had also faced problems with strikes, and the quality
Environmental rehabilitation
When the factory was closed in 1986, pipes, drums and tanks were sold. The MIC
and the Sevin plants are still there, as are stores of different residues. Insulation
material is falling down and spreading.[5] The area around the plant was used as a
dumping area for hazardous chemicals. In 1982 tubewells in the vicinity of the UCIL
factory had to be abandoned and tests in 1989 performed by UCC's laboratory
revealed that soil and water samples collected from near the factory and inside the
plant were toxic to fish.[25] Several other studies had also shown polluted soil and
groundwater in the area. Reported polluting compounds include 1-naphthol,
naphthalene, Sevin, tarry residue, mercury, toxic organochlorines, volatile
organochlorine compounds, chromium, copper, nickel, lead, hexachloroethane,
hexachlorobutadiene, and the pesticide HCH.[5]
In order to provide safe drinking water to the population around the UCIL factory,
the Government of Madhya Pradesh presented a scheme for the improvement of water
supply.[26] In December 2008, the Madhya Pradesh High Court decided that the toxic
waste should be incinerated at Ankleshwar in Gujarat, which was met by protests
from activists all over India.[27] On 8 June 2012, the Centre agreed to pay ₹250
million (US$4.2 million) for the incineration and disposal of the toxic UCIL
chemical plant waste in Germany.[28] On 9 August 2012, the Supreme Court directed the
Union and Madhya Pradesh Governments to take immediate steps for disposal of
toxic waste lying around and inside the factory within six months.[29]
A U.S. court rejected the lawsuit blaming UCC for causing soil and water pollution
around the site of the plant and ruled that responsibility for remedial measures or
related claims rested with the State Government and not with UCC. [30] In 2005, the
state government invited various Indian architects to enter their "concept for
development of a memorial complex for Bhopal gas tragedy victims at the site of
Union Carbide". In 2011, a conference aimed at the same goal was held on the site,
with participants from European universities.[31][32]
what is called the "widow's colony" outside Bhopal. The water did not reach the
upper floors, and it was not possible to keep cattle, which had been their primary
occupation. Infrastructure like buses, schools, etc. was missing for at least a
decade.[5]
Economic rehabilitation
Immediate relief was decided on two days after the tragedy. Relief measures
commenced in 1985 when food was distributed for a short period along with ration
cards.[5] Madhya Pradesh government's finance department allocated ₹874 million
(US$15 million) for victim relief in July 1985.[33][34] A widow's pension of ₹200
(US$3.30) per month (later ₹750 (US$13)) was provided. The government also
decided to pay ₹1,500 (US$25) to families with a monthly income of ₹500 (US$8.40) or
less. As a result of the interim relief, more children were able to attend school, more
money was spent on treatment and food, and housing also eventually improved.
From 1990, interim relief of ₹200 (US$3.30) was paid to everyone in the family who
was born before the disaster.[5]
The final compensation, including interim relief, for personal injury was for the
majority ₹25,000 (US$420). For death claims, the average sum paid out was ₹62,000
(US$1,000). Each claimant was to be categorised by a doctor. In court, the
claimants were expected to prove "beyond reasonable doubt" that death or injury in
each case was attributable to exposure. In 1992, 44 percent of the claimants still
had to be medically examined.[5]
By the end of October 2003, according to the Bhopal Gas Tragedy Relief and
Rehabilitation Department, compensation had been awarded to 554,895 people for
injuries received and 15,310 survivors of those killed. The average amount to
families of the dead was $2,200.[35]
In 2007, 1,029,517 cases were registered and decided. The number of awarded cases
was 574,304 and the number of rejected cases 455,213. Total compensation awarded
was ₹15,465 million (US$260 million).[26] On 24 June 2010, the Union Cabinet of the
Government of India approved a ₹12,650 million (US$210 million) aid package which
would be funded by Indian taxpayers through the government.[36]
Other impacts
In 1985, Henry Waxman, a California Democrat, called for a U.S. government inquiry
into the Bhopal disaster, which resulted in U.S. legislation regarding the accidental
release of toxic chemicals in the United States. [37]
The Haddon matrix (crash phases by contributing factors):
Pre-crash
  Human factors: information, attitudes, impairment, police enforcement
  Vehicle and equipment factors: roadworthiness, lighting, braking, speed management
  Environmental factors: speed limits, pedestrian facilities
Crash
  Human factors: use of restraints, impairment
  Vehicle and equipment factors: occupant restraints, other safety devices, crash-protective design
  Environmental factors: crash-protective roadside objects
Post-crash
  Human factors: first-aid skills, access to medics
  Vehicle and equipment factors: ease of access, fire risk
  Environmental factors: rescue facilities, congestion
BMI cut-off points for underweight, overweight and four levels of obesity
BMI classification
Body Mass Index (BMI) is a simple index of weight-for-height that is commonly used to classify
underweight, overweight and obesity in adults. It is defined as the weight in kilograms divided
by the square of the height in metres (kg/m²). For example, an adult who weighs 70 kg and
whose height is 1.75 m will have a BMI of 22.9.

Classification         Principal cut-off points (kg/m²)   Additional cut-off points (kg/m²)
Underweight            <18.50                             <18.50
  Severe thinness      <16.00                             <16.00
  Moderate thinness    16.00 - 16.99                      16.00 - 16.99
  Mild thinness        17.00 - 18.49                      17.00 - 18.49
Normal range           18.50 - 24.99                      18.50 - 22.99; 23.00 - 24.99
Overweight             ≥25.00                             ≥25.00
  Pre-obese            25.00 - 29.99                      25.00 - 27.49; 27.50 - 29.99
Obese                  ≥30.00                             ≥30.00
  Obese class I        30.00 - 34.99                      30.00 - 32.49; 32.50 - 34.99
  Obese class II       35.00 - 39.99                      35.00 - 37.49; 37.50 - 39.99
  Obese class III      ≥40.00                             ≥40.00

Source: Adapted from WHO, 1995, WHO, 2000 and WHO 2004.
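The definition above (weight in kilograms divided by the square of height in metres) and the principal cut-off points can be sketched in a few lines of Python; the function names here are illustrative, not taken from the WHO documents:

```python
def bmi(weight_kg, height_m):
    """Body Mass Index: weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2

def classify(bmi_value):
    """Map a BMI value to the WHO principal classification bands."""
    if bmi_value < 18.5:
        return "Underweight"
    if bmi_value < 25.0:
        return "Normal range"
    if bmi_value < 30.0:
        return "Overweight"
    return "Obese"

# The worked example from the text: 70 kg at 1.75 m.
value = bmi(70, 1.75)
print(round(value, 1), classify(value))  # 22.9 Normal range
```

The same function could be extended with the additional public-health-action points (23, 27.5, 32.5, 37.5 kg/m²) discussed below for populations where those categories are recommended.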
BMI values are age-independent and the same for both sexes. However, BMI may not
correspond to the same degree of fatness in different populations due, in part, to different body
proportions. The health risks associated with increasing BMI are continuous, and the
interpretation of BMI gradings in relation to risk may differ for different populations.
In recent years there has been a growing debate on whether different BMI cut-off points are
needed for different ethnic groups, due to the increasing evidence that the associations between
BMI, percentage of body fat, and body fat distribution differ across populations, and that
therefore the health risks increase below the cut-off point of 25 kg/m² that defines overweight
in the current WHO classification.
There had been two previous attempts to interpret the BMI cut-offs in Asian and Pacific
populations, which contributed to the growing debate. Therefore, to shed light on this debate,
WHO convened the Expert Consultation on BMI in Asian populations (Singapore, 8-11 July
2002).
The WHO Expert Consultation concluded that the proportion of Asian people with a high risk of
type 2 diabetes and cardiovascular disease is substantial at BMIs lower than the existing WHO
cut-off point for overweight (≥25 kg/m²). However, the cut-off point for observed risk varies
from 22 kg/m² to 25 kg/m² in different Asian populations, and for high risk it varies from 26
kg/m² to 31 kg/m². The Consultation therefore recommended that the current WHO BMI cut-off
points (Table 1) should be retained as the international classification.
But the cut-off points of 23, 27.5, 32.5 and 37.5 kg/m² are to be added as points for public health
action. It was therefore recommended that countries should use all categories (i.e. 18.5, 23, 25,
27.5, 30, 32.5 kg/m², and in many populations 35, 37.5, and 40 kg/m²) for reporting purposes,
with a view to facilitating international comparisons.
Discussion updates
A WHO working group was formed by the WHO Expert Consultation and is currently
undertaking a further review and assessment of available data on the relation between waist
circumference and morbidity, and the interaction between BMI, waist circumference, and health
risk.
Theory
Disinfection with chlorine is very popular in water and wastewater treatment
because of its low cost, its ability to form a residual, and its effectiveness at low
concentrations. Although it is used as a disinfectant, it is a dangerous and
potentially fatal chemical if used improperly.
Although the disinfection process may seem simple, it is actually quite
complicated. Chlorination in wastewater treatment systems is a fairly
complex science which requires knowledge of the plant's effluent characteristics.
When free chlorine is added to the wastewater, it takes on various forms depending
on the pH of the wastewater. It is important to understand the forms of chlorine
which are present because each has a different disinfecting capability. The acid
form, HOCl, is a much stronger disinfectant than the hypochlorite ion, OCl-. The
graph below depicts the chlorine fractions at different pH values (Drawing by Erik
Johnston).
Ammonia present in the effluent can also cause problems, as chloramines are
formed, which have very little disinfecting power. Some methods to control the
forms of chlorine present are to adjust the pH of the wastewater prior to chlorination
or simply to add a larger amount of chlorine. A pH adjustment allows the operators
to form the most desired form of chlorine, hypochlorous acid, which has the greatest
disinfecting power. Adding larger amounts of chlorine is an effective way to combat
the chloramines: the ammonia present bonds to the chlorine, while chlorine added
beyond that remains in the hypochlorous acid or hypochlorite ion state.
a) Chlorine gas, when exposed to water, reacts readily to form hypochlorous acid,
HOCl, and hydrochloric acid: Cl2 + H2O -> HOCl + HCl
b) If the pH of the wastewater is greater than 8, the hypochlorous acid will dissociate
to yield hypochlorite ion: HOCl <-> H+ + OCl-. If, however, the pH is much less than
7, then HOCl will not dissociate.
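The split between HOCl and OCl- in step (b) follows directly from the acid-dissociation equilibrium. A minimal sketch in Python, assuming a pKa of about 7.5 for hypochlorous acid near room temperature (the exact value shifts with temperature):

```python
# Fraction of free chlorine present as hypochlorous acid (HOCl)
# at a given pH, from the equilibrium HOCl <-> H+ + OCl-.
# Assumes pKa ~ 7.5 (approximate value near 25 degrees C).
def hocl_fraction(ph, pka=7.5):
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

# Below pH ~7 nearly all free chlorine is the stronger disinfectant HOCl;
# above pH ~8 the weaker hypochlorite ion dominates.
for ph in (6.0, 7.0, 7.5, 8.0, 9.0):
    print(f"pH {ph}: {hocl_fraction(ph):.0%} HOCl")
```

This reproduces the behavior the text describes: lowering the pH before chlorination pushes the free chlorine toward the more effective HOCl form.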
c) If ammonia is present in the wastewater effluent, then the hypochlorous acid will
react to form one of three types of chloramines, depending on the pH, temperature,
and reaction time.
Monochloramine and dichloramine are formed in the pH range of 4.5 to 8.5;
monochloramine is most common when the pH is above 8. When the pH
of the wastewater is below 4.5, the most common form of chloramine is
trichloramine, which produces a very foul odor. The equations for the formation of
the different chloramines are as follows (Reynolds & Richards, 1996):
Monochloramine: NH3 + HOCl -> NH2Cl + H2O
Dichloramine: NH2Cl + HOCl -> NHCl2 + H2O
Trichloramine: NHCl2 + HOCl -> NCl3 + H2O
Chloramines are an effective disinfectant against bacteria but not against viruses.
As a result, it is necessary to add more chlorine to the wastewater to prevent the
formation of chloramines and form other stronger forms of disinfectants.
d) The final step is that additional free chlorine reacts with the chloramine to
produce hydrogen ions, water, and nitrogen gas, which comes out of solution. In
the case of monochloramine, the following reaction occurs:
2NH2Cl + HOCl -> N2 + 3HCl + H2O
Thus, added free chlorine reduces the concentration of chloramines in the
disinfection process. Instead, the chlorine that is added is allowed to form the
stronger disinfectant, hypochlorous acid.
Perhaps the most important stage of the wastewater treatment process is the
disinfection stage. This stage is most critical because it has the greatest effect on
public health as well as the health of the world's aquatic systems. It is important to
realize that wastewater treatment is not a cut-and-dried process but requires in-depth
knowledge of the type of wastewater being treated and its characteristics to
obtain optimum results. (White, 1972)
The graph shown above depicts the chlorine residual as a function of increasing
chlorine dosage with descriptions of each zone given below (Drawing by Erik
Johnston, adapted from Reynolds and Richards, 1996).
Zone I: Chlorine is reduced to chlorides.
Zone II: Chloramines are formed.
Zone III: Chloramines are broken down and converted to nitrogen gas which
leaves the system (Breakpoint).
Zone IV: Free residual.
Therefore, it is very important to understand the amount and type of chlorine that
must be added to overcome the difficulties in the strength of the disinfectant which
results from the wastewater's characteristics.
Implementation
Water Treatment
The following is a schematic of a water treatment plant (Drawing by Matt Curtis).
Post chlorination is almost always done in water treatment, but can be replaced with
chlorine dioxide or chloramines. In this stage chlorine is fed to the drinking water
stream which is then sent to the chlorine contact basin to allow the chlorine a long
enough detention time to kill all viruses, bacteria, and protozoa that were not
removed and rendered inactive in the prior stages of treatment (Photo by Matt
Curtis).
Drinking water requires a large addition of chlorine because there must be a
residual amount of chlorine in the water that will carry through the system until it
reaches the tap of the user. After post chlorination, the water is retained in a clear
well prior to distribution. In the picture to the right, the clear pipe with the floater
designates the height of the water within the clear well. (Reynolds & Richards,
1996)
might confer some survival advantage. The "surplus calorie theory" as a potential
mechanism for the paradox is of great interest. If proven to be correct, it might
explain why peritoneal dialysis patients who receive excessive calories through
dialysis do not exhibit the paradox and, secondly and more importantly, therapy
could be directed to enhance a greater caloric intake by renal failure patients to
engender a better survival outcome. Finally, other clinical settings, for example,
congestive heart failure, have their own obesity-survival paradox. Thus, the paradox
appears to be a wider phenomenon and might merely be the external expression of
a larger principle yet to be uncovered.
In stage one, pre-industrial society, death rates and birth rates are high and
roughly in balance. All human populations are believed to have had this
balance until the late 18th century, when this balance ended in Western
Europe.[6] In fact, growth rates were less than 0.05% at least since the
Agricultural Revolution over 10,000 years ago. [6] Birth and death rates both
tend to be very high in this stage.[6] Because both rates are approximately in
balance, population growth is typically very slow in stage one. [6]
In stage two, that of a developing country, the death rates drop rapidly due to
improvements in food supply and sanitation, which increase life spans and
reduce disease. The improvements specific to food supply typically include
selective breeding, crop rotation, and improved farming techniques.[6] Other
improvements generally include access to technology, basic healthcare, and
education. For example, numerous improvements in public health reduce
mortality, especially childhood mortality. [6] Prior to the mid-20th century,
these improvements in public health were primarily in the areas of food
handling, water supply, sewage, and personal hygiene. [6] One of the variables
often cited is the increase in female literacy combined with public health
education programs which emerged in the late 19th and early 20th centuries.
[6]
In Europe, the death rate decline started in the late 18th century in
northwestern Europe and spread to the south and east over approximately
the next 100 years.[6] Without a corresponding fall in birth rates this produces
an imbalance, and the countries in this stage experience a large increase in
population.
During stage four there are both low birth rates and low death rates. Birth
rates may drop to well below replacement level as has happened in countries
like Germany, Italy, and Japan, leading to a shrinking population, a threat to
many industries that rely on population growth. As the large group born
during stage two ages, it creates an economic burden on the shrinking
working population. Death rates may remain consistently low or increase
slightly due to increases in lifestyle diseases due to low exercise levels and
high obesity and an aging population in developed countries. By the late 20th
century, birth rates and death rates in developed countries leveled off at
lower rates.[5]
As with all models, this is an idealized picture of population change in these countries. The
model is a generalization that applies to these countries as a group and may not accurately
describe all individual cases. The extent to which it applies to less-developed societies today
remains to be seen. Many countries such as China, Brazil and Thailand have passed through the
Demographic Transition Model (DTM) very quickly due to fast social and economic change.
Some countries, particularly African countries, appear to be stalled in the second stage due to
stagnant development and the effect of AIDS.
During the transition, birth rates remain high while death rates drop, leading to
population growth.
During the pre-industrial stage, societies have high birth rates and high death rates.
During the industrial revolution, societies have high birth rates but death rates begin to
fall, leading to population growth.
During the post-industrial stage, societies have low birth rates and low death rates and
population stabilizes.
Terms
Economic development
Examples
Most of Western Europe has undergone a demographic transition in the past four
centuries. Prior to the Industrial Revolution, before the 1700s, the European population
was stable as both birth and death rates were high. The Industrial Revolution in the 1700s
and 1800s led to a population explosion as the birth rate remained high while the death
rate fell rapidly. By the 1900s and 2000s, the birth rate dropped to match the death rate,
and the population stabilized or, occasionally, began to shrink slightly.
According to Thomas Malthus, population growth is limited by available resources. But if that's
so, why is the world's population growing so rapidly in the regions that have the fewest
resources? In part, this puzzle can be explained by the demographic transition.
The Demographic Transition
This model illustrates the demographic transition, as birth and death rates rise and
fall but eventually reach equilibrium.
The demographic transition is a model and theory describing the transition from high birth and
death rates to low birth and death rates that occurs as part of the economic development of a
country. As countries industrialize, they undergo a transition during which death rates fall but
birth rates remain high. Consequently, population grows rapidly. This transition can be broken
down into four stages.
It remains debated whether industrialization and higher incomes lead to lower populations or
whether lower populations lead to industrialization and higher incomes.
As birth rates fall, the age structure of the population changes again. Families have fewer
children to support, decreasing the youth dependency ratio. But as people live longer, the
population as a whole grows older, creating a higher rate of old age dependency. During the
period between the decline in youth dependency and rise in old age dependency, there is a
demographic window of opportunity called the demographic dividend: The population has fewer
dependents (young and old) and a higher proportion of working-age adults, yielding increased
economic growth. This phenomenon can further the correlation between demographic transition
and economic development.
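The dependency arithmetic described above can be sketched numerically. The age cut-offs and population counts below are invented for illustration, not taken from any real census:

```python
def dependency_ratio(dependents, working_age):
    """Dependents (young plus old) per 100 working-age adults."""
    return 100 * dependents / working_age

# Hypothetical population of 1,000 in the "demographic dividend" window:
# relatively few children and elderly, many working-age adults.
young, working, old = 200, 650, 150
print(round(dependency_ratio(young + old, working), 1))  # 53.8 dependents per 100 workers
```

A larger share of working-age adults lowers this ratio, which is exactly the window of opportunity the demographic dividend describes.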
the process for each threat, identifying the strengths that the company can use to defend itself
from the threats and the weaknesses that leave the company exposed. When the company has
corresponding strengths and few weaknesses, this opportunity should be pursued vigorously. On
the other hand, the company should consider exiting those areas where it has many threats and
many weaknesses (especially if the threats target the company's weaknesses). Where it makes
sense for the company to stay in threatened areas, the teams should recommend how existing
strengths can be redeployed or new ones acquired. Where there are opportunities worth pursuing, but the
company lacks strengths, recommendations can be prepared that include partnering with other
organizations or acquiring the necessary skills or resources through other means.
Final Assessment
Jack personally orchestrated the process, setting up a series of half-day work sessions that
involved his direct reports and several members of the functional areas reporting to him. He had
the groups use SWOT analysis as a key job aid in their work sessions, supported by facilitators
who understood the process. Jack also brought in outside facilitators to elicit objective opinions
and discussions.
The teams, which in previous years dreaded the paperwork demanded in creating a situation
analysis, were now energized by the interactive work sessions. Furthermore, they left the
meetings feeling their ideas would be used in the strategic plan that the corporation adopted and
that any resulting strategy was going to be their strategy. They weren't disappointed, either.
Because the resulting report summarized key factors and tied them directly to strategic
alternatives, the document had a significant impact on the development of the company's
strategic plan.
Health economics outcomes are often expressed as dollars per life year gained or dollars per
quality-adjusted life year gained. Cost-effectiveness analysis can also measure
outcomes in terms of DALYs.
The quality-adjusted life year (QALY) is a measure of disease burden that includes both the
quality and the quantity of life lived.[1][2] It is used in assessing the value for money of a
medical intervention. According to Pliskin et al., the QALY model requires utility independence,
risk neutrality, and constant proportional trade-off behaviour.[3]
The QALY is based on the number of years of life that would be added by the intervention. Each
year in perfect health is assigned the value of 1.0 down to a value of 0.0 for being dead. If the
extra years would not be lived in full health, for example if the patient would lose a limb, or be
blind or have to use a wheelchair, then the extra life-years are given a value between 0 and 1 to
account for this. Under certain methods, such as the EQ-5D, the QALY can be a negative
number.
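As a minimal sketch of the weighting described above (the utility values are hypothetical, not drawn from EQ-5D or any published valuation set), QALYs are simply life years multiplied by a health-state utility between 0 and 1:

```python
def qalys(spells):
    """Sum utility-weighted life years over (years, utility) spells,
    where utility is 1.0 for perfect health and 0.0 for death."""
    return sum(years * utility for years, utility in spells)

# Hypothetical intervention: 4 extra years at utility 0.75 (e.g. living
# with a disability) versus 2 extra years in perfect health without it.
with_treatment = qalys([(4, 0.75)])     # 3.0 QALYs
without_treatment = qalys([(2, 1.0)])   # 2.0 QALYs
print(with_treatment - without_treatment)  # 1.0 QALY gained
```

The point of the weighting is visible here: four years in a diminished health state can be worth more QALYs than two years in perfect health, but less than four.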
The disability-adjusted life year (DALY) is a measure of overall disease burden, expressed as
the number of years lost due to ill-health, disability or early death.
Originally developed by Harvard University for the World Bank in 1990, the World Health
Organization subsequently adopted the method in 1996 as part of the Ad hoc Committee on
Health Research "Investing in Health Research & Development" report. The DALY is becoming
increasingly common in the field of public health and health impact assessment (HIA). It
"extends the concept of potential years of life lost due to premature death...to include equivalent
years of 'healthy' life lost by virtue of being in states of poor health or disability."[2] In so doing,
mortality and morbidity are combined into a single, common metric.
Traditionally, health liabilities were expressed using one measure: (expected or average number
of) 'Years of Life Lost' (YLL). This measure does not take the impact of disability into account,
which can be expressed by: 'Years Lived with Disability' (YLD). DALYs are calculated by taking
the sum of these two components. In a formula:
DALY = YLL + YLD.[3]
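The formula can be illustrated in code. The condition, disability weight, and life-table figures below are invented for the sketch; real DALY calculations use standard life tables (historically, Japanese life expectancy) and published disability weights:

```python
def yll(deaths, standard_life_expectancy, age_at_death):
    """Years of Life Lost: deaths times remaining expected years of life."""
    return deaths * (standard_life_expectancy - age_at_death)

def yld(cases, disability_weight, duration_years):
    """Years Lived with Disability: cases weighted by severity and duration."""
    return cases * disability_weight * duration_years

# Hypothetical condition: 100 deaths at age 60 against a standard life
# expectancy of 82, plus 1,000 cases at disability weight 0.2 lasting 5 years.
daly = yll(100, 82, 60) + yld(1000, 0.2, 5)
print(daly)  # 3200.0 DALYs (2200 YLL + 1000 YLD)
```

This makes the single-metric idea concrete: mortality (YLL) and morbidity (YLD) contribute to the same total in the same unit, years of healthy life lost.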
The DALY relies on an acceptance that the most appropriate measure of the effects of chronic
illness is time, both time lost due to premature death and time spent disabled by disease. One
DALY, therefore, is equal to one year of healthy life lost. Japanese life expectancy statistics are
used as the standard for measuring premature death, as the Japanese have the longest life
expectancies.[4]
Looking at the burden of disease via DALYs can reveal surprising things about a population's
health. For example, the 1990 WHO report indicated that 5 of the 10 leading causes of disability
were psychiatric conditions. Psychiatric and neurologic conditions account for 28% of all years
lived with disability, but only 1.4% of all deaths and 1.1% of years of life lost. Thus, psychiatric
disorders, while traditionally not regarded as a major epidemiological problem, are shown by
consideration of disability years to have a huge impact on populations.
In cost-benefit analysis, outcomes are measured in monetary terms, which is often considered
unsuitable for health.
CBA is related to, but distinct from cost-effectiveness analysis. In CBA, benefits and costs are
expressed in monetary terms, and are adjusted for the time value of money, so that all flows of
benefits and flows of project costs over time (which tend to occur at different points in time) are
expressed on a common basis in terms of their "net present value."
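Discounting to net present value can be sketched as follows. The 5% rate and the cash flows are arbitrary illustrations (guidelines differ on the appropriate discount rate for health costs and benefits):

```python
def npv(rate, cash_flows):
    """Net present value, where cash_flows[t] occurs at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical programme: 1,000 spent now, 400 of benefit in each of the
# next three years, discounted at 5% per year.
flows = [-1000, 400, 400, 400]
print(round(npv(0.05, flows), 2))  # 89.3: benefits outweigh costs in present-value terms
```

Because later flows are divided by a growing factor, the same 400 is worth less each year, which is how costs and benefits occurring at different times end up on the common basis the text describes.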
Closely related, but slightly different, formal techniques include cost-effectiveness analysis,
cost-utility analysis, economic impact analysis, fiscal impact analysis, and social return on
investment (SROI) analysis.
Social marketing seeks to develop and integrate marketing concepts with other approaches to
influence behaviors that benefit individuals and communities for the greater social good. It seeks
to integrate research, best practice, theory, audience and partnership insight, to inform the
delivery of competition sensitive and segmented social change programs that are effective,
efficient, equitable and sustainable.[1]
Although "social marketing" is sometimes seen only as using standard commercial marketing
practices to achieve non-commercial goals, this is an oversimplification. The primary aim of
social marketing is "social good", while in "commercial marketing" the aim is primarily
"financial". This does not mean that commercial marketers can not contribute to achievement of
social good.
Increasingly, social marketing is being described as having "two parents": a "social parent",
including social science and social policy approaches, and a "marketing parent", including
commercial and public sector marketing approaches.
Fig 1
Social marketing wheel
Audience segmentation
One of the key decisions in social marketing that guides the planning of most health
communications is whether to deliver messages to a general audience or whether to segment
into target audiences. Audience segmentation is usually based on sociodemographic, cultural,
and behavioural characteristics that may be associated with the intended behaviour change. For
example, the National Cancer Institute's five a day for better health campaign developed
specific messages aimed at Hispanic people, because national data indicate that they eat fewer
fruits and vegetables and may have cultural reasons that discourage them from eating locally
available produce.6
The broadest approach to audience segmentation is targeted communications, in which
information about population groups is used to prepare messages that draw attention to a generic
message but are targeted using a person's name (for example, marketing by mass mail). This
form of segmentation is used commercially to aim products at specific customer profiles (for
example, upper middle income women who have children and live in suburban areas). It has
been used effectively in health promotion to develop socially desirable images and prevention
messages (fig 2).
Fig 2
Image used in the American Legacy Foundation's Truth antismoking campaign
aimed at young people
thoughts about a message's arguments) for long term persuasion to occur.3 Exposure theorists
study how the intensity of and length of exposure to a message affects behaviour.10
Social marketers use theory to identify behavioural determinants that can be modified. For
example, social marketing aimed at obesity might use behavioural theory to identify connections
between behavioural determinants of poor nutrition, such as eating habits within the family,
availability of food with high calorie and low nutrient density (junk food) in the community, and
the glamorisation of fast food in advertising. Social marketers use such factors to construct
conceptual frameworks that model complex pathways from messages to changes in behaviour
(fig 3).
Fig 3
Example of social marketing conceptual framework
In applying theory based conceptual models, social marketers again use commercial marketing
strategies based on the marketing mix.2 For example, they develop brands on the basis of health
behaviour and lifestyles, as commercial marketers would with products. Targeted and tailored
message strategies have been used in antismoking campaigns to build brand equity: a set of
attributes that a consumer has for a product, service, or (in the case of health campaigns) set of
behaviours.13 Brands underlying the VERB campaign (which encourages young people to be
physically active) and Truth campaigns were based on alternative healthy behaviours, marketed
using socially appealing images that portrayed healthy lifestyles as preferable to junk food or fast
food and cigarettes.14,15
Go to:
Summary points
Social marketing uses commercial marketing strategies such as audience segmentation and
branding to change health behaviour
Social marketing is an effective way to change health behaviour in many areas of health risk
Doctors can reinforce these messages during their direct and indirect contact with patients
This is a small effect by clinical standards, but it shows that social marketing can have a big
impact at the population level. For example, if the number of young people in the US was 40
million, 10.1 million would have smoked in 1999, and this would be reduced to 7.2 million by
2002. In this example, the Truth campaign would be responsible for nearly 640 000 young
people not starting to smoke; this would result in millions of added life years and reductions in
healthcare costs and other social costs.
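The arithmetic behind this illustration can be made explicit. The 40 million population and the 640,000 figure come from the text above; the roughly 22% attribution is implied by those numbers rather than stated:

```python
# Reproduce the worked example: smoking among a notional 40 million young people.
smokers_1999 = 10_100_000
smokers_2002 = 7_200_000
decline = smokers_1999 - smokers_2002
print(decline)  # 2900000 fewer smokers overall

# The text credits the Truth campaign with nearly 640,000 of these,
# i.e. roughly 22% of the total decline.
print(round(640_000 / decline, 2))  # 0.22
```

This is why a "small effect by clinical standards" still matters: even a fraction of a population-wide decline amounts to hundreds of thousands of people.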
In a study of 48 social marketing campaigns in the US based on the mass media, the average
campaign accounted for about 9% of the favourable changes in health risk behaviour, but the
results were variable.17 Non-coercive campaigns (those that simply delivered health
information) accounted for about 5% of the observed variation.17
A study of 17 recent European health campaigns on a range of topics including promotion of
testing for HIV, admissions for myocardial infarction, immunisations, and cancer screening also
found small but positive effects.18 This study showed that behaviours that need to be changed
once or only a few times are easier to promote than those that must be repeated and maintained
over time.19 Some examples (such as breast feeding, taking vitamin A supplements, and
switching to skimmed milk) have shown greater effect sizes, and they seem to have higher rates
of success.19,20
Go to:
Notes
Contributors and sources: WDE's research focuses on behaviour change and public education
intervention programmes designed to communicate science based information. He has published
extensively on the influence of the media on health behaviour, including the effects of social
marketing on changes in behaviour. This article arose from his presentation at and discussions
after a recent conference on diet and communication.
Competing interests: None declared.
Abstract
Diabetes continues to increase in magnitude throughout the United States and abroad. It is
expected to increase by 165% from 2000 to 2050. Diabetes poses a particular burden to those in
ethnic minority populations. African Americans, Hispanics, and American Indians are more
likely to be affected by diabetes, to be less active in health-promoting behavior, and to have
fewer resources to address related complications compared with whites.
Because diabetes disproportionately affects ethnic minorities in the United States, it is imperative
that interventions be tailored to these audiences. To develop effective interventions, program
developers must identify an audience-centered planning process that provides a foundation for
culturally innovative interventions.
Social marketing efforts in both domestic and international settings have been successful at
improving the lives and health status of targeted individuals and communities. This article
describes how the social marketing process can be used to create interventions that are culturally
innovative and relevant. The Social Marketing Assessment and Response Tool (SMART) model
is used to establish a relationship between social marketing and culturally specific interventions.
The model incorporates a systematic and sequential process that includes preliminary planning;
audience, channel, and market analyses; materials development and pretesting; implementation;
and evaluation. Diabetes interventions that are developed and implemented with this approach
hold promise as solutions that are more likely to be adopted by targeted audiences and to result in
the desired health status changes.
Communication is a central aspect of health promotion, and the opportunity for mass
communication makes the media a popular option amongst health promoters.
The Regional Office has developed a guideline on Monitoring and Evaluation of Health sector
Reforms. The purpose of this guideline is therefore to provide planners and policymakers in the
African region with guidance on monitoring and evaluating the process and progress of health
sector reforms within and across countries on a regular basis. The latest countries that received
support from AFRO for their health sector reforms are DRC and Kenya.
Public Policy Reforms
Service Delivery Reforms
Universal Coverage Reforms
Leadership Reforms
Having a great idea and assembling a team to bring that concept to life is the first step in
creating a successful business venture. While finding a new and unique idea is rare enough, the
ability to successfully execute this idea is what separates the dreamers from the entrepreneurs.
However you see yourself, whatever your age may be, as soon as you make that exciting first
hire, you have taken the first steps in becoming a powerful leader. When money is tight, stress
levels are high, and the visions of instant success don't happen like you thought, it's easy to let
those emotions get to you, and thereby your team. Take a breath, calm yourself down, and
remind yourself of the leader you are and would like to become. Here are some key qualities that
every good leader should possess and learn to emphasize.
Honesty
Whatever ethical plane you hold yourself to, when you are responsible for a team of people, it's
important to raise the bar even higher. Your business and its employees are a reflection of
yourself, and if you make honest and ethical behavior a key value, your team will follow suit.
As we do at RockThePost, the crowdfunding platform for entrepreneurs and small businesses I
co-founded, try to make a list of values and core beliefs that both you and your brand represent,
and post this in your office. Promote a healthy interoffice lifestyle, and encourage your team to
live up to these standards. By emphasizing these standards, and displaying them yourself, you
will hopefully influence the office environment into a friendly and helpful workspace.
Ability to Delegate
Finessing your brand vision is essential to creating an organized and efficient business, but if you
don't learn to trust your team with that vision, you might never progress to the next stage. It's
important to remember that trusting your team with your idea is a sign of strength, not weakness.
Delegating tasks to the appropriate departments is one of the most important skills you can
develop as your business grows. The emails and tasks will begin to pile up, and the more you
stretch yourself thin, the lower the quality of your work will become, and the less you will
produce.
The key to delegation is identifying the strengths of your team, and capitalizing on them. Find
out what each team member enjoys doing most. Chances are if they find that task more
enjoyable, they will likely put more thought and effort behind it. This will not only prove to your
team that you trust and believe in them, but will also free up your time to focus on the higher
level tasks that should not be delegated. It's a fine balance, but one that will have a huge impact
on the productivity of your business.
Communication
Knowing what you want accomplished may seem clear in your head, but if you try to explain it
to someone else and are met with a blank expression, you know there is a problem. If this has
been your experience, then you may want to focus on honing your communication skills. Being
able to clearly and succinctly describe what you want done is extremely important. If you can't
relate your vision to your team, you won't all be working towards the same goal.
Training new members and creating a productive work environment all depend on healthy lines
of communication. Whether that stems from an open door policy to your office, or making it a
point to talk to your staff on a daily basis, making yourself available to discuss interoffice issues
is vital. Your team will learn to trust and depend on you, and will be less hesitant to work harder.
Sense of Humor
If your website crashes, you lose that major client, or your funding dries up, guiding your team
through the process without panicking is as challenging as it is important. Morale is linked to
productivity, and it's your job as the team leader to instill a positive energy. That's where your
sense of humor will finally pay off. Encourage your team to laugh at the mistakes instead of
crying. If you are constantly learning to find the humor in the struggles, your work environment
will become a happy and healthy space that your employees look forward to working in,
rather than dreading. Make it a point to crack jokes with your team and encourage personal
discussions of weekend plans and trips. It's these short breaks from the task at hand that help
keep productivity levels high and morale even higher.
At RockThePost, we place a huge emphasis on humor and a light atmosphere. Our office is dog
friendly, and we really believe it is the small, light hearted moments in the day that help keep our
work creative and fresh. One tradition that we like, and that brings the team closer, is planning a
fun prank for all new employees on their first day. It breaks the ice and immediately creates that
sense of familiarity.
Confidence
There may be days where the future of your brand is worrisome and things aren't going
according to plan. This is true with any business, large or small, and the most important thing is
not to panic. Part of your job as a leader is to put out fires and maintain the team morale. Keep
up your confidence level, and assure everyone that setbacks are natural and the important thing is
to focus on the larger goal. As the leader, by staying calm and confident, you will help keep the
team feeling the same. Remember, your team will take cues from you, so if you exude a level of
calm damage control, your team will pick up on that feeling. The key objective is to keep
everyone working and moving ahead.
Commitment
If you expect your team to work hard and produce quality content, you're going to need to lead
by example. There is no greater motivation than seeing the boss down in the trenches working
alongside everyone else, showing that hard work is being done on every level. By proving your
commitment to the brand and your role, you will not only earn the respect of your team, but will
also instill that same hardworking energy among your staff. It's important to show your
commitment not only to the work at hand, but also to your promises. If you pledged to host a
holiday party, or uphold summer Fridays, keep your word. You want to be known not just for
working hard but also as a fair leader. Once you have gained the respect of your
team, they are more likely to deliver the peak amount of quality work possible.
Positive Attitude
You want to keep your team motivated towards the continued success of the company, and keep
the energy levels up. Whether that means providing snacks, coffee, relationship advice, or even
just an occasional beer in the office, remember that everyone on your team is a person. Keep the
office mood a fine balance between productivity and playfulness.
If your team is feeling happy and upbeat, chances are they won't mind staying that extra hour to
finish a report, or devoting their best work to the brand.
Creativity
Some decisions will not always be so clear-cut. You may be forced at times to deviate from your
set course and make an on the fly decision. This is where your creativity will prove to be vital. It
is during these critical situations that your team will look to you for guidance and you may be
forced to make a quick decision. As a leader, it's important to learn to think outside the box and to
choose which of two bad choices is the best option. Don't immediately choose the first or easiest
possibility; sometimes it's best to give these issues some thought, and even turn to your team for
guidance. By utilizing all possible options before making a rash decision, you can typically reach
the end conclusion you were aiming for.
Intuition
When leading a team through uncharted waters, there is no roadmap on what to do. Everything is
uncertain, and the higher the risk, the higher the pressure. That is where your natural intuition has
to kick in. Guiding your team through the process of your day-to-day tasks can be honed down to
a science. But when something unexpected occurs, or you are thrown into a new scenario, your
team will look to you for guidance. Drawing on past experience is a good reflex, as is reaching
out to your mentors for support. Eventually though, the tough decisions will be up to you to
decide and you will need to depend on your gut instinct for answers. Learning to trust yourself is
as important as your team learning to trust you.
Ability to Inspire
Creating a business often involves a bit of forecasting. Especially in the beginning stages of a
startup, inspiring your team to see the vision of the successes to come is vital. Make your team
feel invested in the accomplishments of the company. Whether everyone owns a piece of equity,
or you operate on a bonus system, generating enthusiasm for the hard work you are all putting in
is so important. Being able to inspire your team is great for focusing on the future goals, but it is
also important for the current issues. When you are all mired deep in work, morale is low, and
energy levels are fading, recognize that everyone needs a break now and then. Acknowledge the
work that everyone has dedicated and commend the team on each of their efforts. It is your job to
keep spirits up, and that begins with an appreciation for the hard work.
Responsive to the group's needs: Being perceptive can also help a leader
be more effective in knowing the needs of the team. Some teams value trust
over creativity; others prefer a clear communicator to a great organizer.
Building a strong team is easier when you know the values and goals of each
individual, as well as what they need from you as their leader.
Full knowledge of your
organization inside and out is vital to becoming an effective leader.
Team building: Putting together strong teams that work well is another
trait of great leaders. The opposite is also true: if a team is weak and
dysfunctional, it is generally a failure in leadership.
Risk taking: You can learn how to assess risk and run scenarios that will
help you make better decisions. Great leaders take the right risks at the right
time.
Vision and goal setting: A team depends on its leader to tell them where
they are going, why they are going, and how they're going to get there.
People are more motivated when a leader articulates his or her vision for a
project or for the organization, along with the steps or goals needed to
achieve it.
Fail young, often, and hard Learn from mistakes, admit them, and stay humble.
Think the impossible to realize the maximum possible Be bold and brave.
Exercise tough empathy towards your team Give them what they need in your opinion, and
not necessarily what they want.
Be effective and efficient at the same time Do the right things in the right way.
Practice execution as an art Be focused on making decisions and implement them until the
very end in the best possible manner.
Embrace a Poet and Peasant approach Have a strategic mindset and simultaneously don't
mind diving into details and rolling up your sleeves.
Stay human, approachable, and show respect Choose being people-focused over task-focused, even and especially when push comes to shove.
Be resilient and display a can-do attitude If something does not work, try something else. Be
positive and radiate confidence and strength.
Over-communicate and you'll over-perform Teams, peers, business partners, etc. need
clarity and transparency.
Recruit, develop and empower the best fitting ones Make sure that there is a cultural and
mental fit between company and employees built on a psychological contract.
Work hard, smart and have fun No output without input. At the same time, you should love
and enjoy what you do. Only then can you be highly passionate and committed.
Under-promise and over-deliver Walk your talk.
Inspire Think, behave and communicate beyond pure targets and figures. Stimulate people
around you to play and to experiment.
Stay true to yourself and your core values Adapt when necessary, but never bend yourself
out of shape; if you do, you might break and lose your heart and soul.
Believe in the good Always stay open-minded and curious without being naive.
It's all about the long term If needed, forgo and sacrifice short-term profit and benefits for
the sake of long-term growth and sustainability.
Lead a holistically fulfilled life Life is much more than work and career. Spend
enough time with family, friends, and loved ones. Relish your hobbies and passions without
guilt.
What do you think? Looking forward to receiving your feedback. Join the discussion!
*****
Andreas von der Heydt is the Country Manager of Amazon BuyVIP in Germany. Before that he
held senior management positions at L'Oréal. He's a leadership expert, management coach and
NLP master. He also founded Consumer Goods Club. Andreas has worked and lived in Europe, the
U.S. and Asia.
Motivation is the driving force that causes the flux from desire to will in life. For example,
hunger is a motivation that elicits a desire to eat.
Motivation has been shown to have roots in physiological, behavioral, cognitive, and social
areas. Motivation may be rooted in a basic impulse to optimize well-being, minimize physical
pain and maximize pleasure. It can also originate from specific physical needs such as eating,
sleeping or resting, and sex.
Motivation is an inner drive to behave or act in a certain manner. Inner conditions such as
wishes, desires and goals activate behavior in a particular direction.
There are two types of motivation: intrinsic and extrinsic. It's important to
understand that we are not all the same; thus, effectively motivating your employees requires that
you gain an understanding of the different types of motivation. Such an understanding will
enable you to better categorize your team members and apply the appropriate type of motivation.
You will find each member different and each member's motivational needs will be varied as
well. Some people respond best to intrinsic motivation, which means "from within", and will
meet any obligation in an area of their passion. Others, by contrast, respond better to extrinsic
motivation, which holds that difficult tasks can be dealt with provided there is
a reward upon completion of the task. Become an expert in determining which type will work
best with which team members.
Intrinsic Motivation
Intrinsic motivation means that the individual's motivational stimuli are coming from within. The
individual has the desire to perform a specific task because its results are in accordance with his
belief system or fulfill a desire, and therefore importance is attached to it.
Our deep-rooted desires have the highest motivational power. Below are some examples:
Acceptance: We all need to feel that we, as well as our decisions, are accepted.
Extrinsic Motivation
Extrinsic motivation means that the individual's motivational stimuli are coming from
outside. In other words, our desires to perform a task are controlled by an outside source.
Note that even though the stimuli are coming from outside, the result of performing the
task will still be rewarding for the individual performing the task.
Extrinsic motivation is external in nature. The most well-known and the most debated
motivation is money. Below are some other examples:
Bonuses
Organized activities
The top line of the following graph shows actual U.S. population from 1970 to
1993, and the U.S. Census Bureau "medium projection" of total population size
from 1994 to 2050 [2]. It assumes fertility, mortality, and mass immigration levels
will remain similar to 1993. In fact, overall immigration has continued to rise
significantly, meaning that population growth will actually be higher than
shown below.
The green lower portion of the graph represents growth from 1970 Americans
and their descendants. There were 203 million people living in the U.S. in 1970.
The projection of growth in 1970-stock Americans and their descendants from
1994 to 2050 is based on recent native-born fertility and mortality rates. This
growth would occur despite below replacement-level fertility rates because of
population momentum, where today's children will grow up to have their own
children. This segment of Americans is on track to peak at 247 million in 2030
and then gradually decline [11].
The red upper portion of the graph represents the difference between the number
of 1970-stock Americans and the total population. The tens of millions of people
represented by this block are the immigrants who have arrived, or are projected
to arrive, since 1970, plus their descendants, minus deaths. They are projected to
comprise 70% of all U.S. population growth between 1993 and 2050 [33].
Immigration numbers
History shows the U.S. has traditionally allowed relatively small numbers to
immigrate, thus allowing for decades of assimilation. After the peak of about 8.7
million in the first decade of the 20th century (the "great wave"), numbers went
steadily down. Immigration averaged only 195,000 per year from 1921 through
1970 [40]!
one year we accept a number equal to what we formerly took in five years; in
two years what took a decade, etc. In response to such concerns a national
bipartisan committee headed by the late Barbara Jordan concluded that the
numbers should be reduced. A recently released RAND report recommends
that the level be reduced.
It is interesting to see how this plays out in the real world. According to
journalist Roy Beck, in California it is necessary to construct a new classroom
every hour of the day, 24 hours per day, 365 days a year, to accommodate
immigrant children. The financial cost is borne by native households, who
according to a National Academy of Science report, pay an additional $1200
per year in taxes because of mass immigration. Even so, the primary concern to
environmentalists and Sierra Club members is the tremendous environmental
impact that will be incurred as a consequence of continued U.S. population
growth.
Can not solve third-world population problems
U.S. overimmigration does not relieve overpopulation problems in third-world
countries. Over 4.9 billion people live in countries poorer than Mexico [43]. Each
year the populations of the world's impoverished nations grow by tens of
millions. Mexico grows by 2.5 million per year, Latin America by 9.3 million,
South America by 5.4 million, and China by 8.3 million [4]. U.S. overimmigration
cannot have any significant effect on these numbers, even at current high mass
immigration levels of over 1,000,000 per year.
Exponential growth
U.S. population is projected to double [2]. Although current population growth
rates are not strictly exponential, an analysis of exponential growth [44] reveals
how quickly a population can grow. Exponential growth is like compound
interest. With a 1% growth rate, population will double in 70 years; a 2% growth
rate will cause doubling in 35 years; and 10% in 7 years. (Divide 70 by the
percentage number to get the approximate doubling time.)
U.S. population has grown by 1.2% per year over the last 50 years. This "low"
growth rate means it has taken only 58 years for our population to double.
We can expect this doubling to continue, drastically magnified by the impact of
unrealistically high levels of mass immigration.
Rate of Population Increase    Years Required to Double Population
0.01%                          6,930
0.1%                           693
0.5%                           139
1.0%                           70
1.5%                           47
2.0%                           35
2.5%                           28
2.8%                           25
3.0%                           23
3.5%                           20
4.0%                           18
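The table values can be checked against the exact compound-growth formula, doubling time = ln 2 / ln(1 + r), alongside the rule-of-70 shortcut the text describes. A minimal Python sketch (function names are illustrative, not from the source):

```python
import math

def doubling_time_exact(rate_percent):
    """Exact doubling time (years) for compound growth at the given annual rate (%)."""
    return math.log(2) / math.log(1 + rate_percent / 100)

def doubling_time_rule_of_70(rate_percent):
    """Quick approximation: divide 70 by the percentage growth rate."""
    return 70 / rate_percent

# Compare the approximation with the exact value for a few rates,
# including the 1.2% U.S. rate mentioned in the text (doubling in about 58 years).
for r in (0.5, 1.0, 1.2, 2.0, 4.0):
    print(f"{r:4.1f}%  exact: {doubling_time_exact(r):5.1f} yr   "
          f"rule of 70: {doubling_time_rule_of_70(r):5.1f} yr")
```

For small rates the two agree closely; at higher rates the exact answer drifts below 70/r, which is why the table shows 18 years at 4% rather than 17.5.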
sum of the immigrant and native born accounts. These calculations for the year
1994, using National Center for Health Statistics (1996) figures on births and
deaths [14] and Center for Immigration Studies (1995) figures on immigration,
yield startling results. The foreign born are about ten percent of the population
but had over 18 percent of births. Mass immigration and children born to the
foreign-born sector, in 1994, accounted for a net increase of 1.6 million
persons, or sixty percent, of the United States' annual population growth.
1994 Category        Native Born    Foreign Born    Total
Immigration          —              1,206,000       1,206,000
Births               3,264,505      731,262         3,995,767
Deaths               -2,074,136     -204,858        -2,278,994
Emigration (Est.)    -125,000       -125,000        -250,000
Population Growth    1,065,369      1,607,404       2,672,773
Percentage Share     40%            60%             100%
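The growth figures above can be reconciled by summing each sector's components (immigration plus births, minus deaths and emigration). A quick Python check using the 1994 numbers as reported:

```python
# 1994 components of U.S. population growth, split by sector (figures as reported above).
native = {"immigration": 0, "births": 3_264_505,
          "deaths": -2_074_136, "emigration": -125_000}
foreign = {"immigration": 1_206_000, "births": 731_262,
           "deaths": -204_858, "emigration": -125_000}

native_growth = sum(native.values())    # net growth of the native-born sector
foreign_growth = sum(foreign.values())  # net growth of the foreign-born sector
total_growth = native_growth + foreign_growth

print(native_growth, foreign_growth, total_growth)
print(f"foreign-born share of growth: {foreign_growth / total_growth:.0%}")
```

This reproduces the 1,065,369 / 1,607,404 / 2,672,773 totals and the roughly sixty percent share attributed to immigration and births to the foreign-born.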
Legal Immigrant Category   1996 U.S. Admissions   Current Legal Limits   Jordan Commission Recommended
Family sponsored           596,264                —                      400,000
Employment-based           117,499                About 140,000          100,000
Diversity programs         58,790                 About 55,000           —
Refugee adjustments        118,528                —                      50,000
Asylee adjustments         10,037                 No practical limit     —
Other                      14,598                 —                      20,000+
Total                      915,900                —                      550,000+
("—" marks cells for which the source gives no figure.)
Analysis courtesy Colorado Population Coalition
Additional data: Federation for American Immigration Reform
For all practical purposes, the U.S. does not have overall limits on mass
immigration, and this is a major reason why the numbers have grown, and will
continue to grow, and why the issue needs to be addressed. Rather, the numbers
result from the wide range of adult "extended family reunification" categories
that have no limits on them. Historically, there were some "caps"
included in immigration expansion legislation, but these caps were "pierceable"
if the need arose, and since the need always arose, these caps turned out to be
meaningless.
Categories, to the extent they exist, are few and small, and are not intended to
limit mass immigration, but rather to ensure that certain nationalities that were
being squeezed out by the extended family/clan "family reunification"
onslaught, have some access to immigration.
This is not to say that a focus on categories or quotas - ratios of categories - is
more important than a focus on overall numbers. Quotas aren't an
environmental issue, they are a social and legislative issue, and indeed, quotas
haven't been implemented for years. The proportion of immigrants allowed
under law to enter into the U.S. is not an environmental issue, and there is no
reason for environmentalists and the Sierra Club to become involved in this
social issue. There is, however, clear reason for environmentalists and the
Sierra Club to be concerned with overpopulation as a fundamental
environmental issue, and to address both of its causes: increase from natural
births and overall immigration numbers.
Discussion
Chemoprophylaxis with single dose rifampicin for preventing leprosy among contacts is a cost-effective prevention strategy. At program level, an incremental $6,009 was invested and 38
incremental leprosy cases were prevented, resulting in an ICER of $158 per one additional
prevented leprosy case.
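The ICER figure follows directly from its definition: incremental cost divided by incremental effect. A minimal sketch of that arithmetic (the function name is illustrative, not from the study):

```python
def icer(incremental_cost, incremental_effect):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return incremental_cost / incremental_effect

# Figures from the discussion: $6,009 incremental investment, 38 cases prevented.
print(f"${icer(6009, 38):.0f} per additional prevented leprosy case")
```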
regions. Globally the distribution is around 40% for MB and 60% for PB in newly detected
leprosy cases, but with widely varying ratios between countries [25]. Since costs for treating PB
and MB leprosy are different, these differences are likely to affect the outcome of the cost-effectiveness analysis. Thirdly, the percentage of newly detected cases that are a household
contact of a known leprosy patient differs per country and is possibly determined by the
endemicity level of leprosy in a country or area. In Bangladesh, in the high endemic area where
the COLEP study was conducted, approximately 25% of newly detected cases had a known
index case within the family, whereas in a low endemic area (Thailand) this proportion was 62%
[26]. An intervention aimed at close (household) contacts may therefore be more cost-effective in
countries where relatively many new cases are household contacts. But the background and
implications of such differences for the effectiveness of chemoprophylaxis need further research.
Only a few articles have been published about cost-effectiveness analyses of interventions in
leprosy [27]. Most articles assess small parts of leprosy control, such as footwear provision [28],
MDT delivery costs [29], or the economic aspects of hospitalisation versus ambulatory care of
neuritis in leprosy reactions [30]. Only two studies provided a more general cost-effect analysis.
Naik and Ganapati included several costs in their economic evaluation, but a limitation of the
study is the lack of reference about how they obtained their cost data [31]. Remme et al. based
the cost calculations in their study on the limited available published cost data, program
expenditure data and expert opinion, and also provide limited insight into how they obtained
certain costs and effects [30]. Neither study explains clearly how the costs were obtained
(e.g. real costs, bottom-up or top-down costing). Our current article is one of the first
structured cost-effectiveness analyses for leprosy; it presents an overview of the costs involved
and can be used for assessing the costs of leprosy control in general.
This report shows that chemoprophylaxis with single dose rifampicin given to contacts of newly
diagnosed leprosy patients is a cost-effective intervention strategy. Implementation studies in the
field are necessary to establish whether this intervention is acceptable and feasible in other
leprosy endemic areas of the world.
Treatment of leprosy
Several drugs are used in combination in multidrug therapy (MDT). (See table) These drugs must
never be used alone as monotherapy for leprosy.
Dapsone, which is bacteriostatic or weakly bactericidal against M. leprae, was the mainstay
treatment for leprosy for many years until widespread resistant strains appeared. Combination
therapy has become essential to slow or prevent the development of resistance. Rifampicin is
now combined with dapsone to treat paucibacillary leprosy. Rifampicin and clofazimine are
now combined with dapsone to treat multibacillary leprosy.
A single dose of combination therapy has been used to cure single lesion paucibacillary leprosy:
rifampicin (600 mg), ofloxacin (400 mg), and minocycline (100 mg). The child with a single
lesion takes half the adult dose of the 3 medications.
WHO has designed blister pack medication kits for both paucibacillary leprosy and for
multibacillary leprosy. Each easy-to-use kit contains medication for 28 days. The blister pack
medication kit for single lesion paucibacillary leprosy contains the necessary medication for the
one time administration of the 3 medications.
Any patient with a positive skin smear must be treated with the MDT regimen for multibacillary
leprosy. The regimen for paucibacillary leprosy should never be given to a patient with
multibacillary leprosy. Therefore, if the diagnosis in a particular patient is uncertain, treat that
patient with the MDT regimen for multibacillary leprosy.
Ideally, the patient should go to the leprosy clinic once a month so that clinic personnel may
supervise administration of the drugs prescribed once a month. However, many countries with
leprosy have poor coverage of health services and monthly supervision of drug administration by
health care workers may not be possible. In these cases, it may be necessary to designate a
responsible third party, such as a family member or a person in the community, to supervise the
monthly drug administration. Where health care service coverage is poor and supervision of the
monthly administration of drugs by health workers is not possible, the patient may be given more
than one 28-day supply of multidrug therapy blister packs. This tactic helps make multidrug
therapy easily available, even to those patients who live under difficult conditions or in remote
areas. Patients who ask for diagnosis and treatment are often sufficiently motivated to take full
responsibility for their own treatment of leprosy. In this situation, it is important to educate the
patient regarding the importance of compliance with the regimen and to give the patient
responsibility for taking his or her medication correctly and for reporting any untoward signs and
symptoms promptly. The patient should be warned about possible lepra reactions.
Personality traits
Psychosomatic disorders
Psychosomatic means mind (psyche) and body (soma). A psychosomatic disorder is
a disease which involves both mind and body. Some physical diseases are thought
to be particularly prone to be made worse by mental factors such as stress and
anxiety. Your current mental state can affect how bad a physical disease is at any
given time.
There is a mental aspect to every physical disease. How we react to and cope
with disease varies greatly from person to person. For example, the rash of
psoriasis may not bother some people very much. However, the rash
covering the same parts of the body in someone else may make them feel
depressed and more ill.
There can be physical effects from mental illness. For example, with some
mental illnesses you may not eat or take care of yourself very well, which
can cause physical problems.
However, the term psychosomatic disorder is mainly used to mean ... "a physical disease
that is thought to be caused, or made worse, by mental factors".
Some physical diseases are thought to be particularly prone to be made worse by mental
factors such as stress and anxiety. For example, psoriasis, eczema, stomach ulcers, high
blood pressure and heart disease. It is thought that the actual physical part of the illness
(the extent of a rash, the level of the blood pressure, etc) can be affected by mental
factors. This is difficult to prove. However, many people with these and other physical
diseases say that their current mental state can affect how bad their physical disease is at
any given time.
Some people also use the term psychosomatic disorder when mental factors cause
physical symptoms but where there is no physical disease. For example, a chest pain may
be caused by stress and no physical disease can be found. Physical symptoms that are
caused by mental factors are discussed further in another leaflet called
Somatisation/Somatoform Disorders.
These physical symptoms are due to increased activity of nervous impulses sent from the
brain to various parts of the body and to the release of adrenaline (epinephrine) into the
bloodstream when we are anxious.
However, the exact way that the mind can cause certain other symptoms is not clear.
Also, how the mind can affect actual physical diseases (rashes, blood pressure, etc) is not
clear. It may have something to do with nervous impulses going to the body, which we do
not fully understand. There is also some evidence that the brain may be able to affect
certain cells of the immune system, which is involved in various physical diseases.
Adjustment Disorders
This classification of mental disorders is related to an identifiable source of stress that causes
significant emotional and behavioral symptoms. The DSM-IV diagnostic criteria include:
(1) Distress that is marked and excessive relative to what would be expected from
the stressor, and
(2) Creates significant impairment in school, work or social environments.
In addition to these requirements, the symptoms must occur within three months of exposure to
the stressor, the symptoms must not meet the criteria for an Axis I or Axis II disorder, the
symptoms must not be related to bereavement and the symptoms must not last for longer than six
months after exposure to the stressor.
The DSM-5 (released in May 2013) moved adjustment disorder to a newly created section of
stress-related syndromes.
Anxiety Disorders
Anxiety disorders are those that are characterized by excessive and abnormal fear, worry and
anxiety. In one recent survey published in the Archives of General Psychiatry [1], it was estimated
that as many as 18% of American adults suffer from at least one anxiety disorder.
Types of anxiety disorders include:
Agoraphobia
Phobias
Panic disorder
Separation anxiety
Dissociative Disorders
Dissociative disorders are psychological disorders that involve a dissociation or interruption in
aspects of consciousness, including identity and memory. Dissociative disorders include:
Eating Disorders
Eating disorders are characterized by obsessive concerns with weight and disruptive eating
patterns that negatively impact physical and mental health. Types of eating disorders include:
Anorexia nervosa
Bulimia nervosa
Rumination disorder
Factitious Disorders
These psychological disorders are those in which an individual acts as if he or she has an illness,
often by deliberately faking or exaggerating symptoms or even self-inflicting damage to the
body. Types of factitious disorders include:
Munchausen syndrome
Munchausen syndrome by proxy
Ganser syndrome
Impulse-Control Disorders
Impulse-control disorders are those that involve an inability to control impulses, resulting in
harm to oneself or others. Types of impulse-control disorders include:
Kleptomania (stealing)
Pyromania (fire-starting)
Trichotillomania (hair-pulling)
Pathological gambling
Dermatillomania (skin-picking)
Neurocognitive Disorders
These psychological disorders are those that involve cognitive abilities such as memory, problem
solving and perception. Some anxiety disorders, mood disorders and psychotic disorders are
classified as cognitive disorders. Types of cognitive disorders include:
Alzheimer's disease
Delirium
Dementia
Amnesia
Mood Disorders
Mood disorder is a term given to a group of mental disorders that are all characterized by
changes in mood. Examples of mood disorders include:
Bipolar disorder
Major depressive disorder
Cyclothymic disorder
Neurodevelopmental Disorders
Developmental disorders, also referred to as childhood disorders, are those that are typically
diagnosed during infancy, childhood, or adolescence. These psychological disorders include:
Communication disorders
Autism
Conduct disorder
Psychosocial Disorders
Definition
A psychosocial disorder is a mental illness caused or influenced by life experiences,
as well as maladjusted cognitive and behavioral processes.
Description
The term psychosocial refers to the psychological and social factors that influence
mental health. Social influences such as peer pressure, parental support, cultural
and religious background, socioeconomic status, and interpersonal relationships all
help to shape personality and influence psychological makeup. Individuals with
psychosocial disorders frequently have difficulty functioning in social situations and
may have problems effectively communicating with others.
Sexual and gender identity disorders. Disorders of sexual desire, arousal, and
performance. It should be noted that the categorization of gender identity
disorder as a mental illness has been a point of some contention among
mental health professionals.
Diagnosis
Patients with symptoms of psychosocial disorders or other mental illness should
undergo a thorough physical examination and patient history to rule out an organic
cause for the illness (such as a neurological disorder). If no organic cause is
suspected, a psychologist or other mental healthcare professional will meet with the
patient to conduct an interview and take a detailed social and medical history. If the
patient is a minor, interviews with a parent or guardian may also be part of the
diagnostic process. The physician may also administer one or more psychological
tests (also called clinical inventories, scales, or assessments).
Treatment
Counseling is typically a front-line treatment for psychosocial disorders. A number of
counseling or talk therapy approaches exist, including psychotherapy, cognitive
therapy, behavioral therapy, and group therapy. Therapy or counseling may be
administered by social workers, nurses, licensed counselors and therapists,
psychologists, or psychiatrists.
Psychoactive medication may also be prescribed for symptom relief in patients with
mental disorders considered psychosocial in nature. For disorders such as major
depression or bipolar disorder, which may have psychosocial aspects but also have
known organic causes, drug therapy is a primary treatment approach. In cases such
as personality disorder that are thought to not have biological roots, psychoactive
medications are usually considered a secondary, or companion treatment to
psychotherapy.
Many individuals are successful in treating psychosocial disorders through regular
attendance in self-help groups or 12-step programs such as Alcoholics Anonymous.
This approach, which allows individuals to seek advice and counsel from others in
similar circumstances, can be extremely effective.
In some cases, treating mental illness requires hospitalization of the patient. This
hospitalization, also known as inpatient treatment, is usually employed in situations
where a controlled therapeutic environment is critical for the patient's recovery
(e.g., rehabilitation treatment for alcoholism or other drug addictions), or when
there is a risk that the patient may harm himself (suicide) or others. It may also be
necessary when the patient's physical health has deteriorated to a point where life-sustaining
treatment is necessary, such as with severe malnutrition associated with anorexia nervosa.
Alternative treatment
Therapeutic approaches such as art therapy that encourage self-discovery and
empowerment may be useful in treating psychosocial disorders. Art therapy, the
use of the creative process to express and understand emotion, encompasses a
broad range of humanistic disciplines, including visual arts, dance, drama, music,
film, writing, literature, and other artistic genres. This use of the creative process is
believed to provide the patient/artist with a means to gain insight to emotions and
thoughts they might otherwise have difficulty expressing. After the artwork is
created, the patient/artist continues the therapeutic journey by interpreting its
meaning under the guidance of a trained therapist.
Key terms
Affective disorder An emotional disorder involving abnormal highs and/or lows
in mood.
Bipolar disorder An affective mental illness that causes radical emotional
changes and mood swings, from manic highs to depressive lows. The majority of
bipolar individuals experience alternating episodes of mania and depression.
Bulimia An eating disorder characterized by binge eating and inappropriate
compensatory behavior such as vomiting, misusing laxatives, or excessive exercise.
Cognitive processes Thought processes (i.e., reasoning, perception, judgment,
memory).
Learning disorders Academic difficulties experienced by children and adults of
average to above-average intelligence that involve reading, writing, and/or
mathematics, and which significantly interfere with academic achievement or daily
living.
Schizophrenia A debilitating mental illness characterized by delusions,
hallucinations, disorganized speech and behavior, and flattened affect (i.e., a lack of
emotions) that seriously hampers normal functioning.
Prognosis
According to the National Institute of Mental Health, more than 90% of Americans
who commit suicide have a diagnosable mental disorder, so swift and appropriate
treatment is important. Because of the diversity of types of mental disorders
influenced by psychosocial factors, and the complexity of diagnosis and treatment,
the prognosis for psychosocial disorders is highly variable. In some cases, they can
Prevention
Patient education (i.e., therapy or self-help groups) can encourage patients to take
an active part in their treatment program and to recognize symptoms of a relapse of
their condition. In addition, educating friends and family members on the nature of
the psychosocial disorder can assist them in knowing how and when to provide
support to the patient.
Resources
Periodicals
Epperly, Ted D., and Kevin E. Moore. "Health Issues in Men: Part II. Common
Psychosocial Disorders." American Family Physician 62 (July 2000): 117-24.
Organizations
National Institute of Mental Health. 6001 Executive Boulevard, Rm. 8184, MSC 9663,
Bethesda, MD 20892-9663. (301) 443-4513.
Other
Satcher, David. Mental Health: A Report of the Surgeon General. Washington, DC:
Government Printing Office, 1999.
Surveillance
Acute Flaccid Paralysis (AFP) surveillance
Nationwide AFP (acute flaccid paralysis) surveillance is the gold standard for detecting cases of
poliomyelitis. The four steps of surveillance are:
1. finding and reporting children with acute flaccid paralysis (AFP)
2. transporting stool samples for analysis
3. isolating and identifying poliovirus in the laboratory
4. mapping the virus to determine the origin of the virus strain.
Environmental surveillance
Environmental surveillance involves testing sewage or other environmental samples for the
presence of poliovirus. Environmental surveillance often confirms wild poliovirus infections in
the absence of cases of paralysis. Systematic environmental sampling (e.g. in Egypt and
Mumbai, India) provides important supplementary surveillance data. Ad-hoc environmental
surveillance elsewhere (especially in polio-free regions) provides insights into the international
spread of poliovirus.
Surveillance indicators
Non-polio AFP rate: At least one case of non-polio AFP should be detected annually per
100 000 population aged less than 15 years. In endemic regions, to ensure even higher
sensitivity, this rate should be two per 100 000.
Completeness of case investigation: All AFP cases should have a full clinical and virological
investigation, with at least 80% of AFP cases having adequate stool specimens collected.
Adequate stool specimens are two stool specimens of sufficient quantity for laboratory
analysis, collected at least 24 hours apart, within 14 days after the onset of paralysis, and
arriving in the laboratory by reverse cold chain.
Completeness of follow-up: At least 80% of AFP cases should have a follow-up examination
for residual paralysis at 60 days after the onset of paralysis.
Laboratory performance
Poliomyelitis
Description: Poliomyelitis, or polio, is a crippling disease caused by any one of three related
viruses, poliovirus types 1, 2 or 3. The only way to spread poliovirus is through the faecal/oral
route. The virus enters the body through the mouth when people eat food or drink water that is
contaminated with faeces. The virus then multiplies in the intestine, enters the bloodstream, and
may invade certain types of nerve cells, which it can damage or destroy. Polioviruses spread very
easily in areas with poor hygiene.
Prevention: Live oral polio vaccine (OPV) - four doses in endemic countries; or inactivated polio vaccine (IPV) given by injection - two to three doses depending on the country schedule.
in 2011 and remains in only 4 countries, but this is 17 years after the initial date set for its
eradication.
Polio and Guinea worm eradication offer malaria some lessons for the present, both for countries approaching pre-elimination now and for those that will hopefully join them over the next decade (if global funding levels are maintained). One lesson is that surveillance is an active part of current polio eradication efforts; otherwise these reports on progress and its challenges would not be published. But the key lesson is that regardless of the effectiveness of the technical intervention (e.g. a vaccine), deployment of that intervention is subject to human, administrative, managerial and social complications.
Polio focuses on a vaccine; malaria has treatment medicines, preventive medicines, insecticide
sprays, treated bednets, diagnostic tests, and maybe also one day an effective vaccine. It is not
too early to plan on how to coordinate all this into achieving effective disease elimination,
nationally, regionally and globally.
This entry was posted on Saturday, November 24th, 2012 at 5:35 pm and is filed under Eradication.
Challenge
Often the biggest challenge facing developing countries is not a lack of information. More
frequently, the challenge is bringing together the many disparate sources and types of data that
are being produced. Developing countries are often overwhelmed with the distinct and
competing requirements for data tied to external program investments, with the greatest burden
falling on the lowest levels of the health system.
In addition, routinely collected data (such as facility-based data) are often not linked with intermittently collected data (such as survey and census data), leaving large gaps in measuring health systems performance. These challenges often keep policymakers from modifying health planning and resource allocation based on accurate, timely and relevant health systems data.
Approach
The Health Systems 20/20 strategy for measuring and monitoring health systems is to provide, and maximize the use of, a set of established and innovative tools that create standardized measurements. A key starting point for countries is to identify the relative strengths and weaknesses of the health system, priority issues, and potential recommendations by conducting a health systems assessment.
After an initial assessment, health information systems tools can be implemented to improve
linkages between health care entities at the local, regional, and central levels to increase the flow
of accurate, complete data in a timely manner. HIS strengthening includes leveraging key
analytical tools, such as Geographic Information System technology, to identify trends that
inform program planning and decision making and to correlate service delivery with health
outcomes.
At higher levels, the web-based Health Systems Database allows users to easily compile and
analyze country data from multiple sources to quickly assess the performance of a country's
health system, benchmark performance against other countries on key indicators, and monitor
progress toward system strengthening goals.
Lung cancer in women differs from lung cancer in men in many ways. Yet, despite obvious
differences in our appearance, we tend to lump men and women together when talking about
lung cancer. This is unfortunate, since the causes, response to various treatments, survival rate,
and even symptoms to watch for differ. What are some facts about lung cancer in women?
Once considered a man's disease, lung cancer is no longer discriminatory. In 2005, the last year for which we have statistics, 82,271 women (vs 107,416 men) were diagnosed with lung cancer, and 69,078 women (vs 90,139 men) died of the disease.
While lung cancer diagnoses decreased each year from 1991-2005 for men, the incidence
increased 0.5% each year for women. The reason for this is not completely clear.
Lung cancer in women occurs at a slightly younger age, and almost half of lung cancers in
people under 50 occur in women.
Causes
Even though smoking is the number one cause of lung cancer in women, a higher
percentage of women who develop lung cancer are life-long non-smokers. Some of
the causes may include exposure to radon in our homes, secondhand smoke, other
environmental and occupational exposures, or a genetic predisposition. Recent
studies suggest infection with the human papilloma virus (HPV) may also play a
role.
Smoking Status
Some, but not all, studies suggest that women may be more susceptible to the
carcinogens in cigarettes, and women tend to develop lung cancer after fewer years
of smoking.
BAC (bronchioloalveolar carcinoma) is a rare form of lung cancer that is more common in
women. For unknown reasons, the incidence of BAC appears to be increasing worldwide,
especially among younger, non-smoking women.
Symptoms
We hear about the symptoms of a heart attack being different in women than in men. The same may hold true for lung cancer. Squamous cell lung cancer (the type more common in men) grows near the airways and often presents with the classic symptoms of lung cancer, such as a cough and coughing up blood.
Adenocarcinoma (the type of lung cancer that is more common in women) often
develops in the outer regions of the lungs. These tumors can grow quite large or
spread before they cause any symptoms. Symptoms of fatigue, the gradual onset of
shortness of breath, or chest and back pain from the spread of lung cancer to bone,
may be the first sign that something is wrong.
Treatment
Women have historically responded better than men to a few of the chemotherapy medications used for lung cancer. One of the new targeted therapies, erlotinib (Tarceva), also appears to be more effective in women. Women who are able to be treated with surgery for lung cancer also tend to fare better. In one study, the median survival after surgery for lung cancer was twice as long for women as for men.
On the other hand, even though the National Cancer Institute recommends that all patients with
stage 3 lung cancer be considered candidates for clinical trials, women are less likely to be
involved in clinical trials than are men.
Survival
The survival rate for lung cancer in women is higher than for men at all stages of
the disease. Sadly, the overall 5-year survival rate is only 16% (vs 12% for men).
The more recorded measurements you have, the better! Seeing a "pattern" of growth over several years helps you understand how your child has progressed. Most pediatric endocrinologists (growth specialists) want at least 12 months of measurements (measuring at the beginning and end of that year) to establish a growth pattern.
IMPORTANT NOTE: If you get measurement records from other sources, you MUST be careful! If they measured your child incorrectly (with his/her shoes on, with "items" in their hair, with feet not totally flat, or without making them stretch fully, etc.), it will make a big difference on their growth chart as it is plotted out. So don't panic if some items don't seem to line up correctly.
A growth chart shows how a child's height compares to other children the
exact same age and sex. After the age of 2, most children maintain fairly
steady growth until they hit puberty. They generally follow close to the same
percentile they had at the age of 2. Children over 2 years of age who move away (losing or gaining more than 15 percentile points) from their established growth curve should be thoroughly evaluated and followed by a doctor, no matter how tall they are. Here is an example of a growth chart and
an explanation about how to read/figure it out.
On each growth chart there is a series of lines curving from the lower left
and climbing up to the right side of the chart. These lines help people follow
along (so to speak) so that you can see where your child falls on a growth
curve.
If you are concerned about your child's height or weight, talk with your
Pediatrician. Continue to watch the growth annually (or more frequently if you
see your child falling below a normal pattern). It is also important to make
sure your child is not crossing percentiles in an upward swing because this
too can represent a problem (see Precocious Puberty).
Our imaginary child, Sally Sue, is 6 years old and stands 45.5 inches (115 cm) tall for
this example.
If you look at the very bottom of the chart- you will see numbers starting with
the number 2. Those numbers are the age of the child. In this example-we
listed our sample girl as being 6 years old. Therefore, her growth is on that
line on the bottom.
Next we found the mark on the left side of the page that matched her height
45.5 inches (115cm).
After we had her height and her age...we matched the two points and placed
the blue dot where that information met.
Now.... see the curved lines going from the lower left side upwards towards
the right side? Those lines have numbers too. If you enlarge the picture (or
look at the growth chart generated from our automatic growth chart) you will
see that the lowest line is the 3rd percentile and the top curved line is the
97th percentile. This chart shows that Sally Sue is on the 50th percentile.
That means that out of 100 girls her same age, half are taller and half are shorter than she is. If she were on the 10th percentile, it would mean that she was taller than 10 girls and shorter than 90 girls her same age. At the
50th percentile (following the line to age 16 or so) you might GUESStimate
that her final adult height would be somewhere between 63.5 and 64.5
inches tall. But that is true guesswork! Genetics and many other factors play a
huge role in this process.
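The chart-reading steps above can also be mimicked numerically: given a few charted (height, centile) points for one age, a child's centile can be estimated by linear interpolation. A rough sketch, using illustrative values for a 6-year-old girl rather than actual CDC chart data:

```python
# Illustrative (height_cm, centile) pairs for a single age; real charts
# tabulate such curves for every month of age.
CENTILES_AGE_6_GIRLS = [(104.0, 3), (108.0, 10), (111.0, 25),
                        (115.0, 50), (119.0, 75), (122.0, 90), (126.0, 97)]

def approx_centile(height_cm, table=CENTILES_AGE_6_GIRLS):
    """Linearly interpolate a child's centile from charted points."""
    if height_cm <= table[0][0]:   # at or below the lowest charted line
        return table[0][1]
    if height_cm >= table[-1][0]:  # at or above the highest charted line
        return table[-1][1]
    for (h0, c0), (h1, c1) in zip(table, table[1:]):
        if h0 <= height_cm <= h1:
            return c0 + (c1 - c0) * (height_cm - h0) / (h1 - h0)

print(approx_centile(115.0))  # Sally Sue's 115 cm -> 50.0 (50th centile)
```

As with the paper chart, the single number matters less than how it moves over repeated measurements.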
You can also read about how to interpret these charts for your child.
The growth charts are those used by the Centers for Disease Control and Prevention (CDC) - the under-two charts are based on data from the WHO (World Health Organization), as the WHO data are better for children under 2 years.
Before you open the links to the infant growth chart of your choice, you will want to understand
how growth charts are made.
Growth Charts are created by looking at a cross section of the
population at one time and then plotting the weight and height of
all the infants and toddlers.
There is a range because we are not all the same size. That range is represented by
centile (or percentile) lines on the child or infant growth chart.
The growth charts here have lines representing the 3rd, 10th, 25th, 50th, 75th,
90th, and 97th centiles (also called percentiles) for the over 2 year olds and lines
representing 2nd, 5th, 10th, 25th, 50th, 75th, 90th, 95th and 98th centiles for under
2 year olds.
The 3rd centile line gives an indication where the lower end of the normal range is - actually 3%
of normal infants and toddlers will be below the 3rd centile (or for the 2nd centile, 2% of the
population will sit below this centile).
The 50th centile is where 50% of the population will sit.
The 97th centile gives an indication where the upper end of the normal range is - actually 3% of normal infants and toddlers will be above the 97th centile.
So anywhere between the 2nd and 98th centiles is appropriate growth. It can be
normal to be slightly above the 98th centile or slightly below the 2nd centile. What
is more important than an individual reading is the trend.
Back to list
What is normal growth?
It is far more important to look at the toddler or infant growth chart trend than one reading.
Generally infants and toddlers should follow one centile line (or grow parallel to one centile line)
for height and weight.
Trends are easier to see when time has passed so don't be concerned if there isn't
an appropriate increase in weight over 1 week - wait and see what happens over 3
months. Children get lots of viral illnesses so they may have weight that fluctuates
with those illnesses - over time, they will usually manage to put on the required
weight.
Normal growth is a trend that follows a centile line and is similar for height and
weight on the infant growth chart.
Back to list
What does it mean if my baby crossed centile lines?
Sometimes, there will be a natural moving across the centile lines for weight on the
infant growth chart in the first 6 months or so. This is because babies who are
destined to be small people, because of their genes, can be big babies. They have
to get on their "right" centile line and will do this over the first months.
This is called "Catch Down Growth" but once your baby finds her growth centile,
she should follow that line on the infant growth chart. If she keeps crossing centile
lines, that is not normal. Usually, "catch down growth" involves starting at a high
centile like 90th and then crossing no more than 2 centile lines, say to the 50th on
the infant growth chart.
I often have babies referred to me because their weight is falling away from the
initial centile on the infant growth chart. If the baby is well and is feeding
appropriately, I don't worry too much and just wait and see what happens over the
next month. I don't advocate weekly weighing in these cases because it can be
misleading and stressful. Particularly if you are breast-feeding your baby, you don't
need to be stressed about your baby's weight.
Back to list
Is length in babies a reliable measurement?
Not usually. It depends how much your baby is stretched out before measuring.
Height is a more reliable measurement when your child can stand up straight.
Back to list
How do I interpret weight centiles that are different from height centiles?
As well as looking at the trend, it is also important to look at the weight in relation
to the height - being on the 90th centile for weight is not appropriate if your toddler
is on the 2nd or 3rd centile for height.
Often infants and toddlers are one centile apart for weight and height and this is
usually not a problem - so on the 10th centile for height and the 25th centile for
weight or vice versa is fine.
Back to list
What if my child's weight centile is much higher than her height centile?
your baby is less than the 3rd centile for weight and is growing away from the
centiles
BMI cut-off points for underweight, overweight, and four levels of obesity
SAAL seasonal awareness alert letter, measles and dengue stage measured
in terms of DEWS, DMIS
A child suffering from polio brought for vaccination to a clinic: which vaccines to give? The same as given at stage 0, after looking for the BCG scar.
Carriers
Bioterrorism
Hawthorne effect
Health economics: dollars per life-year gained, dollars per quality-adjusted life-year gained. These are measures used in cost-effectiveness analysis, which measures outcome in terms of DALYs.
Cost-benefit analysis: outcome measured in monetary terms - not suitable for health.
MDG
PERT analysis
Personality traits
Psychosomatic disorders
Psychosocial disorders
FCPS
Ob gene
Hidden hunger
Strategic planning
VIVA FCPS-II
I remember only these questions. There were also many graphs for interpretation. They also
asked about different sampling techniques and research designs.
Q.1 Qualities of a good leader
2. What is health sector reform?
3. What is validity of a research design? How will you increase the validity of a study?
4. What are types of validity?
5. What social factors are responsible for differences in development between developed &
developing countries?
6. What is motivation & what are its types?
7. Causes of lung cancer in women. Why is its incidence increasing in women?
8. What is the pink ribbon strategy?
9. What is confounding and how is it removed?
10. What percentage of the budget is allocated to health in Pakistan & how much should it ideally be?
11. What is the chemoprophylaxis & treatment of leprosy?
Introduction
Leprosy (Hansen's disease (HD) ) is a chronic infectious disease, caused by the bacillus
Mycobacterium leprae, which affects the skin and peripheral nerves leading to skin lesions, loss
of sensation, and nerve damage. This in turn can lead to secondary impairments or deformities of
the eyes, hands and feet. For treatment purposes, leprosy is classified as either paucibacillary
(PB) or multibacillary (MB) leprosy. The standard treatment for leprosy is multidrug therapy
(MDT) [1]. PB patients are treated for 6 months with dapsone and rifampicin; MB patients are
treated for 12 months with dapsone, rifampicin and clofazimine.
The World Health Organisation (WHO) had set a goal in the early 1990s to eliminate leprosy as a
public health problem by the year 2000. Elimination was defined as reducing the global
prevalence of the disease to less than 1 case per 10 000 population [2]. The WHO elimination
strategy was based on increasing the geographical coverage of MDT and patients' accessibility to
the treatment. The expectation existed that reduction in prevalence through expanding MDT
coverage would eventually also lead to reduction in incidence of the disease and ultimately to
elimination in terms of zero incidence of the disease. An important assumption underlying the
WHO leprosy elimination strategy was that MDT would reduce transmission of M. leprae
through a reduction of the number of contagious individuals in the community [3].
Unfortunately, there is no convincing evidence for this hypothesis [4].
With a total of 249 007 new patients detected globally in 2008 [5], it remains necessary to
develop new and effective interventions to interrupt the transmission of M. leprae. BCG
vaccination against tuberculosis offers some but not full protection against leprosy and in the
absence of another more specific vaccination against the bacillus other strategies need to be
developed, such as preventive treatment (chemoprophylaxis) of possible sub-clinically infected
people at risk of developing leprosy. Recently, the results were published of a randomised
controlled trial into the effectiveness of single dose rifampicin (SDR) in preventing leprosy in
contacts of patients [6]. It was shown that this intervention is effective at preventing the
development of leprosy at two years and that the initial effect was maintained afterwards.
In order to assess the economic benefits of SDR as an intervention in the control of leprosy, we
performed a cost-effectiveness analysis. We provide an overview of the direct costs of this new
chemoprophylaxis intervention and calculate the cost-effectiveness compared to standard MDT
provision only.
Discussion
Chemoprophylaxis with single dose rifampicin for preventing leprosy among contacts is a cost-effective prevention strategy. At program level an incremental $6 009 was invested and 38 incremental leprosy cases were prevented, resulting in an ICER of $158 per one additional prevented leprosy case.
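The ICER figure above is simply the incremental cost divided by the incremental effect; a minimal sketch:

```python
def icer(incremental_cost, incremental_effect):
    """Incremental cost-effectiveness ratio: extra dollars invested per
    extra unit of health outcome gained by the new intervention."""
    return incremental_cost / incremental_effect

# Figures from the analysis: $6 009 extra invested,
# 38 extra leprosy cases prevented.
print(round(icer(6009, 38)))  # -> 158 dollars per additional case prevented
```

Note that the denominator here is cases prevented, not DALYs, for the reasons the authors discuss below.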
This is the first report on cost-effectiveness of single dose rifampicin as chemoprophylaxis in
contacts of leprosy patients. The analysis is based on the results of a large randomized controlled
trial in Bangladesh [6]. For the analysis, the health care perspective was taken because indirect
cost data were largely unavailable. The health care perspective excludes indirect costs (patient
costs), such as travel costs, loss of income due to illness and clinic visits, and long term
consequences of disability. Estimating these costs was beyond the scope of this study, but
inclusion would have rendered the intervention even more cost-effective. Another limitation of
the study is that a static approach was taken to the analysis, measuring the effect of the
intervention after two years only. After these two years, there was no further reduction of new
cases in the chemoprophylaxis arm of the trial compared to the placebo arm. Because leprosy is
an infectious disease, with person-to-person transmission of M. leprae, one can expect that
prevention of primary cases (as recorded in the trial) will lead to further prevention of secondary
cases. In time, this would lead to further cost-effectiveness of the intervention. Unfortunately, we
could not apply such a dynamic analysis approach because there is insufficient information about
the long term effects of the intervention, including the number of secondary cases prevented and
the number of primary cases prevented after two years that will eventually develop leprosy after
a longer period of time, beyond the 4-year observation period of the trial.
It is also important to understand that the results of the COLEP trial reflect a comparison
between the chemoprophylaxis intervention and standard MDT treatment plus contact surveys at
2-year intervals with treatment of newly diagnosed cases among contacts. A contact survey in
itself is an intervention that reduces transmission in contact groups and thus new leprosy patients
among contacts. The provision of chemoprophylaxis to contacts requires contact tracing, but
contact tracing is not part of leprosy control programs in many countries and doing so would
increase program costs considerably. WHO however, recognizes the importance of contact
tracing and now recommends that it be introduced in all control programs [21]. This would then
also lay a good foundation for introducing chemoprophylaxis.
WHO reports regarding cost-effectiveness analyses recommend using disability adjusted life
years (DALY) as outcome measure for such studies [22]. In leprosy two measures are common to
express disability: WHO grade 1 and 2 [23]. The disability weight for grade 2 disability (visible
deformity) has been determined at 0.153 [24], but no weight is available for grade 1. Of all
newly detected leprosy cases, a relatively low percentage (2-35%) have grade 2 disability [25].
In our study we chose the number of leprosy cases prevented as the outcome, because there is little information available about survival of patients with grade 2 disability and also because the choice of DALYs would have given a less favourable result due to the low weight of leprosy disability.
There are a number of issues to take into account when relating the outcome of this study to
other countries. Firstly, the cost level to conduct leprosy control will differ per country, due to
economic standard, budget allocated to primary health care, salaries of health care workers, etc.
In our calculation, program costs were similar for both the standard MDT treatment and
chemoprophylaxis intervention, but these costs will vary per country. The treatment costs are
based on real cost estimates and will vary less between countries and programs. Therefore the
actual costs will differ, but the conclusion that the intervention is cost-effective is very likely to
remain the same. Secondly, the clinical presentation of leprosy differs between countries and
regions. Globally the distribution is around 40% for MB and 60% for PB in newly detected
leprosy cases, but with widely varying ratios between countries [25]. Since costs for treating PB
and MB leprosy are different, these differences are likely to affect the outcome of the cost-effectiveness analysis. Thirdly, the percentage of newly detected cases that are a household
contact of a known leprosy patient differs per country and is possibly determined by the
endemicity level of leprosy in a country or area. In Bangladesh, in the high endemic area where
the COLEP study was conducted, approximately 25% of newly detected cases had a known
index case within the family, whereas in a low endemic area (Thailand) this proportion was 62%
[26]. An intervention aimed at close (household) contacts may therefore be more cost-effective in
countries where relatively many new cases are household contacts. But the background and
implications of such differences on effectiveness of chemoprophylaxis needs further research.
Only few articles have been published about cost-effectiveness analyses of interventions in
leprosy [27]. Most articles assess small parts of leprosy control, such as footwear provision [28],
MDT delivery costs [29], or the economic aspects of hospitalisation versus ambulatory care of
neuritis in leprosy reactions [30]. Only two studies provided a more general cost-effect analysis.
Naik and Ganapati included several costs in their economic evaluation, but a limitation of the
study is the lack of reference about how they obtained their cost data [31]. Remme et al. based
the cost calculations in their study on the limited available published cost data, program
expenditure data and expert opinion, and also provide limited insight into how they obtained
certain costs and effects [30]. Neither study describes well how the costs were obtained (e.g. real costs, bottom-up or top-down costing). Our current article is one of the first structured cost-effectiveness analyses for leprosy, presenting an overview of the costs involved, and can be used for the assessment of the costs of leprosy control in general.
This report shows that chemoprophylaxis with single dose rifampicin given to contacts of newly
diagnosed leprosy patients is a cost-effective intervention strategy. Implementation studies in the
field are necessary to establish whether this intervention is acceptable and feasible in other
leprosy endemic areas of the world.
Treatment of leprosy
Several drugs are used in combination in multidrug therapy (MDT). (See table) These drugs must
never be used alone as monotherapy for leprosy.
Dapsone, which is bacteriostatic or weakly bactericidal against M. leprae, was the mainstay
treatment for leprosy for many years until widespread resistant strains appeared. Combination
therapy has become essential to slow or prevent the development of resistance. Rifampicin is
now combined with dapsone to treat paucibacillary leprosy. Rifampicin and clofazimine are
now combined with dapsone to treat multibacillary leprosy.
A single dose of combination therapy has been used to cure single lesion paucibacillary leprosy:
rifampicin (600 mg), ofloxacin (400 mg), and minocycline (100 mg). The child with a single
lesion takes half the adult dose of the 3 medications.
WHO has designed blister pack medication kits for both paucibacillary leprosy and for
multibacillary leprosy. Each easy-to-use kit contains medication for 28 days. The blister pack
medication kit for single lesion paucibacillary leprosy contains the necessary medication for the
one time administration of the 3 medications.
Any patient with a positive skin smear must be treated with the MDT regimen for multibacillary
leprosy. The regimen for paucibacillary leprosy should never be given to a patient with
multibacillary leprosy. Therefore, if the diagnosis in a particular patient is uncertain, treat that
patient with the MDT regimen for multibacillary leprosy.
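The regimen-selection rules above (PB vs MB, with smear-positive or uncertain cases always receiving the MB regimen) can be sketched as a simple decision function; the function name and return structure are illustrative, not from the source, and the single-lesion ROM regimen is not covered:

```python
def mdt_regimen(classification, skin_smear_positive=False):
    """Choose the WHO MDT regimen following the rules described above.

    classification: "PB", "MB", or None when the diagnosis is uncertain.
    """
    if skin_smear_positive or classification != "PB":
        # Positive smear, MB disease, or uncertain diagnosis: never give
        # the PB regimen -> treat as multibacillary for 12 months.
        return {"drugs": ("rifampicin", "dapsone", "clofazimine"), "months": 12}
    # Confirmed paucibacillary: 6 months of rifampicin + dapsone.
    return {"drugs": ("rifampicin", "dapsone"), "months": 6}

print(mdt_regimen("PB")["months"])   # -> 6
print(mdt_regimen(None)["months"])   # -> 12
```

The asymmetry is deliberate: under-treating MB disease risks relapse and resistance, so doubt always resolves toward the longer regimen.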
Ideally, the patient should go to the leprosy clinic once a month so that clinic personnel may
supervise administration of the drugs prescribed once a month. However, many countries with
leprosy have poor coverage of health services and monthly supervision of drug administration by
health care workers may not be possible. In these cases, it may be necessary to designate a
responsible third party, such as a family member or a person in the community, to supervise the
monthly drug administration. Where health care service coverage is poor and supervision of the
monthly administration of drugs by health workers is not possible, the patient may be given more
than one 28-day supply of multidrug therapy blister packs. This tactic helps make multidrug
therapy easily available, even to those patients who live under difficult conditions or in remote
areas. Patients who ask for diagnosis and treatment are often sufficiently motivated to take full
responsibility for their own treatment of leprosy. In this situation, it is important to educate the
patient regarding the importance of compliance with the regimen and to give the patient
responsibility for taking his or her medication correctly and for reporting any untoward signs and
symptoms promptly. The patient should be warned about possible lepra reactions.
12. Prevention of dengue fever. What is WHO doing for dengue fever?
Dengue is transmitted by the bite of a mosquito infected with one of the four dengue virus
serotypes. It is a febrile illness that affects infants, young children and adults with symptoms
appearing 3-14 days after the infective bite.
Dengue is not transmitted directly from person to person, and symptoms range from mild fever,
to incapacitating high fever, with severe headache, pain behind the eyes, muscle and joint pain,
and rash. There is no vaccine or any specific medicine to treat dengue. People who have dengue
fever should rest, drink plenty of fluids and reduce the fever using paracetamol or see a doctor.
Severe dengue (also known as dengue hemorrhagic fever) is characterized by fever, abdominal
pain, persistent vomiting, bleeding and breathing difficulty and is a potentially lethal
complication, affecting mainly children. Early clinical diagnosis and careful clinical management
by trained physicians and nurses increase survival of patients.
Sources
There are various locations, activities or factors which are responsible for releasing pollutants
into the atmosphere. These sources can be classified into two major categories.
Anthropogenic (man-made) sources:
These are mostly related to the burning of multiple types of fuel.
Fumes from paint, hair spray, varnish, aerosol sprays and other solvents
Natural sources:
Dust from natural sources, usually large areas of land with little or no
vegetation
Methane, emitted by the digestion of food by animals, for example cattle
Radon gas from radioactive decay within the Earth's crust. Radon is a
colorless, odorless, naturally occurring, radioactive noble gas that is formed
from the decay of radium. It is considered to be a health hazard. Radon gas
from natural sources can accumulate in buildings, especially in confined
areas such as the basement and it is the second most frequent cause of lung
cancer, after cigarette smoking.
15. Charts mostly include population pyramids, scatter diagrams, line graphs, and tables, e.g. related to nutrition, validity, reliability, etc.
NADRA
NADRA is one of the leading System Integrators in the global identification sector and
boasts extensive experience in designing, implementing and operating solutions for
corporate and public sector clients. NADRA offers its clients a portfolio of
customizable solutions for identification, e-governance and secure documents.
NADRA has successfully implemented the Multi-Biometric National Identity Card &
Multi-Biometric e-Passport solutions for Pakistan, Passport Issuing System for
Kenya, Bangladesh High Security Drivers License, and Civil Registration
Management System for Sudan amongst other projects.
The socioeconomic challenges facing populations, especially in developing and least-developed countries, are enormous. These challenges underscore the need to strengthen
the institutions for sustainable human development in these countries. In the context of
contemporary development discourse and practice, the UNU Institute for Sustainability
and Peace (UNU-ISP) seeks to contribute to strengthening the institutions for sustainable
human development in developing countries. To that end, the Sustainable Human
Development Programme engages in the following: i) research on governance and
transparent management of revenues from extractive minerals in resource-rich
developing countries, including the role of transnational corporations in the extractive
industry; ii) research on how international trade, investment and emerging
biotechnological innovations affect food security; iii) the implications of emerging
public-private sector partnerships for sustainable development; iv) challenges of
sustainable rural/urban livelihoods in Africa; and v) capacity development focusing on
the role of higher education in sustainable development in Africa.
The research of this programme is closely linked to the research and teaching activities of
postgraduate programmes of UNU-ISP, including the Master of Science in Sustainability,
Development and Peace and a forthcoming PhD programme, which is expected to launch
in September 2012. The programme further contributes to other UNU-ISP teaching and
capacity development activities, such as the Postgraduate Course on Building Resilience
to Climate Change and other short-term postgraduate and credited courses.
Twinning
This programme envisages twinning with the UNU Institute for Natural Resources in
Africa (UNU-INRA) to jointly organize and facilitate two project workshops in Africa: i)
Impacts of Trade and Investment-Driven Biotechnological Innovations on Food Safety
Security in Africa, and ii) Governance and Institutional Reform for the Sustainable
Development and Use of Africa's Natural Resources. In the planning of the workshops,
UNU-ISP and UNU-INRA will collaborate to identify relevant African scholars, experts
and policymakers to participate in the workshops in order to make policy-oriented
recommendations tailored to sustainable policy reform on these themes.
Focal Point
Dr. Obijiofor Aginam, Academic Programme Officer, is the focal point for this
programme.
Purpose
This programme seeks to find sustainable solutions to some of the most pressing
development issues facing developing countries: food security/hunger, management of
natural resources, rural/urban development and the role of higher education in sustainable
development in Africa.
Approach
Links with the parallel UNU-ISP programmes, the relevant UNU institutes, and leading
universities mostly in developing countries, as well as civil society, will facilitate a
holistic approach to these international development problems. The programme will
combine perspectives from a range of disciplines in the natural sciences, social sciences
and humanities in researching these problems.
Gender
Equitable geographical and gender representation will be sought in all the programme's
activities, including selection of project participants and access to research outcomes,
with particular attention to developing countries. It is envisaged that there will be a
gender balance, for instance, in the selection of workshop participants and other planned
outreach activities.
Target Audience
The programme aims to advance academic and policy debate that will enrich the
academic and policymaking communities in most developing countries; the UN system;
international, regional and sub-regional organizations; national governments; and civil
society.
Intended Impact
Impact: Influencing policymaking in the United Nations System
Target: This programme targets the United Nations Development Programme (UNDP)
and all other UN agencies working on the UN Millennium Development Goals, United
Nations Conference on Trade and Development (UNCTAD), the Food and Agriculture
Organization (FAO), the World Health Organization (WHO), the United Nations
Environment Programme (UNEP), the Secretariat of the Convention on Biological
Diversity, UN-Habitat, United Nations Industrial Development Organization (UNIDO)
and the United Nations Educational, Scientific and Cultural Organization (UNESCO).
How: The work and mandate of each of these UN agencies relate to aspects of
development. This programme contributes by addressing the gaps and limits of their
policies by generating concrete outputs of the highest quality to catalyze and inform
policy reform in the work and mandates of the relevant UN agencies.
Impact: Influencing policymaking at the national level
Target: The programme targets relevant development-related sectors in developing
countries.
How: This programme generates ideas aimed at policy reform in the relevant
development sectors of most developing countries. Applied policy recommendations are
made through the publication and dissemination of policy briefs and other outreach
activities.
Impact: Furthering knowledge in an academic field
Target: The programme focuses on such topics as governance and management of
Africa's natural resources, biotechnology, foreign investment/trade and food security,
rural/urban livelihoods, transnational corporations and foreign direct investment, and
higher education and development in Africa.
How: This programme brings together scholars from the relevant disciplines to study and
produce policy briefs and edited books aimed at addressing the gaps in the literature
relevant to these topics.
Impact: Curriculum development
Target: The programme targets selected universities and higher education sectors in
Africa and Asia.
How: The programme identifies common themes in developing joint and collaborative
curricula and research networks between leading Japanese universities and universities in
Africa and other parts of Asia.
Impact: Teaching
Target: This programme relates to one of the core courses offered in the UNU-ISP
master's degree programme.
How: The UNU-ISP Master's Degree on Sustainability, Development and Peace offers a
core course on International Cooperation and Development that focuses on some of the
themes to be covered in this programme.
Research Findings
The programme builds on existing development discourse by leading scholars,
policymakers and civil society activists as well as the work of relevant regional and
international institutions. It is envisaged that the project workshops under this programme
will lead to research findings that will be published in policy briefs, special issues of
academic and policy journals and peer reviewed edited books, all aiming to inform the
academic and policy communities, especially in developing countries.
Policy Bridging
The programme targets policy reform in the development sectors of developing countries.
As such, it aims to make knowledge practically accessible and realizable by developing
user-friendly, accessible and policy-oriented recommendations. The programme aims to
bridge the dichotomy between the academic and policy communities by producing
concise policy briefs and manuals that target very specific sectors and the way forward in
addressing the gaps in these sectors.
Value Added
The programme links with relevant development programmes of leading UN agencies
UNDP, FAO, UNCTAD, UNIDO, UNESCO, and UNEP. It also links with international
development research and activities of the other research and training programmes in the
UNU system, especially the UNU World Institute for Development Economics Research,
the UNU Institute for Environment and Human Security, UNU-INRA, and the UNU
Vice-Rectorate in Europe (through the UNU-ISP Operating Unit in Bonn). It seeks to
build on the expertise and existing capacities in selected universities across the world to
address some of the most pressing socioeconomic development problems facing the
population, especially in developing countries. The programme also draws from available
expertise in civil society organizations.
Dissemination
Research to be undertaken by this programme will result in peer-reviewed academic
publications, policy briefs, and conference presentations that will be widely disseminated
within the relevant academic, (global, regional, national) policy and epistemic
communities.
Timeline/Programme Cycle
Most projects will proceed on a two-year timeline from initiation to completion, resulting
in an academic publication and/or other output within a third year.
Evaluation
Outputs of this programme will be mainly evaluated through an independent academic
peer-review process (for publications), and through student evaluations (for teaching).
Challenges
Poverty reduction strategies and development policies aimed at tackling socioeconomic
inequalities are some of the most pressing policy and governance issues facing
developing countries. One major challenge of this programme is to contribute to the
development discourse and practice in the context of the socioeconomic disparities
between the least-developed, developed and developing countries. To achieve this, the
programme seeks to build on existing good development practices and develop new
interdisciplinary perspectives and approaches to pressing human development problems
in developing countries.
Theory
Disinfection with chlorine is very popular in water and wastewater treatment
because of its low cost, ability to form a residual, and its effectiveness at low
concentrations. Although it is used as a disinfectant, it is a dangerous and
potentially fatal chemical if used improperly.
Although the disinfection process may seem simple, it is actually quite a
complicated process. Chlorination in wastewater treatment systems is a fairly
complex science which requires knowledge of the plant's effluent characteristics.
When free chlorine is added to the wastewater, it takes on various forms depending
on the pH of the wastewater. It is important to understand the forms of chlorine
which are present because each has a different disinfecting capability. The acid
form, HOCl, is a much stronger disinfectant than the hypochlorite ion, OCl-. The
graph below depicts the chlorine fractions at different pH values (Drawing by Erik
Johnston).
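The pH dependence described above can be sketched numerically. The HOCl/OCl- split follows the acid-dissociation equilibrium; the pKa of about 7.5 at 25 C used below is a textbook approximation assumed for this illustration, not a value given in the text:

```python
# Fraction of free chlorine present as HOCl vs. OCl- as a function of pH,
# from the acid-dissociation equilibrium (Henderson-Hasselbalch form).
# pKa ~ 7.5 at 25 C is an assumed textbook value.

def hocl_fraction(ph, pka=7.5):
    """Fraction of free chlorine present as hypochlorous acid (HOCl)."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

for ph in (6.0, 7.0, 7.5, 8.0, 9.0):
    f = hocl_fraction(ph)
    print(f"pH {ph:4.1f}: HOCl {f:5.1%}  OCl- {1 - f:5.1%}")
```

The numbers match the qualitative picture in the text: below pH 7 nearly all free chlorine is the stronger disinfectant HOCl, while above pH 8 the weaker hypochlorite ion dominates.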
Ammonia present in the effluent can also cause problems as chloramines are
formed, which have very little disinfecting power. Some ways to overcome the
formation of these less effective forms are to adjust the pH of the wastewater prior to chlorination
or simply to add a larger amount of chlorine. Adjusting the pH would allow
the operators to form the most desirable form of chlorine, hypochlorous acid, which
has the greatest disinfecting power. Adding larger amounts of chlorine would be an
effective way to combat the chloramines, because the ammonia present would
bond to the chlorine while further additions of chlorine would remain in the hypochlorous
acid or hypochlorite ion state.
a) Chlorine gas, when exposed to water, reacts readily to form hypochlorous acid,
HOCl, and hydrochloric acid. Cl2 + H2O -> HOCl + HCl
b) If the pH of the wastewater is greater than 8, the hypochlorus acid will dissociate
to yield hypochlorite ion: HOCl <-> H+ + OCl-. If, however, the pH is well below
7, then HOCl will remain largely undissociated.
c) If ammonia is present in the wastewater effluent, then the hypochlorous acid will
react to form one of three types of chloramines depending on the pH, temperature,
and reaction time.
Monochloramine and dichloramine are formed in the pH range of 4.5 to 8.5;
however, monochloramine is most common when the pH is above 8. When the pH
of the wastewater is below 4.5, the most common form of chloramine is
trichloramine which produces a very foul odor. The equations for the formation of
the different chloramines are as follows: (Reynolds & Richards, 1996)
Monochloramine: NH3 + HOCl -> NH2Cl + H2O
Dichloramine: NH2Cl + HOCl -> NHCl2 + H2O
Trichloramine: NHCl2 + HOCl -> NCl3 + H2O
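As a quick sanity check on the stoichiometry, the sketch below verifies that each chloramine equation carries the same atoms on both sides. The tiny formula parser is a helper written only for this illustration (it handles simple formulas without parentheses):

```python
# Verify atom balance of the chloramine formation reactions.
from collections import Counter
import re

def atoms(formula):
    """Count atoms in a simple formula like 'NH2Cl' (no parentheses)."""
    c = Counter()
    for elem, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if elem:
            c[elem] += int(n) if n else 1
    return c

def balanced(lhs, rhs):
    """True if both sides of a reaction carry the same atom counts.
    Each side is a list of (coefficient, formula) pairs."""
    def total(side):
        out = Counter()
        for k, f in side:
            for e, n in atoms(f).items():
                out[e] += k * n
        return out
    return total(lhs) == total(rhs)

assert balanced([(1, "NH3"), (1, "HOCl")], [(1, "NH2Cl"), (1, "H2O")])   # monochloramine
assert balanced([(1, "NH2Cl"), (1, "HOCl")], [(1, "NHCl2"), (1, "H2O")]) # dichloramine
assert balanced([(1, "NHCl2"), (1, "HOCl")], [(1, "NCl3"), (1, "H2O")])  # trichloramine
print("all chloramine equations balance")
```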
Chloramines are an effective disinfectant against bacteria but not against viruses.
As a result, it is necessary to add more chlorine to the wastewater to prevent the
formation of chloramines and form other stronger forms of disinfectants.
d) The final step is that additional free chlorine reacts with the chloramine to
produce hydrogen ions, water, and nitrogen gas, which will come out of solution. In
the case of the monochloramine, the following reaction occurs:
2NH2Cl + HOCl -> N2 + 3HCl + H2O
Thus, added free chlorine reduces the concentration of chloramines in the
disinfection process. Instead, the chlorine that is added is allowed to form the
stronger disinfectant, hypochlorous acid.
Perhaps the most important stage of the wastewater treatment process is the
disinfection stage. This stage is most critical because it has the greatest effect on
public health as well as the health of the world's aquatic systems. It is important to
realize that wastewater treatment is not a cut-and-dried process but requires in-depth
knowledge about the type of wastewater being treated and its characteristics to
obtain optimum results. (White, 1972)
The graph shown above depicts the chlorine residual as a function of increasing
chlorine dosage with descriptions of each zone given below (Drawing by Erik
Johnston, adapted from Reynolds and Richards, 1996).
Zone I: Chlorine is reduced to chlorides.
Zone II: Chloramines are formed.
Zone III: Chloramines are broken down and converted to nitrogen gas which
leaves the system (Breakpoint).
Zone IV: Free residual.
Therefore, it is very important to understand the amount and type of chlorine that
must be added to overcome the reductions in disinfectant strength that result from
the wastewater's characteristics.
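The breakpoint behaviour (Zone III) can be illustrated with a rough dose estimate. The sketch below assumes the theoretical stoichiometric ratio of about 7.6 mg of Cl2 per mg of ammonia nitrogen, a textbook value implied by the breakpoint reaction but not stated in the text, plus a hypothetical safety factor; it is illustrative only, not design guidance:

```python
# Rough estimate of the chlorine dose needed to pass the breakpoint.
# The 7.6:1 Cl2:NH3-N mass ratio is the theoretical stoichiometric value;
# real plants typically dose somewhat above it (hence the safety factor).

CL2_TO_NH3N_RATIO = 7.6  # mg Cl2 per mg NH3-N (theoretical)

def breakpoint_dose(nh3_n_mg_per_l, safety_factor=1.1):
    """Chlorine dose (mg/L) needed to pass the breakpoint (Zone III)."""
    return nh3_n_mg_per_l * CL2_TO_NH3N_RATIO * safety_factor

print(f"{breakpoint_dose(2.0):.1f} mg/L")  # -> 16.7 mg/L
```

Doses below this estimate leave the effluent in Zones I-III, where much of the chlorine is tied up as chloramines; doses above it build the free residual of Zone IV.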
Implementation
Water Treatment
The following is a schematic of a water treatment plant (Drawing by Matt Curtis).
Post chlorination is almost always done in water treatment, but can be replaced with
chlorine dioxide or chloramines. In this stage chlorine is fed to the drinking water
stream which is then sent to the chlorine contact basin to allow the chlorine a long
enough detention time to kill all viruses, bacteria, and protozoa that were not
removed and rendered inactive in the prior stages of treatment (Photo by Matt
Curtis).
Drinking water requires a large addition of chlorine because there must be a
residual amount of chlorine in the water that will carry through the system until it
reaches the tap of the user. After post chlorination, the water is retained in a clear
well prior to distribution. In the picture to the right, the clear pipe with the floater
designates the height of the water within the clear well. (Reynolds & Richards,
1996)
Sandwich                       Total Fat (g)   Total Calories
Hamburger                            9               260
Cheeseburger                        13               320
Quarter Pounder                     21               420
Quarter Pounder with Cheese         30               530
Big Mac                             31               560
(name not given)                    31               550
(name not given)                    34               590
Crispy Chicken                      25               500
Fish Fillet                         28               560
Grilled Chicken                     20               440
(name not given)                     ?               300
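The Total Fat column above is the kind of small data set whose spread a box-and-whisker plot summarizes. A minimal sketch of the "median of each half" quartile method, using the fat values listed in the table:

```python
# Quartiles by the "median of halves" method: order the data, split it at
# the median, then take the median of each half.

def median(xs):
    xs = sorted(xs)
    n = len(xs)
    mid = n // 2
    return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

def quartiles(xs):
    xs = sorted(xs)
    n = len(xs)
    lower, upper = xs[: n // 2], xs[(n + 1) // 2 :]
    return median(lower), median(xs), median(upper)

fat = [9, 13, 21, 30, 31, 31, 34, 25, 28, 20]  # grams, from the table above
print(quartiles(fat))  # -> (20, 26.5, 31)
```

The middle half of the data (the "box") runs from the first quartile, 20 g, to the third quartile, 31 g.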
Predicting:
- If you are looking for values that fall within the plotted values, you are interpolating.
- If you are looking for values that fall outside the plotted values, you are extrapolating. Be careful when extrapolating: the further away from the plotted values you go, the less reliable your prediction is.
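A minimal numeric sketch of the distinction, using made-up plotted values: interpolation stays inside the data range, while extrapolation applies a fitted line beyond it and grows less reliable with distance:

```python
# Interpolation vs. extrapolation on a small set of plotted points.
# numpy.interp only interpolates (it clamps outside the data range), so
# extrapolation here uses a straight line fitted to the data.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])

# Interpolating: x = 2.5 lies within the plotted values [1, 4]
print(np.interp(2.5, x, y))  # -> 5.05

# Extrapolating: x = 10 lies outside the plotted values
slope, intercept = np.polyfit(x, y, 1)
print(slope * 10 + intercept)  # increasingly unreliable far from the data
```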
[Map legend: Very High (55 countries), High (55 countries), Medium (57 countries), Low (47 countries), Not Classified (16 countries)]
The Human Poverty Index (HPI) was an indication of the standard of living in a country,
developed by the United Nations (UN) to complement the Human Development Index (HDI) and
was first reported as part of the Human Development Report in 1997. It was considered to better
reflect the extent of deprivation in developed countries compared to the HDI. In 2010 it was
supplanted by the UN's Multidimensional Poverty Index.
The HPI concentrates on the deprivation in the three essential elements of human life already
reflected in the HDI: longevity, knowledge and a decent standard of living. The HPI is derived
separately for developing countries (HPI-1) and a group of select high-income OECD countries
(HPI-2) to better reflect socio-economic differences and also the widely different measures of
deprivation in the two groups.
Why is the MPI better than the Human Poverty Index (HPI) which was previously
used in the Human Development Reports?
The MPI replaced the HPI, which was published from 1997 to 2009. Pioneering in its
day, the HPI used country averages to reflect aggregate deprivations in health,
education, and standard of living. It could not identify specific individuals,
households or larger groups of people as jointly deprived. The MPI addresses this
shortcoming by capturing how many people experience overlapping deprivations
(prevalence) and how many deprivations they face on average (intensity). The MPI
can be broken down by indicator to show how the composition of multidimensional
poverty changes for different regions, ethnic groups and so on, with useful
implications for policy.
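The prevalence-times-intensity structure described above can be sketched directly. The headcount and intensity values below are made up for illustration:

```python
# MPI as prevalence times intensity: the headcount ratio H (share of
# people identified as multidimensionally poor) multiplied by the average
# intensity A (mean share of weighted deprivations among the poor).

def mpi(headcount_ratio, avg_intensity):
    return headcount_ratio * avg_intensity

H = 0.40  # 40% of people are multidimensionally poor (made-up value)
A = 0.55  # the poor are deprived in 55% of weighted indicators on average
print(f"MPI = {mpi(H, A):.3f}")  # -> MPI = 0.220
```

Because both factors enter the product, two regions with the same headcount can have different MPI values if the poor in one face more simultaneous deprivations.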
The human poverty index for industrialised countries uses the same
dimensions as the previous index, but the variables and reference
values are different:
Survival: the likelihood of death at a relatively early age, represented by the probability of
not surviving to ages 40 and 60 respectively for the HPI-1 and HPI-2.
Knowledge: exclusion from the world of reading and communication, measured by
the percentage of adults who are illiterate.
Decent standard of living: In particular, overall economic provisioning.
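The three deprivation percentages are aggregated into the HPI as a power mean with alpha = 3, the form published in the UNDP Human Development Reports; the sample inputs below are made up:

```python
# HPI-1 as a power mean (alpha = 3) of three deprivation percentages:
# P1 = probability of not surviving to age 40, P2 = adult illiteracy rate,
# P3 = standard-of-living deprivation. Sample values are illustrative.

def hpi_1(p1, p2, p3, alpha=3):
    return ((p1**alpha + p2**alpha + p3**alpha) / 3) ** (1 / alpha)

print(f"HPI-1 = {hpi_1(15.0, 22.0, 18.0):.1f}")
```

Because alpha > 1, the power mean weights the worst of the three deprivations more heavily than a simple average would.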
Indicators
Health
Education
Living standards
UN LDC Conferences
UN
Bilateral Organizations
  o USAID
  o DfID
Non-Governmental Movements
  o NGOs
  o Campaigns
Absolute poverty
Absolute poverty refers to a set standard which is consistent over time and between countries.
First introduced in 1990, the dollar-a-day poverty line measured absolute poverty by the
standards of the world's poorest countries. The World Bank defined the international
poverty line as $1.25 a day for 2005 (equivalent to $1.00 a day in 1996 US prices); lines of
$1.25 and $2.50 per day have since been used. Absolute poverty, extreme poverty, or abject
poverty was originally defined by the United Nations in 1995 as "a condition characterized by
severe deprivation of basic human needs, including food, safe drinking water, sanitation
facilities, health, shelter, education and information. It depends not only on income but also on
access to services." The term 'absolute poverty', when used in this fashion, is usually
synonymous with 'extreme poverty'.
Relative poverty
Relative poverty views poverty as socially defined and dependent on social context,
hence relative poverty is a measure of income inequality. Usually, relative poverty is
measured as the percentage of population with income less than some fixed
proportion of median income. There are several other different income inequality
metrics, for example the Gini coefficient or the Theil Index.
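A minimal sketch of this measure, counting the share of a made-up income list that falls below 60% of the median (a commonly used fraction; both the fraction and the incomes are illustrative):

```python
# Relative poverty rate: share of people with income below a fixed
# fraction (here 60%) of the median income.
from statistics import median

def relative_poverty_rate(incomes, fraction=0.6):
    line = fraction * median(incomes)
    return sum(1 for i in incomes if i < line) / len(incomes)

incomes = [120, 250, 310, 400, 520, 640, 700, 880, 950, 1500]
print(f"{relative_poverty_rate(incomes):.0%}")  # -> 30%
```

Note that because the line moves with the median, uniformly doubling every income leaves the relative poverty rate unchanged; this is what makes it a measure of inequality rather than of absolute deprivation.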
Poverty reduction measures
Poverty reduction
Controlling overpopulation
Income grants
Economic freedoms
Financial services
Electronic waste
Audiovisual components, televisions, VCRs, stereo equipment, mobile phones, other
handheld devices, and computer components contain valuable elements and
substances suitable for reclamation, including lead, copper, and gold. One of the
major challenges is recycling the printed circuit boards from the electronic wastes.
Electronic waste or e-waste describes discarded electrical or electronic devices. Used electronics
which are destined for reuse, resale, salvage, recycling or disposal are also considered as e-waste.
Informal processing of electronic waste in developing countries may cause serious health and
pollution problems, as these countries have limited regulatory oversight of e-waste processing.
Electronic scrap components, such as CRTs, may contain contaminants such as lead, cadmium,
beryllium, or brominated flame retardants. Even in developed countries recycling and disposal of
e-waste may involve significant risk to workers and communities and great care must be taken to
avoid unsafe exposure in recycling operations and leaking of materials such as heavy metals
from landfills and incinerator ashes. Scrap industry and U.S. EPA officials agree that materials
should be managed with caution[1]
"Electronic waste" may be defined as discarded computers, office electronic equipment,
entertainment device electronics, mobile phones, television sets, and refrigerators. This includes
used electronics which are destined for reuse, resale, salvage, recycling, or disposal. Others
consider re-usables (working and repairable electronics) and secondary scrap (copper, steel,
plastic, etc.) to be "commodities", and reserve the term "waste" for residue or material which is dumped by
the buyer rather than recycled, including residue from reuse and recycling operations. Because
loads of surplus electronics are frequently commingled (good, recyclable, and non-recyclable),
several public policy advocates apply the term "e-waste" broadly to all surplus electronics.
Cathode ray tubes (CRTs) are considered one of the hardest types to recycle.[2]
CRTs have relatively high concentration of lead and phosphors (not to be confused with
phosphorus), both of which are necessary for the display. The United States Environmental
Protection Agency (EPA) includes discarded CRT monitors in its category of "hazardous
household waste"[3] but considers CRTs that have been set aside for testing to be commodities if
they are not discarded, speculatively accumulated, or left unprotected from weather and other
damage.
The EU and its member states operate a system via the European Waste Catalogue (EWC), a
European Council Directive which is interpreted into member state law. In the UK (an EU
member state), this takes the form of the List of Wastes Directive. However, the list (and EWC)
gives a broad definition (EWC Code 16 02 13*) of hazardous electronic wastes, requiring "waste
operators" to employ the Hazardous Waste Regulations (Annex 1A, Annex 1B) for refined
definition. Constituent materials in the waste also require assessment via the combination of
Annex II and Annex III, again allowing operators to further determine whether a waste is
hazardous.[4]
Debate continues over the distinction between "commodity" and "waste" electronics definitions.
Some exporters are accused of deliberately leaving difficult-to-recycle, obsolete, or non-repairable equipment mixed in loads of working equipment (though this may also come through
ignorance, or to avoid more costly treatment processes). Protectionists may broaden the
definition of "waste" electronics in order to protect domestic markets from working secondary
equipment.
The high value of the computer recycling subset of electronic waste (working and reusable
laptops, desktops, and components like RAM) can help pay the cost of transportation for a larger
number of worthless pieces than can be achieved with display devices, which have less (or
negative) scrap value. A 2011 report, "Ghana E-Waste Country Assessment",[5] found that of
215,000 tons of electronics imported to Ghana, 30% were brand new and 70% were used. Of the
used product, the study concluded that 15% was not reused and was scrapped or discarded. This
contrasts with published but uncredited claims that 80% of the imports into Ghana were being
burned in primitive conditions.
In the United States, an estimated 70% of heavy metals in landfills comes from discarded
electronics.[11][12]
While there is agreement that the number of discarded electronic devices is increasing, there is
considerable disagreement about the relative risk (compared to automobile scrap, for example),
and strong disagreement whether curtailing trade in used electronics will improve conditions, or
make them worse. According to an article in Motherboard, attempts to restrict the trade have
driven reputable companies out of the supply chain, with unintended consequences.[13]
One widely cited estimate holds that about 80% of the electronic waste directed to recycling in the U.S. does not get
recycled there at all, but is put on container ships and sent to countries such as China.[15][16][17][18]
This figure is disputed as an exaggeration by the EPA, the Institute of Scrap Recycling
Industries, and the World Reuse, Repair and Recycling Association.
Independent research by Arizona State University showed that 87-88% of imported used
computers did not have a higher value than the best value of the constituent materials they
contained, and that "the official trade in end-of-life computers is thus driven by reuse as opposed
to recycling".[19]
Guiyu is only one example of digital dumps; similar places can be found across the world,
notably in Asia and Africa. With amounts of e-waste growing rapidly each year, urgent solutions are
required. While the waste continues to flow into digital dumps like Guiyu there are measures that
can help reduce the flow of e-waste.[23]
A preventative step that major electronics firms should take is to remove the worst chemicals in
their products in order to make them safer and easier to recycle. It is important that all companies
take full responsibility for their products and, once they reach the end of their useful life, take
their goods back for re-use or safely recycle them.
Trade
Proponents of the trade say growth of internet access is a stronger correlation to trade than
poverty. Haiti is poor and closer to the port of New York than southeast Asia, but far more
electronic waste is exported from New York to Asia than to Haiti. Thousands of men, women,
and children are employed in reuse, refurbishing, repair, and remanufacturing, unsustainable
industries in decline in developed countries. Denying developing nations access to used
electronics may deny them sustainable employment, affordable products, and internet access, or
force them to deal with even less scrupulous suppliers. In a series of seven articles for The
Atlantic, Shanghai-based reporter Adam Minter describes many of these computer repair and
scrap separation activities as objectively sustainable.[25]
Opponents of the trade argue that developing countries utilize methods that are more harmful and
more wasteful. An expedient and prevalent method is simply to toss equipment onto an open fire,
in order to melt plastics and to burn away non-valuable metals. This releases carcinogens and
neurotoxins into the air, contributing to an acrid, lingering smog. These noxious fumes include
dioxins and furans.[26] Bonfire refuse can be disposed of quickly into drainage ditches or
waterways feeding the ocean or local water supplies.[18][27]
In June 2008, a container of electronic waste, destined from the Port of Oakland in the U.S. to
Sanshui District in mainland China, was intercepted in Hong Kong by Greenpeace.[28] Concern
over exports of electronic waste were raised in press reports in India,[29][30] Ghana,[31][32][33] Côte
d'Ivoire,[34] and Nigeria.[35]
- Airborne dioxins: one type found at 100 times levels previously measured
- Levels of carcinogens in duck ponds and rice paddies exceeded international standards for agricultural areas; cadmium, copper, nickel, and lead levels in rice paddies were above international standards
- Heavy metals found in road dust: lead over 300 times that of a control village's road dust, and copper over 100 times[37]
[Table fragment] Process used: breaking and removal of the yoke, then dumping.
Information security
E-waste presents a potential security threat to individuals and exporting countries. Hard drives
that are not properly erased before the computer is disposed of can be reopened, exposing
sensitive information. Credit card numbers, private financial data, account information, and
records of online transactions can be accessed by most willing individuals. Organized criminals
in Ghana commonly search the drives for information to use in local scams.[39]
Government contracts have been discovered on hard drives found in Agbogbloshie. Multimillion-dollar agreements from United States security institutions such as the Defense
Intelligence Agency (DIA), the Transportation Security Administration and Homeland Security
have all resurfaced in Agbogbloshie.[39][40]
E-waste management
1. Recycling
Computer monitors are typically packed into low stacks on wooden pallets for recycling and then
shrink-wrapped.[26]
See also: Computer recycling
Today, in all areas of the developed world, electronic waste recycling is a large and
rapidly consolidating business. People tend to forget that properly disposing of or reusing
electronics can help prevent health problems, create jobs, and reduce greenhouse-gas emissions.
[41]
Part of this evolution has involved greater diversion of electronic waste from energy-intensive
downcycling processes (e.g., conventional recycling), where equipment is reverted to a raw
material form. This recycling is done by sorting, dismantling, and recovery of valuable materials.
[42]
This diversion is achieved through reuse and refurbishing. The environmental and social
benefits of reuse include diminished demand for new products and virgin raw materials (with
their own environmental issues), and for the large quantities of pure water and electricity needed
in manufacturing; less packaging per unit; availability of technology to wider swaths of society due
to greater affordability of products; and diminished use of landfills.
One of the major challenges is recycling the printed circuit boards from the electronic wastes.
The circuit boards contain such precious metals as gold, silver, platinum, etc. and such base
metals as copper, iron, aluminum, etc. One way e-waste is processed is by melting circuit boards,
burning cable sheathing to recover copper wire and open- pit acid leaching for separating metals
of value.[43] Conventional method employed is mechanical shredding and separation but the
recycling efficiency is low. Alternative methods such as cryogenic decomposition have been
studied for printed circuit board recycling,[44] and some other methods are still under
investigation.
Certification means that recyclers continually meet specific high environmental standards and safely
manage used electronics. Once certified, the recycler is held to the particular standard by
continual oversight by the independent accredited certifying body. A certification
accreditation board accredits certifying bodies and oversees certifying bodies to ensure
that they meet specific responsibilities and are competent to audit and provide
certification. EPA supports and will continue to push for continuous improvement of
electronics recycling practices and standards.[45]
e-Cycle, LLC: e-Cycle, LLC is the first mobile buyback and recycling company in the
world to be e-Stewards, R2 and ISO 14001 certified. They work with the largest
organizations in the world, including 16 of the Fortune 20 and 356 of the Fortune 500, to
raise awareness on the global e-waste crisis.[46]
Best Buy: Best Buy accepts electronic items for recycling, even if they were not
purchased at Best Buy. For a full list of acceptable items and locations, visit Best Buy's
Recycling information page.[47]
Staples: Staples also accepts electronic items for recycling at no additional cost. They
also accept ink and printer toner cartridges. For a full list of acceptable items and
locations, visit the Staples Recycling information page.[48]
In the US, the Consumer Electronics Association (CEA) urges consumers to dispose
properly of end-of-life electronics through its recycling locator at
www.GreenerGadgets.org. This list only includes manufacturer and retailer programs that
use the strictest standards and third-party certified recycling locations, to provide
consumers assurance that their products will be recycled safely and responsibly. CEA
research has found that 58 percent of consumers know where to take their end-of-life
electronics, and the electronics industry would very much like to see that level of
awareness increase. Consumer electronics manufacturers and retailers sponsor or operate
more than 5,000 recycling locations nationwide and have vowed to recycle one billion
pounds annually by 2016,[49] a sharp increase from 300 million pounds industry recycled
in 2010.
The Sustainable Materials Management Electronic Challenge was created by the United
States Environmental Protection Agency (EPA). Participants of the Challenge are
manufacturers of electronics and electronic retailers. These companies collect end-of-life
(EOL) electronics at various locations and send them to a certified, third-party recycler.
Program participants are then able to publicly promote and report 100% responsible
recycling for their companies.[50]
The grassroots Silicon Valley Toxics Coalition (svtc.org) focuses on promoting human
health and addresses environmental justice problems resulting from toxins in
technologies.
Take Back My TV[54] is a project of The Electronics TakeBack Coalition and grades
television manufacturers to find out which are responsible and which are not.
The e-Waste Association of South Africa (eWASA)[55] has been instrumental in building a
network of e-waste recyclers and refurbishers in the country. It continues to drive the
sustainable, environmentally sound management of all e-waste in South Africa.
E-Cycling Central is a website from the Electronic Industry Alliance which allows you to
search for electronic recycling programs in your state. It lists different recyclers by state,
helping you find reuse, recycling, or donation programs across the country.[56]
StEP: Solving the E-Waste Problem. The website of StEP, an initiative founded by
various UN organizations to develop strategies to solve the e-waste problem, follows its
activities and programs.[42][58]
Processing techniques
Recycling the lead from batteries.
In many developed countries, electronic waste processing usually first involves dismantling the
equipment into various parts (metal frames, power supplies, circuit boards, plastics), often by
hand, but increasingly by automated shredding equipment. A typical example is the NADIN
electronic waste processing plant in Novi Iskar, Bulgaria, the largest facility of its kind in
Eastern Europe.[59][60] The advantage of this process is the human ability to recognize and save
working and repairable parts, including chips, transistors, RAM, etc. The disadvantage is that
labor is cheapest in countries with the lowest health and safety standards.
In an alternative bulk system,[61] a hopper conveys material for shredding into an unsophisticated
mechanical separator, with screening and granulating machines to separate constituent metal and
plastic fractions, which are sold to smelters or plastics recyclers. Such recycling machinery is
enclosed and employs a dust collection system. Some of the emissions are caught by scrubbers
and screens. Magnets, eddy currents, and trommel screens are employed to separate glass,
plastic, and ferrous and nonferrous metals, which can then be further separated at a smelter.
Leaded glass from CRTs is reused in car batteries, ammunition, and lead wheel weights,[26] or
sold to foundries as a fluxing agent in processing raw lead ore. Copper, gold, palladium, silver
and tin are valuable metals sold to smelters for recycling. Hazardous smoke and gases are
captured, contained and treated to mitigate environmental threat. These methods allow for safe
reclamation of all valuable computer construction materials.[18] Hewlett-Packard product
recycling solutions manager Renee St. Denis describes its process as: "We move them through
giant shredders about 30 feet tall and it shreds everything into pieces about the size of a quarter.
Once your disk drive is shredded into pieces about this big, it's hard to get the data off".[62]
An ideal electronic waste recycling plant combines dismantling for component recovery with
increased cost-effective processing of bulk electronic waste.
Reuse is an alternative option to recycling because it extends the lifespan of a device. Devices
still need eventual recycling, but by allowing others to purchase used electronics, recycling can
be postponed and value gained from device use.
Benefits of recycling
Recycling raw materials from end-of-life electronics is the most effective solution to the growing
e-waste problem. Most electronic devices contain a variety of materials, including metals that
can be recovered for future uses. By dismantling and providing reuse possibilities, intact natural
resources are conserved and air and water pollution caused by hazardous disposal is avoided.
Additionally, recycling reduces the amount of greenhouse gas emissions caused by the
manufacturing of new products.[63]
Benefits of recycling are extended when responsible recycling methods are used. In the U.S.,
responsible recycling aims to minimize the dangers to human health and the environment that
disposed and dismantled electronics can create. Responsible recycling ensures best management
practices of the electronics being recycled, worker health and safety, and consideration for the
environment locally and abroad.[64]
Hazardous
Recyclers in the street in São Paulo, Brazil, with old computers
Americium: The radioactive source in smoke alarms. It is known to be carcinogenic.
Mercury: Found in fluorescent tubes (numerous applications), tilt switches (mechanical
doorbells, thermostats),[66] and flat screen monitors. Health effects include sensory
impairment, dermatitis, memory loss, and muscle weakness. Exposure in-utero causes
fetal deficits in motor function, attention and verbal domains.[67] Environmental effects in
animals include death, reduced fertility, and slower growth and development.
Sulphur: Found in lead-acid batteries. Health effects include liver damage, kidney
damage, heart damage, eye and throat irritation. When released into the environment, it
can create sulphuric acid.
BFRs: Used as flame retardants in plastics in most electronics. Includes PBBs, PBDE,
DecaBDE, OctaBDE, PentaBDE. Health effects include impaired development of the
nervous system, thyroid problems, and liver problems. Environmental effects in animals
are similar to those in humans. PBBs were banned from 1973 to 1977; PCBs were banned
during the 1980s.
Lead: Solder, CRT monitor glass, lead-acid batteries, some formulations of PVC.[69] A
typical 15-inch cathode ray tube may contain 1.5 pounds of lead,[3] but other CRTs have
been estimated as having up to 8 pounds of lead.[26] Adverse effects of lead exposure
include impaired cognitive function, behavioral disturbances, attention deficits,
hyperactivity, conduct problems and lower IQ.[67]
Beryllium oxide: Filler in some thermal interface materials such as thermal grease used
on heatsinks for CPUs and power transistors,[70] magnetrons, X-ray-transparent ceramic
windows, heat transfer fins in vacuum tubes, and gas lasers.
There is also evidence of cytotoxic and genotoxic effects of some chemicals, which have been
shown to inhibit cell proliferation, cause cell membrane lesion, cause DNA single-strand breaks,
and elevate Reactive Oxygen Species (ROS) levels.[72]
DNA breaks can increase the likelihood of developing cancer (if the damage is to a tumor
suppressor gene).
DNA damages are a special problem in non-dividing or slowly dividing cells, where
unrepaired damages will tend to accumulate over time. In rapidly dividing cells, on the
other hand, unrepaired DNA damages that do not kill the cell by blocking replication
will tend to cause replication errors and thus mutation.
Elevated Reactive Oxygen Species (ROS) levels can cause damage to cell structures
(oxidative stress)[72]
Generally non-hazardous
An iMac G4 that has been repurposed into a lamp (photographed next to a Mac Classic and a flip
phone).
Aluminium: nearly all electronic goods using more than a few watts of power (heatsinks),
electrolytic capacitors.
Copper: copper wire, printed circuit board tracks, component leads.
Computer Recycling
Digger gold
eDay
Green computing
Polychlorinated biphenyls
Retrocomputing
China RoHS
e-Stewards
Soesterberg Principles
Organizations
Asset Disposal and Information Security Alliance (ADISA)[73]
Empa
IFixit
General:
Correlation
Screening curves
Meta analysis
In statistics, a meta-analysis refers to methods that focus on contrasting and combining results
from different studies, in the hope of identifying patterns among study results, sources of
disagreement among those results, or other interesting relationships that may come to light in the
context of multiple studies.[1] In its simplest form, meta-analysis is normally done by
identification of a common measure of effect size. A weighted average of that common measure
is the output of a meta-analysis. The weighting is related to sample sizes within the individual
studies. More generally there are other differences between the studies that need to be allowed
for, but the general aim of a meta-analysis is to more powerfully estimate the true effect size as
opposed to a less precise effect size derived in a single study under a given single set of
assumptions and conditions.
A meta-analysis therefore gives a thorough summary of several studies that have been done on
the same topic, and provides the reader with extensive information on whether an effect exists
and what size that effect has.
Meta-analysis can be thought of as "conducting research about research."
Meta-analyses are often, but not always, important components of a systematic
review procedure. For instance, a meta-analysis may be conducted on several clinical trials of a
medical treatment, in an effort to obtain a better understanding of how well the treatment works.
Here it is convenient to follow the terminology used by the Cochrane Collaboration,[2] and use
"meta-analysis" to refer to statistical methods of combining evidence, leaving other aspects of
'research synthesis' or 'evidence synthesis', such as combining information from qualitative
studies, for the more general context of systematic reviews.
Meta-analysis forms part of a framework called estimation statistics which relies on effect
sizes, confidence intervals and precision planning to guide data analysis, and is an alternative
to null hypothesis significance testing.
Pitfalls
A meta-analysis of several small studies does not predict the results of a single
large study.[9] Some have argued that a weakness of the method is that sources of
bias are not controlled by the method: a good meta-analysis of badly designed
studies will still result in bad statistics.[10] This would mean that only
methodologically sound studies should be included in a meta-analysis, a practice
called 'best evidence synthesis'.[10]
Other meta-analysts would include weaker studies, and add a study-level predictor
variable that reflects the methodological quality of the studies to examine the effect
of study quality on the effect size.[11] However, others have argued that a better
approach is to preserve information about the variance in the study sample, casting
as wide a net as possible, and that methodological selection criteria introduce
unwanted subjectivity, defeating the purpose of the approach.[12]
Steps of meta-analysis

5. Selection of the meta-regression statistical model, e.g. simple regression,
fixed-effect meta-regression, or random-effect meta-regression. Meta-regression is a
tool used in meta-analysis to examine the impact of moderator variables on
study effect size using regression-based techniques. Meta-regression is more
effective at this task than are standard regression techniques.
Meta-analysis combines the results of several studies.

What is meta-analysis?

Meta-analysis is the use of statistical methods to combine results of individual studies.
This allows us to make the best use of all the information we have gathered in our
systematic review by increasing the power of the analysis. By statistically combining the
results of several trials, we obtain a more precise estimate of the effect than any single
trial gives.

Example: two trials of a daycare intervention.

Trial 1:
            Retained   Total   Risk
Daycare        19        36    0.528
Control        13        19    0.684
Risk difference: -0.16

Trial 2:
            Retained   Total   Risk
Daycare         6        58    0.103
Control         7        65    0.108
Risk difference: -0.004

What would happen if we pooled all the children as if they were part of a single trial?

Pooled results:
            Retained   Total   Risk
Daycare        25        94    0.266
Control        20        84    0.238
Risk difference: +0.03   -- WRONG!

We don't add up patients across trials, and we don't use simple averages to calculate a
meta-analysis: pooling the patients naively reverses the direction of the effect seen in
each individual trial.
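The daycare example above can be checked with a short script. The inverse-variance weighting shown is one standard fixed-effect scheme, used here only as an illustration; the function names are my own.

```python
# Fixed-effect meta-analysis of two trials vs. naive pooling.
# Numbers follow the daycare example above.

def risk_difference(a, n1, b, n2):
    """Risk difference between treated (a/n1) and control (b/n2)."""
    return a / n1 - b / n2

def rd_variance(a, n1, b, n2):
    """Approximate variance of a risk difference."""
    p1, p2 = a / n1, b / n2
    return p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2

trials = [(19, 36, 13, 19),   # trial 1: daycare 19/36, control 13/19
          (6, 58, 7, 65)]     # trial 2: daycare 6/58,  control 7/65

rds = [risk_difference(*t) for t in trials]
weights = [1 / rd_variance(*t) for t in trials]

# Correct: inverse-variance weighted average of within-trial differences.
pooled_rd = sum(w * rd for w, rd in zip(weights, rds)) / sum(weights)

# WRONG: adding up patients across trials as if they were one study.
a = sum(t[0] for t in trials); n1 = sum(t[1] for t in trials)
b = sum(t[2] for t in trials); n2 = sum(t[3] for t in trials)
naive_rd = risk_difference(a, n1, b, n2)

print(f"per-trial RDs: {rds}")               # both negative
print(f"meta-analysis RD: {pooled_rd:.3f}")  # negative
print(f"naive pooled RD: {naive_rd:.3f}")    # positive -- sign reversed!
```

Because the two trials have very different baseline risks, naive pooling mixes the trials' case mixes and flips the sign, while the weighted average of within-trial differences does not.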
Economic growth is the increase in the market value of the goods and services produced by
an economy over time. It is conventionally measured as the percent rate of increase in real gross
domestic product, or real GDP.[1] Of more importance is the growth of the ratio of GDP to
population (GDP per capita), which is also called per capita income. An increase in per capita
income is referred to as intensive growth. GDP growth caused only by increases in population or
territory is called extensive growth.[2]
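The intensive/extensive distinction can be made concrete with a small calculation; the figures below are illustrative, not from the text.

```python
# Intensive vs. extensive growth: GDP per capita grows only to the
# extent that GDP outpaces population. Figures are illustrative.

def per_capita_growth(g_gdp, g_population):
    """Growth of GDP per capita implied by GDP and population growth."""
    return (1 + g_gdp) / (1 + g_population) - 1

# 5% GDP growth with 2% population growth: mostly intensive growth
print(f"{per_capita_growth(0.05, 0.02):.2%}")  # 2.94%
# 5% GDP growth with 5% population growth: purely extensive growth
print(f"{per_capita_growth(0.05, 0.05):.2%}")  # 0.00%
```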
Growth is usually calculated in real terms, i.e., inflation-adjusted terms, to eliminate the
distorting effect of inflation on the price of goods produced. In economics, "economic growth" or
"economic growth theory" typically refers to growth of potential output, i.e., production at "full
employment".
As an area of study, economic growth is generally distinguished from development economics.
The former is primarily the study of how countries can advance their economies. The latter is the
study of the economic aspects of the development process in low-income countries. See
also Economic development.
Since economic growth is measured as the annual percent change of gross domestic product
(GDP), it has all the advantages and drawbacks of that measure. For example, GDP only
measures the market economy, which tends to overstate growth during the change over from a
farming economy with household production.[3] An adjustment was made for food grown on and
consumed on farms, but no correction was made for other household production. Also, there is
no allowance in GDP calculations for depletion of natural resources.
Pros
Quality of life

Cons
Resource depletion
Environmental impact
Global warming
Inflation graphs
The connection between the level of production and the level of prices also holds for the rate of
change of production (that is, the rate of economic growth) and the rate of change of prices (that
is, the inflation rate).
Some simple arithmetic will clarify. Start with the famous equation of exchange, MV = Py, where
M is the money supply; V is the velocity of money, that is, the speed at which money circulates;
P is the price level; and y is the real output of the economy (real GDP). A version of this
equation, incidentally, was on the license plate of the late economist Milton Friedman, who
made a large part of his academic reputation by reviving, and giving evidence for, the role of
money growth in causing inflation.
If the growth rate of real GDP increases and the growth rates of M and V are held constant, the
growth rate of the price level must fall. But the growth rate of the price level is just another term
for the inflation rate; therefore, inflation must fall. An increase in the rate of economic growth
means more goods for money to chase, which puts downward pressure on the inflation rate. If
for example, the money supply grows at 7% a year, velocity is constant, and annual
economic growth is 3%, then inflation must be 4% (more exactly, 3.9%). If, however, economic
growth rises to 4%, inflation falls to 3% (actually, 2.9%).
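The arithmetic in this passage follows directly from the equation of exchange; the function below is a minimal sketch of that calculation, not a standard library routine.

```python
# Growth-rate form of the equation of exchange MV = Py.
# Taking growth rates: (1+gM)(1+gV) = (1+gP)(1+gy), so the inflation
# rate is gP = (1+gM)(1+gV)/(1+gy) - 1 (approximately gM + gV - gy).

def inflation_rate(g_money, g_velocity, g_real_gdp):
    """Inflation implied by money, velocity, and real-GDP growth."""
    return (1 + g_money) * (1 + g_velocity) / (1 + g_real_gdp) - 1

# 7% money growth, constant velocity, 3% real growth -> ~3.9% inflation
print(f"{inflation_rate(0.07, 0.0, 0.03):.1%}")  # 3.9%
# Real growth rises to 4% -> inflation falls to ~2.9%
print(f"{inflation_rate(0.07, 0.0, 0.04):.1%}")  # 2.9%
```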
The April numbers for the index of industrial production (IIP), released on Thursday,
brought some cheer on the growth front. The IIP grew by 3.4 per cent, its highest in
a long time. April, of course, was a month in which the entire country was deep in
electioneering. Therefore, some sort of stimulus from all the campaign spending
might have been reasonable to expect. The biggest beneficiary of this was the
category of "electrical machinery", which grew by over 66 per cent year on year,
reflecting all those campaign rallies, with their generators and audio equipment.
The other significant contributor to the growth in the overall index was electricity,
which grew by almost 12 per cent year on year, significantly higher than its growth
during 2013-14. Typically, a growth acceleration that relies heavily on one or two
sectoral surges does not have much staying power. It would require an across-the-board
show of resurgence to allow people to conclude that a sustainable recovery
was under way. That is clearly not happening yet. However, these numbers do
reinforce the perception that things are not getting worse as far as growth is
concerned.
Likewise, there was some room for relief on the inflation front. The consumer price
index, or CPI, numbers for May 2014 showed headline inflation declining slightly,
from 8.6 per cent in April to 8.3 per cent in May. The Central Statistical Office is now
separately reporting a sub-index labelled consumer food price index, or CFPI, which
provides some convenience to observers. The index itself, though, offers little cheer.
It came down modestly between April and May, largely explaining the decline in the
headline rate, but is still significantly above nine per cent. At a time when there are
concerns about the performance of the monsoon and the impact of that on food
prices, these numbers should be a major cause of worry for the government. Milk,
eggs, fish and meat, vegetables and fruit contributed to the persistence of food
inflation. But cereals are also kicking in, as they have been for the past couple of
years, and the government must use its large stocks of rice and wheat quickly to
dampen at least this source of food inflation. It would be unconscionable not to do
so when risks of a resurgence of inflation are high. The larger point on inflation,
though, is how stubborn the rate is despite sluggish growth and high interest rates.
The limitations of monetary policy are being repeatedly underscored.
Against this backdrop, the government's prioritisation of its fight against inflation is
an extremely important development. It has to move quickly from intent to action
on a variety of reforms, from procurement policy to subsidies and to investment in
rural infrastructure. Many of these will generate benefits only over the medium
term. So those expecting a growth stimulus from the Reserve Bank of India any time
soon are bound to be in for a disappointment. Even so, room for optimism should
come from the fact that this government does have the capacity to design and
execute long-term strategies with complete credibility. The simple equation that it
needs to keep in mind is that inflation will not subside unless food prices moderate
and growth will not recover unless inflation subsides.
If properly conducted, the Randomized Controlled Trial (RCT) is the best study-design to
examine the clinical efficacy of health interventions. An RCT is an experimental study where
individuals who are similar at the beginning are randomly allocated to two or more treatment
groups and the outcomes of the groups are compared after sufficient follow-up time. However,
an RCT may not always be feasible, because it may not be ethical or desirable to randomize
people or to expose them to certain interventions.
Observational studies provide weaker empirical evidence, because the allocation of factors is
not under control of the investigator, but just happen or are chosen (e.g. smoking). Of the
observational studies, cohort studies provide stronger evidence than case-control studies,
because in cohort studies factors are measured before the outcome, whereas in case-control
studies factors are measured after the outcome.
Most people find such a description of study types and levels of evidence too theoretical and not
appealing.
Last year I was challenged by the Society of History and ICT to talk about how doctors search
medical information (central theme = Google), and here it comes.
To explain the audience why it is important for clinicians to find the best evidence and how
methodological filters can be used to sift through the overwhelming amount of information in for
instance PubMed, I had to introduce RCTs and the levels of evidence. To explain it to them I
used an example that struck me when I first read about it.
I showed them the following slide :
And clarified: Beta-carotene is a vitamin found in carrots and many other vegetables, but you can
also buy it in pure form as pills. There is reason to believe that beta-carotene might help to
prevent lung cancer in cigarette smokers. How do you think you can find out whether
beta-carotene will have this effect?
Suppose you have two neighbors, both heavy smokers of the same age, both
males. The neighbor who doesn't eat many vegetables gets lung cancer, but the
neighbor who eats a lot of vegetables and is fond of carrots doesn't. Do you think
this provides good evidence that beta-carotene prevents lung cancer?
There is laughter in the room, so they don't believe in n=1 experiments/case
reports. (Still, how many people think smoking does not necessarily do any
harm because their chain-smoking father reached his nineties in good health?)
I show them the following slide with the lowest box only.
O.k. What about this study? I've a group of lung cancer patients,
who smoke(d) heavily. I ask them to fill in a questionnaire about their eating habits
in the past and take a blood sample, and I do the same with a similar group of
smokers without cancer (controls). Analysis shows that smokers developing lung
cancer eat far fewer beta-carotene-containing vegetables and have lower
blood levels of beta-carotene than the smokers not developing cancer. Does this
mean that beta-carotene is preventing lung cancer?
Humming in the audience, till one man says: perhaps some people don't remember
exactly what they eat, and then several people object that it is just an association
and you do not yet know whether beta-carotene really causes this. Right! I show
the box case-control studies.
Then consider this study design. I follow a large cohort of healthy heavy
smokers and look at their eating habits (including use of supplements) and take
regular blood samples. After a long follow-up some heavy smokers develop lung
cancer whereas others don't. Now it turns out that the group that did not develop
lung cancer had significantly more beta-carotene in their blood and ate larger
amounts of beta-carotene-containing food. What do you think about that then?
Now the room is a bit quiet; there is some hesitation. Then someone says: well, it is
more convincing, and finally the chair says: but it may still not be the carrots, but
something else in their food, or they may just have other healthy living habits
(including eating carrots). Cohort study appears on the slide. (What a perfect
audience!)
O.k., you're not convinced that these study designs give conclusive evidence.
How could we then establish that beta-carotene lowers the risk of lung cancer in
heavy smokers? Suppose you really wanted to know: how do you set up such a study?
Grinning. Someone says: by giving half of the smokers beta-carotene and the other
half nothing. Or a placebo, someone else says. Right! Randomized Controlled
Trial is on top of the slide. And there is not much room left for another box, so we
are there. I only add that the best way to do it is to do it double blinded.
Then I reveal that all this research has really been done. There have been numerous observational
studies (case-control as well as cohort studies) showing a consistent negative correlation between
the intake of beta-carotene and the development of lung cancer in heavy smokers. The same has
been shown for vitamin E.
Knowing that, I asked the public: Would you as a heavy smoker participate in a trial where
you are randomly assigned to one of the following groups: 1. beta-carotene, 2. vitamin E, 3. both
or 4. neither vitamin (placebo)?
The recruitment fails. Some people say they don't believe in supplements; others say that it
would be far more effective if smokers quit smoking (laughter). Just 2 individuals said they
would at least consider it. But they thought there was a snag in it, and they were right. Such
studies have been done, and did not give the expected positive results.
In the first large RCT (appr. 30,000 male smokers!), the ATBC Cancer Prevention Study,
beta-carotene actually increased the incidence of lung cancer by 18 percent and overall mortality
by 8 percent (although harmful effects faded after men stopped taking the pills). Similar results were
obtained in the CARET study, but not in a third RCT, the Physicians' Health Trial, the only
difference being that the latter trial was performed with both smokers and non-smokers.
It is now generally thought that cigarette smoke causes beta-carotene to break down into
detrimental products, a process that can be halted by other anti-oxidants (normally present in
food). Whether vitamins act positively (anti-oxidant) or negatively (pro-oxidant) depends very
much on the dose and the situation and on whether there is a shortage of such supplements or
not.
I found that this way of explaining study designs to well-educated laymen was very effective and
fun!
The take-home message is that no matter how reproducibly observational studies seem to
indicate a certain effect, better evidence is obtained by randomized controlled trials. It also shows
that scientists should be very prudent in translating observational findings directly into particular
lifestyle advice.
On the other hand, I wonder whether all hypotheses have to be tested in a costly RCT (the costs
for the ATBC trial were $46 million). Shouldn't there be very, very solid grounds to start a
prevention study with dietary supplements in healthy individuals? Aren't there any dangers?
Personally I think we should be very restrictive about these chemopreventive studies. Till now
most chemopreventive studies have not met the high expectations, anyway.
And what about coenzyme-Q and komkommerslank (a Dutch cucumber-based slimming
supplement)? Besides that I do not expect the evidence to be convincing, tiredness can obviously
best be combated by rest, and I already eat enough cucumbers. ;)
To be continued
Ecological studies are studies of risk-modifying factors on health or other outcomes based on
populations defined either geographically or temporally. Both risk-modifying factors and
outcomes are averaged for the populations in each geographical or temporal unit and then
compared using standard statistical methods.
Ecological studies have often found links between risk-modifying factors and health outcomes
well in advance of other epidemiological or laboratory approaches. Several examples are given
here.
The study by John Snow regarding a cholera outbreak in London is considered the first
ecological study to solve a health issue. He used a map of deaths from cholera to determine that
the source of the cholera was a pump on Broad Street. He had the pump handle removed in 1854
and people stopped dying there [Newsom, 2006]. It was only when Robert Koch discovered
bacteria years later that the mechanism of cholera transmission was understood.[1]
Dietary risk factors for cancer have also been studied using both geographical and temporal
ecological studies. Multi-country ecological studies of cancer incidence and mortality rates with
respect to national diets have shown that some dietary factors such as animal products (meat,
milk, fish and eggs), added sweeteners/sugar, and some fats appear to be risk factors for many
types of cancer, while cereals/grains and vegetable products as a whole appear to be risk
reduction factors for many types of cancer.[2][3] Temporal changes in Japan in the types of cancer
common in Western developed countries have been linked to the nutrition transition to the
Western diet.[4]
An important advancement in the understanding of risk-modifying factors for cancer was made
by examining maps of cancer mortality rates. The map of colon cancer mortality rates in the
United States was used by the brothers Cedric and Frank C. Garland to propose the hypothesis
that solar ultraviolet B (UVB) radiation, through vitamin D production, reduced the risk of
cancer (the UVB-vitamin D-cancer hypothesis).[5] Since then many ecological studies have been
performed relating the reduction of incidence or mortality rates of over 20 types of cancer to
lower solar UVB doses.[6]
Links between diet and Alzheimer's disease have been studied using both geographical and
temporal ecological studies. The first paper linking diet to risk of Alzheimer's disease was a
multicountry ecological study published in 1997.[7] It used prevalence of Alzheimer's disease in
11 countries along with dietary supply factors, finding that total fat and total energy (caloric)
supply were strongly correlated with prevalence, while fish and cereals/grains were inversely
correlated (i.e., protective). Diet is now considered an important risk-modifying factor for
Alzheimer's disease.[8] Recently it was reported that the rapid rise of Alzheimer's disease in
Japan between 1985 and 2007 was likely due to the nutrition transition from the traditional
Japanese diet to the Western diet.[9]
Another example of the use of temporal ecological studies relates to influenza. John Cannell and
associates hypothesized that the seasonality of influenza was largely driven by seasonal
variations in solar UVB doses and calcidiol levels.[10] A randomized controlled trial involving
Japanese school children found that taking 1000 IU per day of vitamin D3 reduced the risk of
type A influenza by two-thirds.[11]
Ecological studies are particularly useful for generating hypotheses since they can use existing
data sets and rapidly test the hypothesis. The advantages of the ecological studies include the
large number of people that can be included in the study and the large number of risk-modifying
factors that can be examined.
The term ecological fallacy means that the findings for the groups may not apply to individuals
in the group. However, this term also applies to observational studies and randomized controlled
trials. All epidemiological studies include some people who have health outcomes related to the
risk-modifying factors studied and some who do not. For example, genetic differences affect how
people respond to pharmaceutical drugs. Thus, concern about the ecological fallacy should not be
used to disparage ecological studies. The more important consideration is that ecological studies
should include as many known risk-modifying factors for any outcome as possible, adding others
if warranted. Then the results should be evaluated by other methods, using, for example, Hill's
criteria for causality in a biological system.
The ecological fallacy may occur when conclusions about individuals are drawn from
analyses conducted on grouped data. The nature of this type of analysis tends to
overestimate the degree of association between variables.
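The ecological fallacy can be demonstrated with a small script: a correlation computed on group averages can point the opposite way from the correlation among individuals. The data below are synthetic, chosen only to illustrate the reversal.

```python
# Ecological fallacy demonstration: group-level vs. individual-level
# correlation on synthetic (exposure, outcome) data.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Three groups; within each group, outcome falls as exposure rises.
groups = [[(-1, 2), (0, 0), (1, -2)],
          [(0, 3), (1, 1), (2, -1)],
          [(1, 4), (2, 2), (3, 0)]]

# Individual-level correlation (all nine people): negative.
xs = [x for g in groups for x, _ in g]
ys = [y for g in groups for _, y in g]
print(f"individual-level r = {pearson(xs, ys):.2f}")

# Group-level correlation of the averages: perfectly positive.
gx = [mean(x for x, _ in g) for g in groups]
gy = [mean(y for _, y in g) for g in groups]
print(f"group-level r = {pearson(gx, gy):.2f}")
```

An analysis of the three group averages alone would conclude the exposure is harmful, while every individual-level comparison within a group shows the opposite.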
Survival rate

Life table

Life tables are also used extensively in biology and epidemiology. The concept is
also of importance in product life cycle management.
A life table can be used to find the probability that a person of a given age will live at
least a given number of additional years, and can be used to discuss annuity issues from the
boomer viewpoint, where an increase in group size will have major effects.
For those in the age range covered by the chart, the "5 yr" curve indicates the group that will
reach beyond the life expectancy. This curve represents the need for support that
covers longevity requirements.
The "20 yr" and "25 yr" curves indicate the continuing diminishing of the life expectancy value
as "age" increases. The differences between the curves are very pronounced starting around the
age of 50 to 55 and ought to be used for planning based upon expectation models.
The "10 yr" and "15 yr" curves can be thought of as the trajectory that is followed by the life
expectancy curve related to those along the median which indicates that the age of 90 is not out
of the question.
A "life table" is a kind of bookkeeping system that ecologists often use to keep
track of stage-specific mortality in the populations they study. It is an especially
useful tool from a pest management standpoint: it is very useful to know when (and why)
a pest population suffers high mortality -- this is usually the time when it is most
vulnerable.
This number represents the maximum biotic potential of the species (i.e. the
greatest number of offspring that could be produced in one generation under ideal
conditions). The first line of the life table lists the main cause(s) of death, the
number dying, and the percent mortality during the egg stage. In this example,
an average of only 100 individuals survive the egg stage and become larvae.
The second line of the table lists the mortality experience of these 100 larvae: only
10 of them survive to become pupae (90% mortality of the larvae). The third
line of the table lists the mortality experience of the 10 pupae -- three-fifths die of
freezing. This leaves only 4 individuals alive in the adult stage to reproduce. If we
assume a 1:1 sex ratio, then there are 2 males and 2 females to start the next
generation.
If there is no mortality of these females, they will each lay an average of 200 eggs
to start the next generation. Thus there are two females in the cohort to replace
the one original female -- this population is DOUBLING in size each generation!!
In ecology, the symbol "R" (capital R) is known as the replacement rate. It is a
way to measure the change in reproductive capacity from generation to
generation. The value of "R" is simply the number of reproductive daughters that
each female produces over her lifetime:
R = Number of daughters / Number of mothers
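The ratio above can be checked with a short Python sketch of the worked example in the text (the starting cohort of 200 eggs is inferred from the 200 eggs each female is said to lay):

```python
# Life-cycle counts from the example above (cohort assumed to start at 200 eggs).
eggs = 200
larvae = 100   # survive the egg stage
pupae = 10     # survive the larval stage (90% larval mortality)
adults = 4     # three-fifths of the 10 pupae die of freezing

females = adults // 2   # 1:1 sex ratio gives 2 females
mothers = 1             # the cohort came from a single original female

R = females / mothers   # replacement rate: daughters per mother
print(R)                # -> 2.0, so the population doubles each generation
```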
Practice Problem:
A typical female of the bubble gum maggot (Bubblicious blowhardi Meyer) lays 250
eggs. On average, 32 of these eggs are infertile and 64 are killed by parasites.
Of
the survivors, 64 die as larvae due to habitat destruction (gum is cleared away by
the janitorial staff) and 87 die as pupae because the gum gets too hard.
Construct
a life table for this species and calculate a value for "R", the replacement rate
(assume a 1:1 sex ratio). Is this population increasing, decreasing, or remaining
stable?
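After working the problem by hand, one way to check the arithmetic is a small Python sketch (the stage labels and the `life_table` helper are mine; the counts come from the problem statement):

```python
def life_table(start, deaths_by_stage):
    """Return life-table rows (stage, causes, number dying, % mortality)
    and the number of survivors reaching the adult stage."""
    rows, alive = [], start
    for stage, losses in deaths_by_stage:
        dying = sum(n for _, n in losses)
        rows.append((stage, losses, dying, 100 * dying / alive))
        alive -= dying
    return rows, alive

# Bubble gum maggot: a typical female lays 250 eggs.
rows, adults = life_table(250, [
    ("egg",   [("infertile", 32), ("parasites", 64)]),
    ("larva", [("habitat destruction", 64)]),
    ("pupa",  [("gum hardens", 87)]),
])
females = adults / 2   # 1:1 sex ratio
R = females / 1        # daughters per original mother
print(adults, R)       # -> 3 1.5: R > 1, so the population is increasing
```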
Relative Risk
Y-Y analysis
Forest Graph
A forest plot (or blobbogram[1]) is a graphical display designed to illustrate the relative strength
of treatment effects in multiple quantitative scientific studies addressing the same question. It
was developed for use in medical research as a means of graphically representing a meta-analysis of the results of randomized controlled trials. In the last twenty years, similar meta-analytical techniques have been applied in observational studies (e.g. environmental
epidemiology), and forest plots are often used in presenting the results of such studies also.
Although forest plots can take several forms, they are commonly presented with two columns.
The left-hand column lists the names of the studies (frequently randomized controlled
trials or epidemiological studies), commonly in chronological order from the top downwards.
The right-hand column is a plot of the measure of effect (e.g. an odds ratio) for each of these
studies (often represented by a square) incorporating confidence intervals represented by
horizontal lines. The graph may be plotted on a natural logarithmic scale when using odds ratios
or other ratio-based effect measures, so that the confidence intervals are symmetrical about the
means from each study and to ensure undue emphasis is not given to odds ratios greater than 1
when compared to those less than 1. The area of each square is proportional to the study's weight
in the meta-analysis. The overall meta-analysed measure of effect is often represented on the plot
as a dashed vertical line. This meta-analysed measure of effect is commonly plotted as a
diamond, the lateral points of which indicate confidence intervals for this estimate.
A vertical line representing no effect is also plotted. If the confidence intervals for individual
studies overlap with this line, it demonstrates that at the given level of confidence their effect
sizes do not differ from no effect for the individual study. The same applies for the meta-analysed measure of effect: if the points of the diamond overlap the line of no effect, the overall
meta-analysed result cannot be said to differ from no effect at the given level of confidence.
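The weighting described above (square area proportional to a study's weight, with the pooled estimate drawn as a diamond) can be sketched numerically. The three studies below are hypothetical; the sketch uses standard inverse-variance weights on the log odds ratio scale, recovering each standard error from the reported 95% confidence interval:

```python
import math

# Hypothetical studies: (odds ratio, lower 95% CI, upper 95% CI)
studies = [(0.80, 0.60, 1.07), (0.70, 0.55, 0.89), (0.95, 0.70, 1.29)]

def pooled_odds_ratio(studies):
    num = den = 0.0
    for or_, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from CI width
        w = 1 / se ** 2          # inverse-variance weight (square area ~ w)
        num += w * math.log(or_)
        den += w
    log_or = num / den           # fixed-effect pooled log odds ratio
    se = math.sqrt(1 / den)
    return (math.exp(log_or),
            math.exp(log_or - 1.96 * se),
            math.exp(log_or + 1.96 * se))

or_, lo, hi = pooled_odds_ratio(studies)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

The pooled interval comes out narrower than any single study's, which is why the diamond is typically the tightest element on the plot.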
Forest plots date back to at least the 1970s. One plot is shown in a 1985 book about meta-analysis.[2]:252 The first use in print of the word "forest plot" may be in an abstract for a poster at
the Pittsburgh (USA) meeting of the Society for Clinical Trials in May 1996.[3] An informative
investigation on the origin of the notion "forest plot" was published in 2001.[4] The name refers to
the forest of lines produced. In September 1990, Richard Peto joked that the plot was named
after a breast cancer researcher called Pat Forrest and as a result the name has sometimes been
spelt "forrest plot".[4]
Effective Human Resource Management is the Center for Effective Organizations' (CEO) sixth
report of a fifteen-year study of HR management in today's organizations. The only long-term
analysis of its kind, this book compares the findings from CEO's earlier studies to new data
collected in 2010. Edward E. Lawler III and John W. Boudreau measure how HR management is
changing, paying particular attention to what creates a successful HR function, one that
contributes to a strategic partnership and overall organizational effectiveness. Moreover, the
book identifies best practices in areas such as the design of the HR organization and HR metrics.
It clearly points out how the HR function can and should change to meet the future demands of a
global and dynamic labor market.
For the first time, the study features comparisons between U.S.-based firms and companies in
China, Canada, Australia, the United Kingdom, and other European countries. With this new
analysis, organizations can measure their HR organization against a worldwide sample, assessing
their positioning in the global marketplace, while creating an international standard for HR
management.
Policy?
1. Politics: (1) The basic principles by which a government is guided.
(2) The declared objectives that a government or party seeks to achieve and
preserve in the interest of national community. See also public policy.
2. Insurance: The formal contract issued by an insurer that contains terms and
conditions of the insurance cover and serves as its legal evidence.
3. Management: The set of basic principles and associated guidelines, formulated
and enforced by the governing body of an organization, to direct and limit
its actions in pursuit of long-term goals. See also corporate policy.
A policy is a principle or protocol to guide decisions and achieve rational outcomes. A policy is a
statement of intent, and is implemented as a procedure[1] or protocol. Policies are generally
adopted by the board of directors or senior governance body within an organization, whereas procedures
or protocols would be developed and adopted by senior executive officers. Policies can assist in
both subjective and objective decision making. Policies that assist subjective decision making
usually help senior management weigh the relative merits of a
number of factors, and as a result are often hard to test objectively,
e.g. a work-life balance policy. In contrast, policies that assist objective decision making are
usually operational in nature and can be objectively tested, e.g. a password policy.[citation needed]
The term may apply to government, private sector organizations and groups, as well as
individuals. Presidential executive orders, corporate privacy policies, and parliamentary rules of
order are all examples of policy. Policy differs from rules or law. While law can compel or
prohibit behaviors (e.g. a law requiring the payment of taxes on income), policy merely guides
actions toward those that are most likely to achieve a desired outcome.[citation needed]
Policy or policy study may also refer to the process of making important organizational
decisions, including the identification of different alternatives such as programs or spending
priorities, and choosing among them on the basis of the impact they will have. Policies can be
understood as political, management, financial, and administrative mechanisms arranged to reach
explicit goals. In public corporate finance, a critical accounting policy is a policy for a
firm/company or an industry which is considered to have a notably high subjective element, and
that has a material impact on the financial statements.[citation needed]
Micro-planning
Micro Planning: A tool to empower people
Micro-planning is a comprehensive planning approach wherein the community
prepares development plans themselves considering the priority needs of the
village. Inclusion and participation of all sections of the community is central to
micro-planning, thus making it an integral component of decentralized governance.
For village development to be sustainable and participatory, it is imperative that the
community owns its village development plans and that the community ensures
that development is in consonance with its needs.
However, from our experience of working with the panchayats in Mewat, we realized
that this bottom-up planning approach was never followed in making village
development plans in the past. Many times, the elected panchayat
representatives had not even heard of this term.
Acknowledging the significance of micro-planning for village development, IRRAD's
Capacity Building Center organized a week-long training workshop on micro-planning for elected representatives of panchayats and IRRAD's staff working with
panchayats in the villages. The aim of this workshop was to educate the
participants about the concept of micro-planning and its importance in
decentralized governance system.
As part of this workshop, the participants were given a detailed explanation of the
concept, the why and how of micro-planning, and the difference between micro-planning
and the traditional planning approaches. To give practical exposure to the
participants, a three-day micro-planning exercise was carried out in Untaka Village
of Nuh Block, Mewat. The objective of this exposure was to show participants how
micro-planning is carried out, what challenges may arise during its conduct, and
how to prepare the village development plan following the micro-planning approach.
The village sarpanch led the process from the front, and the entire village and
panchayat members participated wholeheartedly in this exercise. Participatory Rural
Appraisal (PRA) technique which incorporates the knowledge and opinions of rural
people in the planning and management of development projects and programmes
was used to gather information and prioritize development works. Resource, social
and development issue prioritization maps were prepared by the villagers after
analyzing the collected information. The villagers further identified the problems
associated with village development and recommended solutions for specific
problems while working in groups. The planning process went on for two days
subsequent to which a Gram Sabha (village committee), the first power unit in the
panchayati raj system, was organized on the third day. About 250 people
participated in the Gram Sabha including 65 women and 185 men. The sarpanch
shared the final village analysis and development plans with the villagers present in
Gram Sabha and asked for their inputs and suggestions. After incorporating the
suggestions received, a plan was prepared and submitted to Block Development
Office for final approval and sanction of funds.
"After the successful conduct of Gram Sabha in our village, we now need to build
synergies with the district level departments to implement the plans drawn in the
meeting," said the satisfied sarpanch of Untka after experiencing the conduct of the
micro-planning exercise in their village.
Macro-planning
Macro Planning and Policy Division (MPPD) is responsible for setting macroeconomic
policies and strategies in consultation with key agencies, such as the Reserve Bank
of Fiji (RBF) and Ministry of Finance. The Division analyzes and forecasts movements
in macroeconomic indicators and accounts, including Gross Domestic Product (GDP),
Exports and Imports, and the Balance of Payments (BOP). Macroeconomic
forecasting involves making assessments on production data in the various sectors
of the economy for compilation of quarterly forecasts of the National Accounts.
The Division is also involved in undertaking assessments and research on
acceptable) for various levels because of students' changing needs. And finally, my
old school kindly granted the teachers one day a month of paid prep time/new
student intake, where we'd decide on the themes that we'd be using for our class to
ensure there wasn't too much overlap with other classes. We did have a set
curriculum in terms of grammar points, but themes and supplementary materials
were up to us. Doing a bit of planning before the semester started ensured that we
stayed organized and kept the students' interest throughout the semester.
Another benefit of macro lesson planning is that teachers can share the overall
goals of the course with their students on the first day, and they can reiterate those
goals as the semester progresses. Students often lose sight of the big picture and
get discouraged with their English level, and having clear goals that they see
themselves reaching helps prevent this.
2. Micro lesson planning
The term micro comes from the Greek mikros meaning small, little. In the ELT
industry, micro lesson planning refers to planning one specific lesson based on
one target (e.g., the simple past). It involves choosing a topic or grammar point and
building a full lesson to complement it. A typical lesson plan involves a warm-up
activity, which introduces the topic or elicits the grammar naturally, followed by an
explanation/lesson of the point to be covered. Next, teachers devise a few activities
that allow students to practice the target point, preferably through a mix of skills
(speaking, listening, reading, writing). Finally, teachers should plan a brief wrap-up
activity that brings the lesson to a close. This could be as simple as planning to ask
students to share their answers from the final activity as a class.
Some benefits of micro lesson planning include classes that run smoothly and
students who don't get bored. Lesson planning ensures that you'll be prepared for
every class and that you'll have a variety of activities on hand for whatever
situation may arise (well, the majority of situations; I'm sure we've all had those
classes where an activity we thought would rock ends up as an epic fail).
For more information on micro lesson planning, check out How to Make a Lesson
Plan, a blog post I wrote last year, where I emphasized the importance of planning
fun, interesting fillers so that students stay engaged. I also provided links in that
post to many examples of activities you can use for warm-ups, main activities,
fillers, homework, etc. There is also a good template for a typical lesson plan
at docstoc.
Can anyone think of other benefits of macro or micro lesson planning? Does anyone
have a different definition of these terms? Let us know below.
Happy planning!
Tanya
Macro is big and micro is very small. Macro economics depends on big projects like
steel mills, big industrial units, national highway projects etc. which aim at
producing good and services at a very large quantity and serve a wide area. These
take time to produce results because of the size of the projects. Micro economics is
on a small scale, limited to specific area or location and purpose and normally
produce results in a much shorter time. The best example of micro economics is the
Grameen Bank of Bangladesh started by Md. Yunus, who also got international
awards for his initiative. The concept of micro credit was pioneered by the
Bangladesh-based Grameen Bank, which broke away from the age-old belief that
low income amounted to low savings and low investment. It started what came to
be a system which followed this sequence: low income, credit, investment, more
income, more credit, more investment, more income. It is owned by the poor
borrowers of the bank who are mostly women. Borrowers of Grameen Bank at
present own 95 per cent of the total equity and the balance 5% by the Govt. Micro
economics was also one of the policies of Mahatma Gandhi, who wanted planning to
start from the local village level and spread through the country; unfortunately this has not
happened and even now the results of development have not percolated to the
common man, particularly in the rural areas.
lasts 40 or 50 minutes. Of course, there is no clear cut difference between these two
types of planning. Micro planning should be based on macro planning, and macro
planning is apt to be modified as lessons go on.
Read through the following items and decide which belong to macro planning and
which belong to micro planning. Some could belong to both. When you have
finished, compare your decisions with your partner.
Thinking and sharing activity
TASK 2
In a sense, macro planning is not writing lesson plans for specific lessons but rather
familiarizing oneself with the context in which language teaching is taking place. Macro
planning involves the following:
1) Knowing about the course: The teacher should get to know which language areas
and language skills should be taught or practised in the course, what materials and
teaching aids are available, and what methods and techniques can be used.
2) Knowing about the institution: The teacher should get to know the institution's
arrangements regarding time, length, frequency of lessons, physical conditions of
classrooms, and exam requirements.
3) Knowing about the learners: The teacher should acquire information about the
students' age range, sex ratio, social background, motivation, attitudes, interests,
learning needs and other individual factors.
4) Knowing about the syllabus: The teacher should be clear about the purposes,
documentation approach
technologies (tools).
Definition of P&P program
A policies and procedures (P&P) program refers to the context in which an
organization formally plans, designs, implements, manages, and uses P&P
communication in support of performance-based learning and on-going reference.
Description of components
The five components of a formal P&P program are described below:
The following information is provided as a template to assist learners in drafting a policy. However, it
must be remembered that policies are written to address specific issues, and therefore the
structure and components of a policy will differ considerably according to the need. A policy
document may be many pages or it may be a single page with just a few simple statements.
The following template is drawn from an Information Bulletin "Policy and Planning" by Sport
and Recreation Victoria. It is suggested that there are nine components. The example given at the
right of the table should not be construed as a complete policy
Components:
1. A statement of what clients ...
2. Underpinning principles and philosophies
3. Broad service objectives, which explain the areas in ...
4. Strategies to achieve each objective
5. Specific actions to be taken
6. Desired outcomes of specific actions
7. Performance indicators (brief example: a reduction in injuries)
8. Management and operational rules
9. A review program
Health financing systems are critical for reaching universal health coverage. Health
financing levers to move closer to universal health coverage lie in three interrelated
areas:
to use funds more effectively and efficiently. MSH believes in integrated approaches to health
finance and works with sets of policy levers that will produce the best outcomes, including
government regulations, budgeting mechanisms, insurance payment methods and provider and
patient incentives.
Healthcare Financing
The Need
More than 120 million people in Pakistan do not have health coverage. This pushes
the poor into debt and an inevitable medical-poverty trap. Two-thirds of households
surveyed over the last three years reported that they were affected by one or more
health problems and went into debt to finance the cost. Many who cannot afford
treatment, particularly women, forego medical treatment altogether.
The Solution
To fill this vacuum in healthcare financing, the American Pakistan Foundation has
partnered with Heartfile Health Financing to support their groundbreaking work in
healthcare reform and health financing for the poor in Pakistan.
Success Stories
At the age of 15, Majid was the only breadwinner of his family. After being hit by a
tractor he was out of a job with a starving family and no money for an operation.
Through Heartfile he was able to get the treatment he needed and stay out of debt.
Majid
The Process
Heartfile is contacted via text or email when a person of dire financial need is
admitted into one of a list of preregistered hospitals.
Within 24 hours a volunteer is mobilized to see the patient, assess poverty status
and the eligibility by running their identity card information through the national
database authority.
Once eligibility is established, the patient is sent funds within 72 hours through a
cash transfer to their service provider.
Donors to Heartfile have full control over their donation through a web database
that allows them to decide where they want their funds to go. They are connected
to the people they support through a personal donation page that allows them to
see exactly how their funds were used.
Hill's Criteria of Causation
Hill's Criteria of Causation outline the minimal
conditions needed to establish a causal relationship
between two items. These criteria were originally
between two items. These criteria were originally
presented by Austin Bradford Hill (1897-1991), a British
medical statistician, as a way of determining the causal
link between a specific factor (e.g., cigarette smoking)
and a disease (such as emphysema or lung
cancer). Hill's Criteria form the basis of modern
epidemiological research, which attempts to establish
scientifically valid causal connections between
potential disease agents and the many diseases that
afflict humankind. While the criteria established by
Hill (and elaborated by others) were developed as a
research tool in the medical sciences, they are equally
applicable to sociology, anthropology and other social
sciences, which attempt to establish causal
relationships among social phenomena. Indeed, the
principles set forth by Hill form the basis of evaluation
used in all modern scientific research. While it is quite
easy to claim that agent "A" (e.g., smoking) causes
disease "B" (lung cancer), it is quite another matter to
Temporal Relationship:
Strength:
Dose-Response Relationship:
Consistency:
Plausibility:
Experiment:
Specificity:
Coherence:
The Bradford Hill criteria, otherwise known as Hill's criteria for causation, are
a group of minimal conditions necessary to provide adequate evidence of a causal
relationship between a presumed cause and an observed effect, established by
the English epidemiologist Sir Austin Bradford Hill (1897–1991) in 1965.
The list of the criteria is as follows:
1. Strength: A small association does not mean that there is not a causal
effect, though the larger the association, the more likely that it is causal.[1]
2. Temporality: The effect has to occur after the cause (and if there is an
expected delay between the cause and expected effect, then the effect must
occur after that delay).[1]
based on significant underestimations of the long-term health
Since the disaster, India has experienced rapid industrialization. While some
positive changes in government policy and behavior of a few industries have
taken place, major threats to the environment from rapid and poorly
regulated industrial growth remain. Widespread environmental degradation
with significant adverse human health consequences continues to occur
throughout India.
December 2004 marked the twentieth anniversary of the massive toxic gas
leak from Union Carbide Corporation's chemical plant in Bhopal in the state of
Madhya Pradesh, India that killed more than 3,800 people. This review
examines the health effects of exposure to the disaster, the legal response,
the lessons learned and whether or not these are put into practice in India in
History
In the 1970s, the Indian government initiated policies to encourage
foreign companies to invest in local industry. Union Carbide Corporation
(UCC) was asked to build a plant for the manufacture of Sevin, a pesticide
commonly used throughout Asia. As part of the deal, India's government
insisted that a significant percentage of the investment come from local
shareholders. The government itself had a 22% stake in the company's
subsidiary, Union Carbide India Limited (UCIL) [1]. The company built the
plant in Bhopal because of its central location and access to transport
infrastructure. The specific site within the city was zoned for light industrial
and commercial use, not for hazardous industry. The plant was initially
approved only for formulation of pesticides from component chemicals, such
as MIC imported from the parent company, in relatively small quantities.
However, pressure from competition in the chemical industry led UCIL to
implement "backward integration": the manufacture of raw materials and
intermediate products for formulation of the final product within one facility.
This was inherently a more sophisticated and hazardous process [2].
In 1984, the plant was manufacturing Sevin at one quarter of its production capacity due
to decreased demand for pesticides. Widespread crop failures and famine on the
subcontinent in the 1980s led to increased indebtedness and decreased capital for farmers
to invest in pesticides. Local managers were directed to close the plant and prepare it for
sale in July 1984 due to decreased profitability [3]. When no ready buyer was found,
UCIL made plans to dismantle key production units of the facility for shipment to another
developing country. In the meantime, the facility continued to operate with safety
equipment and procedures far below the standards found in its sister plant in Institute,
West Virginia. The local government was aware of safety problems but was reticent to
place heavy industrial safety and pollution control burdens on the struggling industry
because it feared the economic effects of the loss of such a large employer [3].
At 11:00 PM on December 2, 1984, while most of the one million residents of Bhopal
slept, an operator at the plant noticed a small leak of methyl isocyanate (MIC) gas and
increasing pressure inside a storage tank. The vent-gas scrubber, a safety device designed
to neutralize toxic discharge from the MIC system, had been turned off three weeks prior
[3]. Apparently a faulty valve had allowed one ton of water for cleaning internal pipes to
mix with forty tons of MIC [1]. A 30 ton refrigeration unit that normally served as a
safety component to cool the MIC storage tank had been drained of its coolant for use in
another part of the plant [3]. Pressure and heat from the vigorous exothermic reaction in
the tank continued to build. The gas flare safety system was out of action and had been
for three months. At around 1:00 AM on December 3, loud rumbling reverberated around
the plant as a safety valve gave way sending a plume of MIC gas into the early morning
air [4]. Within hours, the streets of Bhopal were littered with human corpses and the
carcasses of buffaloes, cows, dogs and birds. An estimated 3,800 people died
immediately, mostly in the poor slum colony adjacent to the UCC plant [1,5]. Local
hospitals were soon overwhelmed with the injured, a crisis further compounded by a lack
of knowledge of exactly what gas was involved and what its effects were [1]. It became
one of the worst chemical disasters in history and the name Bhopal became synonymous
with industrial catastrophe [5].
Estimates of the number of people killed in the first few days by the plume
from the UCC plant run as high as 10,000, with 15,000 to 20,000 premature
deaths reportedly occurring in the subsequent two decades [6]. The Indian
government reported that more than half a million people were exposed to
the gas [7]. Several epidemiological studies conducted soon after the
accident showed significant morbidity and increased mortality in the exposed
population. Table 1.1 summarizes early and late effects on health.
These data are likely to under-represent the true extent of adverse health effects.
Aftermath
Immediately after the disaster, UCC began attempts to dissociate itself from
responsibility for the gas leak. Its principal tactic was to shift culpability to
UCIL, stating the plant was wholly built and operated by the Indian subsidiary.
It also fabricated scenarios involving sabotage by previously unknown Sikh
extremist groups and disgruntled employees but this theory was impugned
by numerous independent sources [1].
The toxic plume had barely cleared when, on December 7, the first multi-billion-dollar lawsuit was filed by an American attorney in a U.S. court. This
was the beginning of years of legal machinations in which the ethical
implications of the tragedy and its effect on Bhopal's people were largely
ignored. In March 1985, the Indian government enacted the Bhopal Gas Leak
Disaster Act as a way of ensuring that claims arising from the accident would
be dealt with speedily and equitably. The Act made the government the sole
representative of the victims in legal proceedings both within and outside
India. Eventually all cases were taken out of the U.S. legal system under the
ruling of the presiding American judge and placed entirely under Indian
jurisdiction, much to the detriment of the injured parties.
and insured for in 1984 [10]. By the end of October 2003, according to the Bhopal Gas
Tragedy Relief and Rehabilitation Department, compensation had been awarded to
554,895 people for injuries received and 15,310 survivors of those killed. The average
amount to families of the dead was $2,200 [9].
At every turn, UCC has attempted to manipulate, obfuscate and withhold scientific data
to the detriment of victims. Even to this date, the company has not stated exactly what
was in the toxic cloud that enveloped the city on that December night [8]. When MIC is
exposed to 200°C heat, it forms degraded MIC that contains the more deadly hydrogen
cyanide (HCN). There was clear evidence that the storage tank temperature did reach this
level in the disaster. The cherry-red color of blood and viscera of some victims were
characteristic of acute cyanide poisoning [11]. Moreover, many responded well to
administration of sodium thiosulfate, an effective therapy for cyanide poisoning but not
MIC exposure [11]. UCC initially recommended use of sodium thiosulfate but later withdrew
the statement, prompting suggestions that it attempted to cover up evidence of HCN
in the gas leak. The presence of HCN was vigorously denied by UCC and was a point of
conjecture among researchers [8,11-13].
As further insult, UCC discontinued operation at its Bhopal plant following the disaster
but failed to clean up the industrial site completely. The plant continues to leak several
toxic chemicals and heavy metals that have found their way into local aquifers.
Dangerously contaminated water has now been added to the legacy left by the company
for the people of Bhopal [1,14].
Lessons learned
The events in Bhopal revealed that expanding industrialization in developing countries
without concurrent evolution in safety regulations could have catastrophic consequences
[4]. The disaster demonstrated that seemingly local problems of industrial hazards and
toxic contamination are often tied to global market dynamics. UCC's Sevin production
plant was built in Madhya Pradesh not to avoid environmental regulations in the U.S. but
to exploit the large and growing Indian pesticide market. However, the manner in which
the project was executed suggests the existence of a double standard for multinational
corporations operating in developing countries [1]. Enforceable uniform international
operating regulations for hazardous industries would have provided a mechanism for
significantly improved safety in Bhopal. Even without enforcement, international
standards could provide norms for measuring performance of individual companies
engaged in hazardous activities such as the manufacture of pesticides and other toxic
chemicals in India [15]. National governments and international agencies should focus on
widely applicable techniques for corporate responsibility and accident prevention as
much in the developing world context as in advanced industrial nations [16]. Specifically,
prevention should include risk reduction in plant location and design and safety
legislation [17].
Local governments clearly cannot allow industrial facilities to be situated within urban
areas, regardless of the evolution of land use over time. Industry and government need to
bring proper financial support to local communities so they can provide medical and
other necessary services to reduce morbidity, mortality and material loss in the case of
industrial accidents.
Public health infrastructure was very weak in Bhopal in 1984. Tap water was available
for only a few hours a day and was of very poor quality. With no functioning sewage
system, untreated human waste was dumped into two nearby lakes, one a source of
drinking water. The city had four major hospitals but there was a shortage of physicians
and hospital beds. There was also no mass casualty emergency response system in place
in the city [3]. Existing public health infrastructure needs to be taken into account when
hazardous industries choose sites for manufacturing plants. Future management of
industrial development requires that appropriate resources be devoted to advance
planning before any disaster occurs [18]. Communities that do not possess infrastructure
and technical expertise to respond adequately to such industrial accidents should not be
chosen as sites for hazardous industry.
Since 1984
Following the events of December 3, 1984, environmental awareness and activism in India
increased significantly. The Environment Protection Act was passed in 1986, creating the
Ministry of Environment and Forests (MoEF) and strengthening India's commitment to
the environment. Under the new act, the MoEF was given overall responsibility for
administering and enforcing environmental laws and policies. It established the
importance of integrating environmental strategies into all industrial development plans
for the country. However, despite greater government commitment to protect public
health, forests, and wildlife, policies geared to developing the country's economy have
taken precedence in the last 20 years [19].
India has undergone tremendous economic growth in the two decades since the Bhopal
disaster. Gross domestic product (GDP) per capita has increased from $1,000 in 1984 to
$2,900 in 2004 and it continues to grow at a rate of over 8% per year [20]. Rapid
industrial development has contributed greatly to economic growth but there has been
significant cost in environmental degradation and increased public health risks. Since
abatement efforts consume a large portion of India's GDP, MoEF faces an uphill battle as
it tries to fulfill its mandate of reducing industrial pollution [19]. Heavy reliance on coal-fired power plants and poor enforcement of vehicle emission laws have resulted from
economic concerns taking precedence over environmental protection [19].
With the industrial growth since 1984, there has been an increase in small scale industries
(SSIs) that are clustered about major urban areas in India. There are generally less
stringent rules for the treatment of waste produced by SSIs due to less waste generation
within each individual industry. This has allowed SSIs to dispose of untreated wastewater
into drainage systems that flow directly into rivers. New Delhi's Yamuna River is
illustrative. Dangerously high levels of heavy metals such as lead, cobalt, cadmium,
chrome, nickel and zinc have been detected in this river which is a major supply of
potable water to India's capital thus posing a potential health risk to the people living
there and areas downstream [21].
Land pollution due to uncontrolled disposal of industrial solid and hazardous waste is
also a problem throughout India. With rapid industrialization, the generation of industrial
solid and hazardous waste has increased appreciably and the environmental impact is
significant [22].
India relaxed its controls on foreign investment in order to accede to WTO rules and
thereby attract an increasing flow of capital. In the process, a number of environmental
regulations are being rolled back as growing foreign investments continue to roll in. The
Indian experience is comparable to that of a number of developing countries that are
experiencing the environmental impacts of structural adjustment. Exploitation and export
of natural resources has accelerated on the subcontinent. Prohibitions against locating
industrial facilities in ecologically sensitive zones have been eliminated while
conservation zones are being stripped of their status so that pesticide, cement and bauxite
mines can be built [23].
The Bhopal disaster could have changed the nature of the chemical industry and caused a
reexamination of the necessity to produce such potentially harmful products in the first
place. However, the lessons of the acute and chronic effects of exposure to pesticides and
their precursors in Bhopal have not changed agricultural practice patterns. An estimated 3
million people per year suffer the consequences of pesticide poisoning, with most exposure occurring in the developing world.
UCC has shrunk to one sixth of its size since the Bhopal disaster in an effort to
restructure and divest itself. By doing so, the company avoided a hostile takeover, placed
a significant portion of UCC's assets out of legal reach of the victims and gave its
shareholders and top executives bountiful profits [1]. The company still operates under the
ownership of Dow Chemical and still states on its website that the Bhopal disaster was
"caused by deliberate sabotage" [28].
Some positive changes were seen following the Bhopal disaster. The British chemical
company, ICI, whose Indian subsidiary manufactured pesticides, increased attention to
health, safety and environmental issues following the events of December 1984. The
subsidiary now spends 30-40% of its capital expenditures on environment-related
projects. However, it still does not adhere to standards as strict as those of its parent company
in the UK [24].
The US chemical giant DuPont learned its lesson of Bhopal in a different way. The
company attempted for a decade to export a nylon plant from Richmond, VA to Goa,
India. In its early negotiations with the Indian government, DuPont had sought and won a
remarkable clause in its investment agreement that absolved it from all liabilities in case
of an accident. But the people of Goa were not willing to acquiesce while an important
ecological site was cleared for a heavy polluting industry. After nearly a decade of
protesting by Goa's residents, DuPont was forced to scuttle plans there. Chennai was the
next proposed site for the plastics plant. The state government there made significantly
greater demand on DuPont for concessions on public health and environmental
protection. Eventually, these plans were also aborted due to what the company called
"financial concerns" [29].
Conclusion
The tragedy of Bhopal continues to be a warning sign at once ignored and heeded.
Bhopal and its aftermath were a warning that the path to industrialization, for developing
countries in general and India in particular, is fraught with human, environmental and
economic perils. Some moves by the Indian government, including the formation of the
MoEF, have offered a measure of protection of the public's health from the harmful
practices of local and multinational heavy industry, and grassroots organizations have
also played a part in opposing rampant development. The Indian economy is growing at
a tremendous rate but at significant cost in environmental health and public safety as
large and small companies throughout the subcontinent continue to pollute. Far more
remains to be done for public health in the context of industrialization to show that the
lessons of the countless thousands dead in Bhopal have truly been heeded.
Thar disaster
Arid areas of the world are always prone to famines whenever the average annual rainfall is less than
250mm. The Thar region of Sindh, which has climatic and ecological conditions similar to the Indian state
of Rajasthan's portion of Thar, faces severe droughts for two to three years in every 10-year cycle.
These areas have been witnessing famine-like conditions for ages. The average annual rainfall is less
than 250mm, which is usually uneven and erratic.
The northern sandy area, known as Achro Thar in districts Sanghar, Khairpur, Sukkur and Ghotki,
receives average annual precipitation of less than seven inches and therefore is termed hyper-arid. But
even then it is not a desert like the Sahara, as it has reasonable vegetative cover.
Some patches of sand in Sanghar district are barren and are termed dhain. The total geographical area of
Thar in Sindh is 48,000 square kilometres, out of which 25,000sq km is in Tharparkar and Umerkot
districts.
It is a potentially productive and vegetative sandy area which is turning into desert due to overexploitation. It still has sufficient tree cover and shrubs, however, and if properly protected and managed
it will remain productive.
A realistic approach is needed to make Thar less prone to disaster.
The sandy arid area with high wind velocity has indeed a fragile ecosystem. If its vegetative cover is
overexploited and marginal lands on the slopes of sandy dunes are brought under cultivation, the area will
turn into a barren desert.
The sandy arid area of Cholistan in Bahawalpur division of Punjab is also similar to Thar in its
geomorphology, but the desertification process is less as wind velocity is not as high as that of Thar, and
much of its area has been developed and brought under canal irrigation.
The sandy arid area of Rajasthan has been properly managed by the Indian government since 1953,
when the Central Arid Zone Research Institute was established. Unfortunately, no concerted efforts were
made to conserve and develop the potential of our portion of Thar through a scientific and institutional
approach and no government research and development institute has done any appreciable work in the
area.
The Sindh Arid Zone Development Authority, formed in 1985, was assigned multidisciplinary duties of all
the line departments of the Sindh government and due to its major role in civil works and services, it could
not carry out any sustainable development to ameliorate the suffering of Thar's people. The main
emphasis of Sazda should have been aimed at income-generating activities through livestock
development, silvopastoral development and desertification control. But the resources were wasted in civil
works.
Lack of honesty and commitment among the functionaries was also a major cause of its failure. Sazda
was wound up in 2003 after the implementation of the devolved local bodies district government system.
Yet despite Sazda's questionable role, reasonable achievements were made in the groundwater
investigation sector. The credit for this goes to the late Abdul Khalique Sheikh, chief hydro-geologist of Sazda,
whose dedicated efforts made it possible to explore groundwater sources beyond the depth of some 300
metres.
The area is now accessible through metalled roads connecting all taluka headquarters and main
localities, while communication of information has also become faster and easier. This is why the
electronic media has been able to cover the Thar area.
The media has done well to highlight the sufferings of the people of Thar. But droughts and famines are
not new for the people of the region. Old-timers in the area are witness to the misery and death wrought
by the droughts and famines of 1951, 1968, 1969, 1987 and 1988. Similarly, the destruction and death in
the famines of 1899 and 1939 are also remembered in Thar and Rajasthan, when there was not a single
drop of rain throughout the years.
Those at the helm of affairs must adopt a realistic approach to make this area less prone to famines,
otherwise in the present global village and the age of free media the reputation of the government will be
jeopardized.
From my own experience of the area, I would suggest that the government should establish an
independent and autonomous institute of research and development to carry out research in agroforestry, range and livestock development, saline water use, fisheries, desertification control, ecology,
saline groundwater use for crops, rainwater harvesting and salt-resistant plants and grasses as lasting
solutions to Thar's problems.
All government departments should continue their usual activities in the area with better funding by the
state. The agriculture, forest and livestock departments should strengthen their extension services and
carry the benefits of the research results to farmers. Nothing is impossible if there is an honest approach
and dedication to find solutions to problems.
Each phase is broken down into steps. One strength of this model is
the explicit recognition of interventions that focus on the multiple
levels of individuals, organizations, and governments/communities.
Strategic Planning
Strategic planning implies that the planning process itself is significant. In concept, it is usually done by higher-level decision makers within
Subsistence agriculture
Subsistence agriculture is self-sufficiency farming in which the farmers
focus on growing enough food to feed themselves and their families.
The typical subsistence farm has a range of crops and animals needed
by the family to feed and clothe themselves during the year. Planting
decisions are made principally with an eye toward what the family will
need during the coming year, and secondarily toward market prices.
Demographic dividend
Demographic dividend refers to a period, usually 20 to 30 years,
when fertility rates fall due to significant reductions in child and infant
mortality rates. This fall is often accompanied by an extension in
average life expectancy that increases the portion of the population
that is in the working age-group. This cuts spending on dependents
and spurs economic growth. As women and families realize that fewer
children will die during infancy or childhood, they will begin to have
fewer children to reach their desired number of offspring, further
reducing the proportion of non-productive dependents.
However, this drop in fertility rates is not immediate. The lag between falling mortality and falling fertility produces a
generational population bulge that surges through society. For a period of time this
bulge is a burden on society and increases the dependency ratio. Eventually this group
begins to enter the productive labor force. With fertility rates continuing to fall and older
generations having shorter life expectancies, the dependency ratio declines dramatically.
This demographic shift initiates the demographic dividend: with fewer young
dependents, due to declining fertility and child mortality rates, fewer older
dependents, due to the shorter life expectancies of older generations, and the largest
segment of the population of productive working age, the dependency ratio declines
dramatically. Combined with effective public
policies this time period of the demographic dividend can help facilitate more rapid
economic growth and puts less strain on families. This is also a time period when many
women enter the labor force for the first time. In many countries this time period has led
to increasingly smaller families, rising income, and rising life expectancy rates.
However, dramatic social changes can also occur during this time, such as increasing
divorce rates, postponement of marriage, and single-person households.
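The dependency ratio that drives this dynamic is simply the dependent population (children plus the elderly) divided by the working-age population. A minimal sketch in Python; the age cut-offs and the population counts are illustrative assumptions, not figures from the text:

```python
def dependency_ratio(under_15, ages_15_64, over_64):
    """Total dependency ratio: dependents per 100 working-age people."""
    return 100 * (under_15 + over_64) / ages_15_64

# Illustrative numbers (millions): a 'bulge' cohort moving into working age
before = dependency_ratio(under_15=40, ages_15_64=50, over_64=5)  # bulge still dependent
after = dependency_ratio(under_15=25, ages_15_64=70, over_64=5)   # bulge now working
# The ratio falls sharply as the bulge enters the labor force
assert after < before
```

With these hypothetical numbers the ratio drops from 90 to about 43 dependents per 100 workers, the kind of decline the text describes as opening the dividend.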
Demographic dividend
The freeing up of resources for a country's economic development and
the future prosperity of its populace as it switches from an agrarian to
an industrial economy. In the initial stages of this transition, fertility
rates fall, leading to a labor force that is temporarily growing faster
than the population dependent on it. All else being equal, per capita
income grows more rapidly during this time too. This window is typically said to open when the proportion of
children and youth under 15 years falls below 30 per cent and the
proportion of people 65 years and older is still below 15 per cent.
During the course of the demographic dividend there are four mechanisms through which
the benefits are delivered. The first is the increased labor supply. However, the magnitude
of this benefit appears to be dependent on the ability of the economy to absorb and
productively employ the extra workers rather than be a pure demographic gift. The
second mechanism is the increase in savings. As the number of dependents decreases
individuals can save more. This increase in national savings rates increases the stock of
capital in developing countries already facing shortages of capital and leads to higher
Expansive pyramid
A population pyramid that is very wide at the base, indicating high
birth and death rates.
Constrictive pyramid
A population pyramid that narrows at the bottom. The population is
generally older on average, as the country has long life expectancy, a
low death rate, but also a low birth rate. This pyramid is becoming
Demographic transition
Demographic transition (DT) refers to the transition from high birth and death rates to
low birth and death rates as a country develops from a pre-industrial to an industrialized
economic system. This is typically demonstrated through a demographic transition model
(DTM). The theory is based on an interpretation of demographic history developed in
1929 by the American demographer Warren Thompson (1887-1973). Thompson
observed changes, or transitions, in birth and death rates in industrialized societies over
the previous 200 years. Most developed countries are in stage 3 or 4 of the model; the
majority of developing countries have reached stage 2 or stage 3. The major (relative)
exceptions are some poor countries, mainly in sub-Saharan Africa and some Middle
Eastern countries, which are poor or affected by government policy or civil strife, notably
Pakistan, Palestinian Territories, Yemen and Afghanistan.
Although this model predicts ever decreasing fertility rates, recent data show that beyond
a certain level of development fertility rates increase again.
A correlation matching the demographic transition has been established; however, it is
not certain whether industrialization and higher incomes lead to lower population or if
lower populations lead to industrialization and higher incomes. In countries that are now
developed this demographic transition began in the 18th century and continues today. In
less developed countries, this demographic transition started later and is still at an earlier
stage
During stage four there are both low birth rates and low death
rates. Birth rates may drop to well below replacement level as
has happened in countries like Germany, Italy, and Japan, leading
to a shrinking population, a threat to many industries that rely on
population growth. As the large group born during stage two
ages, it creates an economic burden on the shrinking working
population. Death rates may remain consistently low or increase
slightly due to increases in lifestyle diseases associated with low exercise
levels and high obesity, and due to aging populations in developed
countries. By the late 20th century, birth rates and death rates in
developed countries leveled off at lower rates.
Demographic gift
Demographic gift is a term used to describe the initially favorable
effect of falling fertility rates on the age dependency ratio, the fraction
of children and aged as compared to that of the working population.
Fertility declines in a population, combined with falls in mortality rates (the so-called
"demographic transition"), produce a typical sequence of effects on age structures. The
child-dependency ratio (the ratio of children to those who support them) at first rises
somewhat due to more children surviving, then falls sharply as average family size
decreases. Later, the overall population ages rapidly, as currently seen in many developed
and rapidly developing nations. Between these two periods is a long interval of favorable
age distributions, known as the "demographic gift," with low and falling total dependency
ratios (including both children and aged persons).
The term was used by David Bloom and Jeffrey Williamson to signify the economic
benefits of a high ratio of working-age to dependent population during the demographic
transition. Bloom et al. introduced the term demographic dividend to emphasize the idea
that the effect is not automatic but must be earned by the presence of suitable economic
policies that allow a relatively large workforce to be productively employed.
The term has also been used by the Middle East Youth Initiative to describe the current
youth bulge in the Middle East and North Africa in which 15-29 year olds comprise
around 30% of the total population. It is believed that, through education and
employment, the current youth population in the Middle East could fuel economic growth
and development as young East Asians were able to for the Asian Tigers.
FECUNDITY
Fecundity, derived from the word fecund, generally refers to the ability to reproduce. In
demography, fecundity is the potential reproductive capacity of an individual or
population. In biology, the definition is more equivalent to fertility, or the actual
reproductive rate of an organism or population, measured by the number of gametes
(eggs), seed set, or asexual propagules. This difference is because demography considers
human fecundity which is often intentionally limited, while biology assumes that
organisms do not limit fertility. Fecundity is under both genetic and environmental
control, and is the major measure of fitness. Fecundation is another term for fertilization.
Super fecundity refers to an organism's ability to store another organism's sperm (after
copulation) and fertilize its own eggs from that store after a period of time, essentially
making it appear as though fertilization occurred without sperm (i.e. parthenogenesis).
Fecundity is important and well studied in the field of population ecology. Fecundity can
increase or decrease in a population according to current conditions and certain regulating
factors. For instance, in times of hardship for a population, such as a lack of food,
juvenile and eventually adult fecundity has been shown to decrease (i.e. due to a lack of
resources the juvenile individuals are unable to reproduce, eventually the adults will run
out of resources and reproduction will cease).
Fecundity has also been shown to increase in ungulates with relation to warmer weather.
In sexual evolutionary biology, especially in sexual selection, fecundity is contrasted to
reproductivity.
In obstetrics and gynecology, fecundability is the probability of being pregnant in a single
menstrual cycle, and fecundity is the probability of achieving a live birth within a single
cycle.
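Under a simplifying model (an assumption for illustration, not something the text states), a constant per-cycle fecundability implies a geometric distribution for time to pregnancy, so the probability of conceiving within n cycles can be sketched as:

```python
def prob_pregnant_within(fecundability, n_cycles):
    """P(at least one conception within n cycles), assuming independent
    cycles with a constant per-cycle probability (a simplifying model)."""
    return 1 - (1 - fecundability) ** n_cycles

# e.g. with an illustrative per-cycle fecundability of 0.2,
# the chance of conceiving within a year of cycles is about 93%
p_year = prob_pregnant_within(0.2, 12)
```

Real fecundability varies by cycle and declines with age, so this constant-rate model is only a first approximation.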
FERTILITY
Fertility is the natural capability to produce offspring. As a measure, "fertility rate" is the
number of offspring born per mating pair, individual or population. Fertility differs from
fecundity, which is defined as the potential for reproduction (influenced by gamete
production, fertilization and carrying a pregnancy to term). A lack of fertility is infertility,
while a lack of fecundity would be called sterility.
Human fertility depends on factors of nutrition, sexual behavior, culture, instinct,
endocrinology, timing, economics, way of life, and emotions.
Infertility
Infertility primarily refers to the biological inability of a person to contribute to
conception. Infertility may also refer to the state of a woman who is unable to
carry a pregnancy to full term. There are many biological causes of infertility,
including some that medical intervention can treat
Period measures
Cohort measures
Net Reproduction Rate (NRR) - the NRR starts with the GRR
and adds the realistic assumption that some of the women will
die before age 49; therefore they will not be alive to bear some of
the potential babies that were counted in the GRR. NRR is always
lower than GRR, but in countries where mortality is very low,
almost all the baby girls grow up to be potential mothers, and the
NRR is practically the same as GRR. In countries with high mortality, by contrast, the NRR falls well below the GRR.
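The relationship between GRR and NRR described above can be sketched numerically; the age-specific rates and survival probabilities below are illustrative assumptions, not published data:

```python
def nrr_from_age_rates(daughter_rates, survival_probs):
    """Net Reproduction Rate: expected daughters per woman, allowing for
    mortality. daughter_rates[i] is the age-specific rate of daughter
    births; survival_probs[i] is the probability a girl survives to that
    age group (illustrative model)."""
    return sum(r * p for r, p in zip(daughter_rates, survival_probs))

# Hypothetical age-specific daughter-birth rates for five age groups
rates = [0.05, 0.30, 0.40, 0.25, 0.10]
grr = sum(rates)  # GRR ignores mortality: every girl is assumed to survive

# With no mortality (all survival probabilities 1) the NRR equals the GRR;
# with any mortality the NRR is strictly lower
nrr = nrr_from_age_rates(rates, [0.98, 0.96, 0.95, 0.93, 0.90])
assert nrr < grr
```

This mirrors the text's point: when survival probabilities are near 1, NRR is practically the GRR; as mortality rises, the gap widens.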
The Pearson correlation coefficient indicates the strength of a linear relationship between two variables,
but its value generally does not completely characterize their relationship [16]. In particular, if the conditional
mean of Y given X, denoted E(Y|X), is not linear in X, the correlation coefficient will not fully determine the
form of E(Y|X).
The image on the right shows scatterplots of Anscombe's quartet, a set of four different pairs of variables
created by Francis Anscombe.[17] The four y variables have the same mean (7.5), variance (4.12),
correlation (0.816) and regression line (y = 3 + 0.5x). However, as can be seen on the plots, the
distribution of the variables is very different. The first one (top left) seems to be distributed normally, and
corresponds to what one would expect when considering two variables correlated and following the
assumption of normality. The second one (top right) is not distributed normally; while an obvious
relationship between the two variables can be observed, it is not linear. In this case the Pearson
correlation coefficient does not indicate that there is an exact functional relationship: only the extent to
which that relationship can be approximated by a linear relationship. In the third case (bottom left), the
linear relationship is perfect, except for one outlier which exerts enough influence to lower the correlation
coefficient from 1 to 0.816. Finally, the fourth example (bottom right) shows another example when one
outlier is enough to produce a high correlation coefficient, even though the relationship between the two
variables is not linear.
These examples indicate that the correlation coefficient, as a summary statistic, cannot replace visual
examination of the data. Note that the examples are sometimes said to demonstrate that the Pearson
correlation assumes that the data follow a normal distribution, but this is not correct.[4]
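Anscombe's point can be verified directly from the published quartet values: all four data sets share nearly identical summary statistics despite their very different shapes. A short Python sketch:

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / ((len(x) - 1) * stdev(x) * stdev(y))

# Anscombe's quartet (Anscombe, 1973)
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
x4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]
ys = [
    [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68],
    [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74],
    [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73],
    [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89],
]
for x, y in zip([x123, x123, x123, x4], ys):
    # every data set gives r close to 0.816 and mean(y) close to 7.50,
    # even though the scatterplots look nothing alike
    assert abs(pearson_r(x, y) - 0.816) < 0.003
    assert abs(mean(y) - 7.50) < 0.01
```

Plotting the four pairs, rather than summarizing them, is what reveals the differences, which is exactly the lesson the text draws.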
[Figure caption] Several sets of (x, y) points, with the Pearson correlation coefficient of x and y for each set. Note that the correlation reflects the noisiness and direction of a linear relationship (top row), but not the slope of that relationship (middle row), nor many aspects of nonlinear relationships (bottom row). N.B.: the figure in the center has a slope of 0, but in that case the correlation coefficient is undefined because the variance of Y is zero.
The French paradox is a catchphrase, first used in the late 1980s, which summarizes
the apparently paradoxical epidemiological observation that French people have a
relatively low incidence of coronary heart disease (CHD), while having a diet relatively
rich in saturated fats,[1] in apparent contradiction to the widely held belief that the high
consumption of such fats is a risk factor for CHD. The paradox is that if the thesis linking
saturated fats to CHD is valid, the French ought to have a higher rate of CHD than
comparable countries where the per capita consumption of such fats is lower.
The French paradox implies two important possibilities. The first is that the hypothesis
linking saturated fats to CHD is not completely valid (or, at the extreme, is entirely
invalid). The second possibility is that the link between saturated fats and CHD is valid,
but that some additional factor in the French diet or lifestyle mitigates this risk,
presumably with the implication that if this factor can be identified, it can be incorporated
into the diet and lifestyle of other countries, with the same lifesaving implications
observed in France. Both possibilities have generated considerable media interest, as
well as some scientific research.
The Israeli paradox is a catchphrase, first used in 1996, to summarize the apparently
paradoxical epidemiological observation that Israeli Jews have a relatively
high incidence of coronary heart disease (CHD), despite having a diet relatively low
in saturated fats, in apparent contradiction to the widely held belief that the high
consumption of such fats is a risk factor for CHD. The paradox is that if the thesis linking
saturated fats to CHD is valid, the Israelis ought to have a lower rate of CHD than
comparable countries where the per capita consumption of such fats is higher.
The observation of Israel's paradoxically high rate of CHD is one of a number of
paradoxical outcomes for which a literature now exists, regarding the thesis that a high
consumption of saturated fats ought to lead to an increase in CHD incidence, and that a
lower consumption ought to lead to the reverse outcome. The most famous of these
paradoxes is known as the "French paradox": France enjoys a relatively low incidence
of CHD despite a high per-capita consumption of saturated fat.
The Israeli paradox implies two important possibilities. The first is that the hypothesis
linking saturated fats to CHD is not completely valid (or, at the extreme, is entirely
invalid). The second possibility is that the link between saturated fats and CHD is valid,
but that some additional factor in the Israeli diet or lifestyle creates another CHD risk,
presumably with the implication that if this factor can be identified, it can be isolated in
the diet and / or lifestyle of other countries, thereby allowing both the Israelis, and
others, to avoid that particular risk.
Stroke Belt or Stroke Alley is a name given to a region in the southeastern United
States that has been recognized by public health authorities for having an unusually
high incidence of stroke and other forms of cardiovascular disease. It is typically defined
as an 11-state region consisting of Alabama, Arkansas, Georgia, Indiana, Kentucky, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, and Virginia.
Although many possible causes for the high stroke incidence have been investigated,
the reasons for the phenomenon have not been determined.
Simpson's Paradox disappears when causal relations are brought into consideration.
Many statisticians believe that the mainstream public should be informed of the counterintuitive results in statistics such as Simpson's paradox [4][5]. For example, low birth-weight children born to smoking mothers have a lower infant mortality rate than the low birth-weight children of non-smokers; this is an instance of Simpson's paradox.
Examples
Kidney stone treatment[edit]
This is a real-life example from a medical study[10] comparing the success rates of two
treatments for kidney stones.[11]
The table below shows the success rates and numbers of treatments involving
both small and large kidney stones, where Treatment A includes all open surgical procedures
and Treatment B is percutaneous nephrolithotomy (which involves only a small puncture). The
numbers in parentheses indicate the number of success cases over the total size of the group.
(For example, 93% equals 81 divided by 87.)

                 Treatment A                Treatment B
Small stones     Group 1: 93% (81/87)       Group 2: 87% (234/270)
Large stones     Group 3: 73% (192/263)     Group 4: 69% (55/80)
Both             78% (273/350)              83% (289/350)
The paradoxical conclusion is that treatment A is more effective when used on small stones, and
also when used on large stones, yet treatment B is more effective when considering both sizes
at the same time. In this example the "lurking" variable (or confounding variable) of the stone
size was not previously known to be important until its effects were included.
Which treatment is considered better is determined by an inequality between two ratios
(successes/total). The reversal of the inequality between the ratios, which creates Simpson's
paradox, happens because two effects occur together:
1. The sizes of the groups, which are combined when the lurking variable is ignored, are
very different. Doctors tend to give the severe cases (large stones) the better treatment
(A), and the milder cases (small stones) the inferior treatment (B). Therefore, the totals
are dominated by groups 3 and 2, and not by the two much smaller groups 1 and 4.
2. The lurking variable has a large effect on the ratios, i.e. the success rate is more strongly
influenced by the severity of the case than by the choice of treatment. Therefore, the
group of patients with large stones using treatment A (group 3) does worse than the
group with small stones, even if the latter used the inferior treatment B (group 2).
Based on these effects, the paradoxical result can be rephrased more intuitively as follows:
Treatment A, when applied to a patient population consisting mainly of patients with large
stones, is less successful than Treatment B applied to a patient population consisting mainly of
patients with small stones.
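The reversal can be checked directly from the counts in the table. A minimal Python sketch (the data structure and names are our own, only the counts come from the study):

```python
# Success counts from the kidney stone study quoted above.
# Keys: (treatment, stone size) -> (successes, total)
data = {
    ("A", "small"): (81, 87),
    ("A", "large"): (192, 263),
    ("B", "small"): (234, 270),
    ("B", "large"): (55, 80),
}

def rate(successes, total):
    return successes / total

# Per-stratum comparison: Treatment A wins in both strata.
for size in ("small", "large"):
    a = rate(*data[("A", size)])
    b = rate(*data[("B", size)])
    assert a > b, size

def pooled(treatment):
    """Success rate when both stone sizes are lumped together."""
    s = sum(data[(treatment, sz)][0] for sz in ("small", "large"))
    t = sum(data[(treatment, sz)][1] for sz in ("small", "large"))
    return s / t

# Aggregate comparison: Treatment B wins overall -- the reversal.
print(round(pooled("A"), 2))  # 0.78 (273/350)
print(round(pooled("B"), 2))  # 0.83 (289/350)
```

The assertions in the loop confirm that the stratified and pooled comparisons genuinely point in opposite directions, which is exactly the paradox described in the text.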
Maternal mortality
Maternal morbidity
Zoonotic diseases; anthrax, workers
Rabies injection schedule
Poliomyelitis; border restrictions; SIAD technique, two doses per week;
not to go again as it's a hard-to-reach area
Polio strategy
TB; DOT
Disability indicators
Epidemiology
Surveillance
Screening
Management graphs
Bar graphs
Component bar chart
1.0
Preamble
The Ethical Review Committee (ERC) shall be concerned with ethical issues involved
in proposals for research on human subjects. The terms of reference have taken into
consideration recommendations of a sub-committee of the Bio-ethics Group of
Faculty of Health Sciences (FHS) and particularly the report of the Royal College of
Physicians of London (1996) titled "Guidelines on the Practice of Ethics Committees
in Medical Research Involving Human Subjects". The terms have been derived
mainly from principles and generalised for application to both bio-medical and social
science research. A deliberate attempt has been made to avoid detail, with the
expectation that experience will determine the need for revision and elaboration.
2.0
Terms of Reference
2.1
All research projects involving human subjects, whether as individuals or
communities, including the use of foetal material, embryos and tissues from the
recently dead, undertaken or supported by Aga Khan University (AKU) faculty, staff
or students, wherever conducted, shall be reviewed by the ERC before a study can
begin.
2.2
The duration of approval for a study shall be limited. Any change in conditions
that could affect the rights of subjects during a study must be approved for the
study to continue.
2.3
The Committee shall provide written guidelines on ethical considerations for
research involving humans and review them at least once in two years. The
guidelines shall be based on but not restricted to the following principles:
Respect for an individual's capacity to make reasoned decisions, and protection of those
whose capacity is impaired or who are in some way dependent or vulnerable.
The risks of the proposed research in respect of expected benefits, the research design
and competence of the investigators having been assessed.
A proposal must state the purpose of the research; the reasons for using humans as the
subjects; the nature and degree of all known risks to the subjects; and the means for
ensuring that the subjects' consent will be adequately informed and voluntary.
The subjects of research should be clearly aware of the nature of the research and their
position in respect of it.
Consent must be valid. The participants must be sufficiently informed and have
adequate time to decide without pressure. Consent must be obtained from the subjects,
preferably written.
Subjects must be able to easily withdraw from a research protocol without giving
reasons and without incurring any penalty or alteration in their relationship with providers
of services.
Further guidance should be obtained from publications, such as the World Medical
Association Declaration of Helsinki: Recommendations Guiding Medical Doctors in
Biomedical Research Involving Human Subjects (1989), consultation with experts and
other sources, according to need.
Specify procedures, including periodic appraisal of the progress of approved projects, for
ensuring that subjects of research are protected from harm, their confidentiality is
maintained, and their rights are respected.
2.4
The Committee shall report annually or more frequently, as necessary, to the
URC and the Chief Academic Officer.
2.5
Method of Working
2.5.1
The Committee will need substantial administrative and secretarial
assistance from the Research Office.
2.5.2
Authors of research proposals may be invited to attend meetings of the ERC
when their study is being reviewed.
2.5.3
Some business may be conducted by mail, but reasonably frequent
meetings are essential to allow a committee ethos to develop.
2.5.4
A quorum should include a layperson, a research oriented member who is
broadly familiar with the proposed field of study, and a member of each gender.
2.5.5
2.5.6
The Chairman's approval may be given for studies that pose no ethical
problems. Such approvals shall be reported to the next meeting of ERC for
ratification.
2.5.7
Investigators are entitled to have an adverse decision reviewed, and to
make written and/or oral representations to the Committee.
2.5.8
The Committee may withdraw approval if it is not satisfied with the conduct
of the investigation.
2.5.9
The ERC should approve amendments to protocols that affect human
subjects.
2.5.10
2.5.11
Members of the Committee should declare their own interests, for
example when testing of the product of a company of which the member is an
advisor.
2.5.12
2.5.13
Members should not be paid; however, if an honorarium is necessary, it
should be modest.
2.6
Membership
2.6.1
Membership shall include representation from researchers, the professional
disciplines currently found in the University, discerning public, legal expertise, and
both genders.
2.6.2
The Chief Academic Officer shall appoint the chair and members, in
consultation with the URC.
2.6.3
The tenure of AKU members on ERC shall be 3 years which may be renewed
for one term.
2.6.4
The tenure of External Members on ERC would be of one year which may be
renewed for another term of one year at the discretion of the Chair ERC based on
the attendance and quality of input by the member. Further extension would require
URC's approval.
2.6.5
For faculty, it is essential to select persons who are not members of other
research related committees.
2.6.6
The total number of members should not exceed 15, including the chair. In
addition, there will be one adjunct member from AKU-East Africa and one from ISMC
who would be called upon as required by the Chair to give input for projects falling
into their respective areas of expertise.
Terms of reference
Terms of reference show how the scope will be defined, developed, and verified. They should
also provide a documented basis for making future decisions and for confirming or developing a
common understanding of the scope among stakeholders. In order to meet these criteria, success
factors, risks, and restraints should be fundamental elements. Terms of reference are
particularly important for project proposals, and creating them in detail is critical because
they define the objectives, scope, and deliverables of the work.
Fertility is the natural capability to produce offspring. As a measure, "fertility rate" is the
number of offspring born per mating pair, individual or population. Fertility differs from
fecundity, which is defined as the potential for reproduction (influenced by gamete production,
fertilization and carrying a pregnancy to term)[citation needed]. A lack of fertility is infertility while a
lack of fecundity would be called sterility.
Fecundity, derived from the word fecund, generally refers to the ability to reproduce. In
demography,[1][2] fecundity is the potential reproductive capacity of an individual or population.
In biology, the definition is more equivalent to fertility, or the actual reproductive rate of an
organism or population, measured by the number of gametes (eggs), seed set, or asexual
propagules. This difference is because demography considers human fecundity which is often
intentionally limited through contraception, while biology assumes that organisms do not limit
fertility. Fecundity is under both genetic and environmental control, and is the major measure of
fitness. Fecundation is another term for fertilization. Superfecundity refers to an organism's
ability to store another organism's sperm (after copulation) and fertilize its own eggs from that
store after a period of time, essentially making it appear as though fertilization occurred without
sperm (i.e. parthenogenesis).[citation needed]
Fecundity is important and well studied in the field of population ecology. Fecundity can
increase or decrease in a population according to current conditions and certain regulating
factors. For instance, in times of hardship for a population, such as a lack of food, juvenile and
eventually adult fecundity has been shown to decrease (i.e. due to a lack of resources the juvenile
individuals are unable to reproduce, eventually the adults will run out of resources and
reproduction will cease).
Demographic transition (DT) refers to the transition from high birth and death rates to low
birth and death rates as a country develops from a pre-industrial to an industrialized economic
system. This is typically demonstrated through a demographic transition model (DTM). The
theory is based on an interpretation of demographic history developed in 1929 by the American
demographer Warren Thompson (1887–1973).[1] Thompson observed changes, or transitions, in
birth and death rates in industrialized societies over the previous 200 years. Most developed
countries are in stage 3 or 4 of the model; the majority of developing countries have reached
stage 2 or stage 3. The major (relative) exceptions are some poor countries, mainly in
sub-Saharan Africa, and some Middle Eastern countries that are poor or affected by government
policy or civil strife, notably Pakistan, the Palestinian Territories, Yemen and Afghanistan.[2]
Population ageing is a phenomenon that occurs when the median age of a country or region
rises due to rising life expectancy and/or declining birth rates. Initially in the more
economically developed countries (MEDCs), but more recently also in the less economically
developed countries (LEDCs), there has been an increase in life expectancy, which causes
population ageing. This is the case for every country in the world except the 18 countries designated as
"demographic outliers" by the UN.[1][2] For the entirety of recorded human history, the world has
never seen as aged a population as currently exists globally.[3] The UN predicts the rate of
population ageing in the 21st century will exceed that of the previous century.[3] Countries vary
significantly in terms of the degree, and the pace, of these changes, and the UN expects
populations that began ageing later to have less time to adapt to the many implications of these
changes.[3]
Subsistence agriculture is self-sufficiency farming in which the farmers focus on growing
enough food to feed themselves and their families. The typical subsistence farm has a range of
crops and animals needed by the family to feed and clothe themselves during the year. Planting
decisions are made principally with an eye toward what the family will need during the coming
year, and secondarily toward market prices. Tony Waters [1] writes: "Subsistence peasants are
people who grow what they eat, build their own houses, and live without regularly making
purchases in the marketplace
According to the Encyclopedia of International Development, the term demographic trap is
used by demographers "to describe the combination of high fertility (birth rates) and declining
mortality (death rates) in developing countries, resulting in a period of high population growth
rate (PGR)."[1] High fertility combined with declining mortality happens when a developing
country moves through the demographic transition of becoming developed.
During "stage 2" of the demographic transition, quality of health care improves and death rates
fall, but birth rates still remain high, resulting in a period of high population growth.[1] The term
"demographic trap" is used by some demographers to describe a situation where stage 2
persists because "falling living standards reinforce the prevailing high fertility, which in turn
reinforces the decline in living standards."[2] This results in more poverty, where people rely on
more children to provide them with economic security. Social scientist John Avery explains that
this results because the high birth rates and low death rates "lead to population growth so rapid
that the development that could have slowed population is impossible."[3]
Demographic dividend refers to a period usually 20 to 30 years when fertility rates fall due
to significant reductions in child and infant mortality rates. This fall is often accompanied by an
extension in average life expectancy that increases the portion of the population that is in the
working age-group. This cuts spending on dependents and spurs economic growth. As women
and families realize that fewer children will die during infancy or childhood, they will begin to
have fewer children to reach their desired number of offspring, further reducing the proportion of
non-productive dependents.
However, this drop in fertility rates is not immediate. The lag between the fall in mortality
and the fall in fertility produces a generational population bulge that surges through society.
For a period of time this bulge is a burden on
society and increases the dependency ratio. Eventually this group begins to enter the productive
labor force. With fewer younger dependents, due to declining fertility and child mortality
rates, fewer older dependents, due to the older generations having shorter life expectancies,
and the largest segment of the population of productive working age, the dependency ratio
declines dramatically. This demographic shift initiates the demographic dividend. Combined with
effective public policies this time period of the demographic dividend can help facilitate more
rapid economic growth and puts less strain on families. This is also a time period when many
women enter the labor force for the first time.[1] In many countries this time period has led to
increasingly smaller families, rising income, and rising life expectancy rates.[2] However,
dramatic social changes can also occur during this time, such as increasing divorce rates,
postponement of marriage, and single-person households.[3]
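The dependency-ratio arithmetic behind the dividend can be sketched in a few lines of Python. The population figures below are invented purely to illustrate the shift; only the definition (young plus old dependents per 100 people of working age) is standard:

```python
def dependency_ratio(under_15, working_age, over_64):
    """Dependents (young + old) per 100 people of working age (15-64)."""
    return 100 * (under_15 + over_64) / working_age

# Hypothetical population (in millions), before and after a fertility decline
# has moved the "bulge" cohort into the working ages.
before = dependency_ratio(under_15=40, working_age=50, over_64=5)
after = dependency_ratio(under_15=25, working_age=65, over_64=8)

print(round(before, 1))  # 90.0 dependents per 100 workers
print(round(after, 1))   # 50.8 dependents per 100 workers
```

The drop from roughly 90 to roughly 51 dependents per 100 workers is the "dividend" window described above, during which each worker supports fewer non-workers.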
The Human Development Index (HDI) is a composite statistic of life expectancy, education,
and income indices used to rank countries into four tiers of human development. It was created
by the Pakistani economist Mahbub ul Haq and the Indian economist Amartya Sen in 1990[1] and
was published by the United Nations Development Programme.[2]
In the 2010 Human Development Report a further Inequality-adjusted Human Development
Index (IHDI) was introduced. While the simple HDI remains useful, it stated that "the IHDI is
the actual level of human development (accounting for inequality)" and "the HDI can be viewed
as an index of "potential" human development (or the maximum IHDI that could be achieved if
there were no inequality)".[3]
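Since the 2010 report the HDI has been computed as the geometric mean of the three dimension indices (health, education, income), each scaled to [0, 1]. A minimal sketch with made-up index values:

```python
def hdi(health_index, education_index, income_index):
    """Geometric mean of the three dimension indices (post-2010 method).
    Each index is assumed to be pre-scaled to the range [0, 1]."""
    return (health_index * education_index * income_index) ** (1 / 3)

# Illustrative (invented) dimension indices:
print(round(hdi(0.9, 0.8, 0.7), 3))
```

The geometric mean penalizes uneven development across the three dimensions: a country cannot fully offset a low score in one dimension with a high score in another, which an arithmetic mean would allow.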
Demographic window is defined to be that period of time in a nation's demographic evolution
when the proportion of population of working age group is particularly prominent. This occurs
when the demographic architecture of a population becomes younger and the percentage of
people able to work reaches its height.[1] Typically, the demographic window of opportunity lasts
for 30–40 years depending upon the country. Because of the mechanical link between fertility
levels and age structures, the timing and duration of this period is closely associated to those of
fertility decline: when birth rates fall, the age pyramid first shrinks with gradually lower
proportions of young population (under 15s) and the dependency ratio decreases as is happening
(or happened) in various parts of East Asia over several decades. After a few decades, low
fertility however causes the population to get older and the growing proportion of elderly people
inflates again the dependency ratio as is observed in present-day Europe.
The exact technical boundaries of definition may vary. The UN Population Department has
defined it as period when the proportion of children and youth under 15 years falls below 30 per
cent and the proportion of people 65 years and older is still below 15 per cent.
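The UN criterion just quoted reduces to a simple two-part check; a sketch in Python (function name is our own):

```python
def in_demographic_window(pct_under_15, pct_65_plus):
    """UN Population Department criterion quoted above: the share of
    under-15s is below 30% and the share of people 65+ is below 15%."""
    return pct_under_15 < 30 and pct_65_plus < 15

print(in_demographic_window(28, 7))    # True: favorable age structure
print(in_demographic_window(42, 3))    # False: population still too young
print(in_demographic_window(16, 21))   # False: population already aged
```

The three calls trace the sequence described in the text: a country enters the window as fertility falls, and leaves it again once the elderly share rises past the threshold.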
Europe's demographic window lasted from 1950 to 2000. It began in China in 1990 and is
expected to last until 2015. India is expected to enter the demographic window in 2010, which
may last until the middle of the present century. Much of Africa will not enter the demographic
window until 2045 or later.
Societies who have entered the demographic window have smaller dependency ratio (ratio of
dependents to working-age population) and therefore the demographic potential for high
economic growth as favorable dependency ratios tend to boost savings and investments in human
capital. But this so-called "demographic bonus" (or demographic dividend) remains only a
potential advantage as low participation rates (for instance among women) or rampant
unemployment may limit the impact of favorable age structures.
In demography and medical geography, epidemiological transition is a phase of development
witnessed by a sudden and stark increase in population growth rates brought about by medical
innovation in disease or sickness therapy and treatment, followed by a re-leveling of population
growth from subsequent declines in fertility rates. "Epidemiological transition" accounts for the
replacement of infectious diseases by chronic diseases over time due to expanded public health
and sanitation.[1] This theory was originally posited by Abdel Omran in 1971.[2]
A Malthusian catastrophe (also known as Malthusian check) was originally foreseen to be a
forced return to subsistence-level conditions once population growth had outpaced agricultural
production.
Demographic economics or population economics is the application of economic analysis to
demography, the study of human populations, including size, growth, density, distribution, and
vital statistics.[1][2]
Human overpopulation occurs if the number of people in a group exceeds the carrying capacity
of the region occupied by the group. The term often refers to the relationship between the entire
human population and its environment: the Earth,[1] or to smaller geographical areas such as
countries. Overpopulation can result from an increase in births, a decline in mortality rates, an
increase in immigration, or an unsustainable biome and depletion of resources. It is possible for
very sparsely populated areas to be overpopulated if the area has a meager or non-existent
capability to sustain life (e.g. a desert). Quality of life issues, rather than sheer carrying capacity
or risk of starvation, are a basis to argue against continuing high human population growth.
Demographic gift is a term used to describe the initially favorable effect of falling fertility
rates on the age dependency ratio, the fraction of children and aged as compared to that of the
working population.
Overview
Fertility declines in a population, combined with falls in mortality rates (the so-called
"demographic transition"), produce a typical sequence of effects on age structures.
The child-dependency ratio (the ratio of children to those who support them) at first
rises somewhat due to more children surviving, then falls sharply as average family
size decreases. Later, the overall population ages rapidly, as currently seen in many
developed and rapidly developing nations. Between these two periods is a long
interval of favorable age distributions, known as the "demographic gift," with low
and falling total dependency ratios (including both children and aged persons).
It is believed that, through education and employment, the current youth
population in the Middle East could fuel economic growth and development, as young East
Asians were able to do for the Asian Tigers.[3]
The Preston curve is an empirical cross-sectional relationship between life expectancy and real
per capita income. It is named after Samuel H. Preston who described it in his article "The
Changing Relation between Mortality and Level of Economic Development" published in the
journal Population Studies in 1975.[1][2] Preston studied the relationship for the 1900s, 1930s and
the 1960s and found it held for each of the three decades. More recent work has updated this
research.[3]
The Preston curve, using cross-country data for 2005. The x-axis shows GDP per capita in 2005
international dollars, the y-axis shows life expectancy at birth. Each dot represents a particular
country.
Improvements in health technology shift the Preston Curve upwards. In panel A, the new
technology is equally applicable in all countries regardless of their level of income. In panel B,
the new technology has a disproportionately larger effect in rich countries. In panel C, poorer
countries benefit more.
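The curve's concave shape is usually captured by regressing life expectancy on the logarithm of income. A minimal sketch using pure-Python least squares; the country data points are invented solely to illustrate the shape, not taken from Preston's paper:

```python
import math

# Hypothetical (GDP per capita in dollars, life expectancy at birth) pairs,
# invented to mimic the concave Preston-curve shape.
points = [(1_000, 55), (4_000, 65), (10_000, 71), (25_000, 76), (50_000, 80)]

# Ordinary least squares for the model LE = a + b * ln(income).
xs = [math.log(income) for income, _ in points]
ys = [le for _, le in points]
n = len(points)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
    / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

print(f"LE = {a:.1f} + {b:.1f} * ln(income)")
```

A positive slope on log income means each additional dollar buys less extra life expectancy at higher incomes, which is exactly the flattening of the curve in rich countries described above.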
There are examples of communities that have safely used recycled water for many years. Los
Angeles County's sanitation districts have provided treated wastewater for landscape irrigation in
parks and golf courses since 1929. The first reclaimed water facility in California was built at
San Francisco's Golden Gate Park in 1932. The Irvine Ranch Water District (IRWD) was the first
water district in California to receive an unrestricted use permit from the state for its recycled
water; such a permit means that water can be used for any purpose except drinking. IRWD
maintains one of the largest recycled water systems in the nation with more than 400 miles
serving more than 4,500 metered connections. The Irvine Ranch Water District and Orange
County Water District in Southern California are established leaders in recycled water. Further,
the Orange County Water District, located in Orange County, and in other locations throughout
the world such as Singapore, water is given more advanced treatments and is used indirectly for
drinking.[4]
In spite of quite simple methods that incorporate the principles of water-sensitive urban design
(WSUD)[5] for easy recovery of stormwater runoff, there remains a common perception that
reclaimed water must involve sophisticated and technically complex treatment systems,
attempting to recover the most complex and degraded types of sewage. As this effort is
supposedly driven by sustainability factors, this type of implementation should inherently be
associated with point source solutions, where it is most economical to achieve the expected
outcomes. Harvesting of stormwater or rainwater can range from extremely simple schemes to the
comparatively complex, energy- and chemical-intensive recovery of more contaminated sewage.
Strategy (from Greek στρατηγία, stratēgia, "art of troop leader; office of general, command,
generalship"[1]) is a high level plan to achieve one or more goals under conditions of uncertainty.
Strategy is important because the resources available to achieve these goals are usually limited.
Henry Mintzberg from McGill University defined strategy as "a pattern in a stream of decisions"
to contrast with a view of strategy as planning,[2] while Max McKeown (2011) argues that
"strategy is about shaping the future" and is the human attempt to get to "desirable ends with
available means". Dr. Vladimir Kvint defines strategy as "a system of finding, formulating, and
developing a doctrine that will ensure long-term success if followed faithfully." [3]
HACCP Principles
Hazard Analysis Critical Control Points (HACCP) is a tool that can be
useful in the prevention of food safety hazards. While
extremely important, HACCP is only one part of a multi-component
food safety system. HACCP is not a stand-alone
program. Other parts must include: good manufacturing
practices, sanitation standard operating procedures, and a
personal hygiene program.
Safety of the food supply is key to consumer confidence. In the past, periodic plant
inspections and sample testing have been used to ensure the quality and safety of
food products. Inspection and testing, however, are like a photographic snapshot.
They provide information about the product that is relevant only for the specific
time the product was inspected and tested. What happened before or after? That
information is not known! From a public health and safety point of view, these
traditional methods offer little protection or assurance.
New concepts have emerged which are far more promising for controlling food safety hazards
from production to consumption.
HACCP was introduced as a system to control safety as the product is manufactured, rather than
trying to detect problems by testing the finished product. This new system is based on assessing
the inherent hazards or risks in a particular product or process and designing a system to control
them. Specific points where the hazards can be controlled in the process are identified.
The HACCP system has been successfully applied in the food industry. The system fits in well
with modern quality and management techniques. It is especially compatible with the ISO 9000
quality assurance system and just in time delivery of ingredients. In this environment,
manufacturers are assured of receiving quality products matching their specifications. There is
little need for special receiving tests and usually time does not allow for extensive quality tests.
The general principles of HACCP are as follows:
Principle #1 Hazard Analysis
Hazards (biological, chemical, and physical) are conditions which may pose an unacceptable
health risk to the consumer. A flow diagram of the complete process is important in conducting
the hazard analysis. The significant hazards associated with each specific step of the
manufacturing process are listed. Preventive measures (temperature, pH, moisture level, etc.) to
control the hazards are also listed.
Principle #2 Identify Critical Control Points
Critical Control Points (CCP) are steps at which control can be applied and a food safety hazard
can be prevented, eliminated or reduced to acceptable levels. Examples would be cooking,
acidification or drying steps in a food process.
Principle #3 Establish Critical Limits
All CCPs must have preventive measures which are measurable! Critical limits are the
operational boundaries of the CCPs which control the food safety hazard(s). The criteria for the
critical limits are determined ahead of time in consultation with competent authorities. If the
critical limit criteria are not met, the process is "out of control", thus the food safety hazard(s) are
not being prevented, eliminated, or reduced to acceptable levels.
Principle #4 Monitor the CCPs
Monitoring is a planned sequence of measurements or observations to ensure the product or
process is in control (critical limits are being met). It allows processors to assess trends before a
loss of control occurs. Adjustments can be made while continuing the process. The monitoring
interval must be adequate to ensure reliable control of the process.
Principle #5 Establish Corrective Action
HACCP is intended to prevent product or process deviations. However, should loss of control
occur, there must be definite steps in place for disposition of the product and for correction of the
process. These must be pre-planned and written. If, for instance, a cooking step must result in a
product center temperature between 165 °F and 175 °F, and the temperature is 163 °F, the
corrective action could require a second pass through the cooking step with an increase in the
temperature of the cooker.
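The cooking-step example can be sketched as a monitoring check. The critical limits come from the text above; the function, the CCP name, and the corrective-action wording are illustrative:

```python
# Critical limits per CCP: name -> (lower bound, upper bound).
CRITICAL_LIMITS = {"cook_center_temp_F": (165.0, 175.0)}

def check_ccp(name, measured):
    """Return None if the reading is within the critical limits,
    otherwise a pre-planned corrective-action message."""
    low, high = CRITICAL_LIMITS[name]
    if low <= measured <= high:
        return None
    return (f"{name}={measured} outside [{low}, {high}]: hold product; "
            "re-cook at a higher cooker temperature; record the deviation")

print(check_ccp("cook_center_temp_F", 170.0))  # None: process in control
print(check_ccp("cook_center_temp_F", 163.0))  # corrective action required
```

This mirrors the HACCP logic of Principles 3 to 5: the limit is fixed ahead of time, monitoring compares each reading against it, and an out-of-limit reading triggers a pre-planned, documented corrective action rather than an ad-hoc decision.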
Principle #6 Record keeping
The HACCP system requires the preparation and maintenance of a written HACCP plan together
with other documentation. This must include all records generated during the monitoring of each
CCP and notations of corrective actions taken. Usually, the simplest record keeping system
possible to ensure effectiveness is the most desirable.
Principle #7 Verification
Verification has several steps. The scientific or technical validity of the hazard analysis and the
adequacy of the CCPs should be documented. Verification of the effectiveness of the HACCP
plan is also necessary. The system should be subject to periodic revalidation using independent
audits or other verification procedures.
HACCP offers continuous and systematic approaches to assure food safety. In light of recent
food safety related incidents, there is a renewed interest in HACCP from a regulatory point of
view. Both FDA and USDA are proposing umbrella regulations which will require HACCP plans
of industry. The industry will do well to adopt HACCP approaches to food safety whether or not
it is required.
HACCP is a Tool
HACCP is merely a tool and is not designed to be a stand-alone program. To be
effective other tools must include adherence to Good Manufacturing Practices, use
of Sanitation Standard Operating Procedures, and Personal Hygiene Programs.
Plants determine the food safety hazards and identify the preventive measures
the plant can apply to control these hazards.
Monitoring activities are necessary to ensure that the process is under control
at each critical control point. FSIS is requiring that each monitoring procedure
and its frequency be listed in the HACCP plan.
Principle 7: Establish procedures for verifying the HACCP system is working as intended.
Validation ensures that the plans do what they were designed to do; that is,
they are successful in ensuring the production of safe product. Plants will be
required to validate their own HACCP plans. FSIS will not approve HACCP
plans in advance, but will review them for conformance with the final rule.
Verification ensures the HACCP plan is adequate, that is, working as intended.
Verification procedures may include such activities as review of HACCP plans,
CCP records, critical limits and microbial sampling and analysis. FSIS is
requiring that the HACCP plan include verification tasks to be performed by
plant personnel. Verification tasks would also be performed by FSIS
inspectors. Both FSIS and industry will undertake microbial testing as one of
several verification activities to check for the occurrence of the identified food safety
hazard.
PPP diagrams
What is Disinfection?
Disinfection is the destruction or inactivation of disease-causing organisms in water,
such as those responsible for:
Bacterial diseases: anthrax, E. coli infection, tuberculosis, salmonellosis, paratyphoid, cholera
Viral diseases: hepatitis A (hepatitis virus), polio (polio virus)
Parasitic diseases: cryptosporidiosis (Cryptosporidium), giardiasis (Giardia lamblia)
Purpose
Chlorination is the application of chlorine to water to accomplish
some definite purpose. We will be concerned with the application of
chlorine for the purpose of disinfection, but you should be aware
that chlorination can also be used for taste and odor control, iron
and manganese removal, and to remove some gases such as
ammonia and hydrogen sulfide.
Chlorination is currently the most frequently used form of
disinfection in the water treatment field. However, other
disinfection processes have been developed.
Forms of Chlorine
Elemental Chlorine
Elemental chlorine is either liquid or gaseous in form.
In its liquid form, it must be under extreme pressure.
In its gaseous form, it is 2.5 times as heavy as air.
Hypochlorite Ion
The chemical symbol for hypochlorite is OCl-. The hypochlorite ion (OCl-) is
not the same as the salts calcium hypochlorite and sodium hypochlorite,
although the term is commonly used for both the ion and the salts.
Liquid Chlorine
Liquid chlorine is a clear, amber colored liquid.
Common properties of chlorine are listed in the following table:
Vapor Pressure
Vapor pressure is a function of temperature and is independent of
volume. The gage pressure of a container with 1 pound of chlorine
will be essentially the same as if it contained 100 pounds, at the
same temperature conditions.
Vapor pressure increases as the temperature increases, as
demonstrated in the following figure:
Gaseous Chlorine
Gaseous chlorine is a greenish, yellow gas.
Common properties of gaseous chlorine are listed in the following
table:
There are two types of gas masks: a canister type with a full face
piece and a self-contained breathing apparatus.
Protective clothing.
Emergency showers and eye-wash stations.
Automatic leak detection.
First Aid
The following guidelines should be adhered to in the event of
exposure to chlorine.
Inhalation
Remove the injured party to an uncontaminated outdoor area. Use
appropriate respiratory equipment during the rescue; do not become
another victim.
Skin Contact
Immediately shower with large quantities of water.
Remove protective clothing and equipment while in shower.
Flush skin with water for at least 5 minutes.
Call for medical assistance.
Keep affected area cool.
Eye Contact
Immediately shower with large quantities of water while holding
eyes open.
Call a physician immediately.
Transfer promptly to medical facility.
Ingestion
Do not induce vomiting.
Give large quantities of water.
Call physician immediately.
Transfer promptly to a medical facility.
Leak Detection
The sense of smell can detect chlorine concentrations as low as 4
parts per million (ppm).
Portable and permanent automatic chlorine detection devices can
detect at concentrations of 1 ppm or less.
A rag saturated with strong ammonia solution will indicate leaks by
the presence of white fumes.
Leak Repair
In the event of a chlorine leak, the following guidelines should be
followed.
Activate the chlorine leak absorption system, if available.
All other persons should leave the danger area until conditions
are safe again.
If the leak is large, evacuate the area and obtain help from the
local fire company. They have self-contained breathing
equipment and can assist with evacuation efforts. The local
police can also assist in the event there are curious sightseers.
Keep in mind that emergency vehicles and vehicle engines
may quit operating due to a lack of oxygen.
Clean, dry and test repair for leak prior to returning the system
to service.
Repair as required.
Increase the feed rate if possible, and cool the tank to reduce
leak rate.
Turn, if possible, so that gas escapes rather than liquid. The
quantity of chlorine that escapes as gas is 1/15 that which
escapes as liquid through the same size hole.
Fire
Chlorine will not burn in air. It is a strong oxidizer and contact with
combustible materials may cause fire. When heated, chlorine is
dangerous and emits highly toxic fumes.
In the event of a fire caused by chlorine, the following fire fighting
measures should be adhered to:
Quantities
Storage Requirements
Separate rooms for storage and feed facilities should be provided.
Storage and feed rooms need to be separate from other operating
areas.
Rooms should have an inspection window to permit equipment to be
viewed without entering the room.
All openings between rooms and the remainder of the plant need to
be sealed.
Storage for a 30 day supply should be available.
Ton Containers
Provide storage area with 2 ton capacity monorail or crane for
cylinder movement and placement.
Roller trunions are necessary to properly position cylinders.
Cylinder valves must be positioned vertically. Gas flows from the top
valve and liquid flows from the bottom valve.
Tank Cars
Tank cars are generally only provided for the largest plants.
Rail siding is required.
Hypochlorites
Instead of using chlorine gas, some plants apply chlorine to water as
a hypochlorite, also known as bleach. Hypochlorites are less pure
than chlorine gas, which means that they are also less dangerous.
However, they have the major disadvantage that they decompose in
strength over time while in storage. Temperature, light, and
physical energy can all break down hypochlorites before they are
able to react with pathogens in water.
There are three types of hypochlorites - sodium hypochlorite,
calcium hypochlorite, and commercial bleach:
Storage Facilities
Basic Facilities and Housing
Heating
Ventilation
Lighting
Chlorine Scrubbers
Description of Equipment
Description of Process
Two chlorine scrubbing processes are available: one uses a caustic
solution and the other uses solid media.
Next, between points 2 and 3, the chlorine reacts with organics and
ammonia naturally found in the water. Some combined chlorine
residual is formed - chloramines. Note that if chloramines were to
be used as the disinfecting agent, more ammonia would be added to
the water to react with the chlorine. The process would be stopped
at point 3. Using chloramine as the disinfecting agent results in
little trihalomethane production but causes taste and odor problems
since chloramines typically give a "swimming pool" odor to water.
In contrast, if hypochlorous acid is to be used as the chlorine
residual, then chlorine will be added past point 3. Between points 3
and 4, the chlorine will break down most of the chloramines in the
water, actually lowering the chlorine residual.
Finally, the water reaches the breakpoint, shown at point 4.
The breakpoint is the point at which the chlorine demand has been
totally satisfied - the chlorine has reacted with all reducing agents,
organics, and ammonia in the water. When more chlorine is added
past the breakpoint, the chlorine reacts with water and forms
hypochlorous acid in direct proportion to the amount of chlorine
added. This process, known as breakpoint chlorination, is the
most common form of chlorination, in which enough chlorine is
added to the water to bring it past the breakpoint and to create
some free chlorine residual.
Efficiency
Residual and Dosage
A variety of factors can influence disinfection efficiency when using
breakpoint chlorination or chloramines. One of the most important
of these is the concentration of chlorine residual in the water.
The chlorine residual in the clearwell should be at least 0.5 mg/L.
This residual, consisting of hypochlorous acid and/or chloramines,
must kill microorganisms already present in the water and must also
kill any pathogens which may enter the distribution system through
cross-connections or leakage. In order to ensure that the water is
free of microorganisms when it reaches the customer, the chlorine
residual should be about 0.2 mg/L at the extreme ends of the
distribution system. This residual in the distribution system will also
act to control microorganisms which produce slimes, tastes, or odors.
Determining the correct dosage of chlorine to add to water will
depend on the quantity and type of substances in the water creating
a chlorine demand. The chlorine dose is calculated as follows:

Chlorine Dose (mg/L) = Chlorine Demand (mg/L) + Chlorine Residual (mg/L)
So, if the required chlorine residual is 0.5 mg/L and the chlorine
demand is known to be 2 mg/L, then 2.5 mg/L of chlorine will have
to be added to treat the water.
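The dose arithmetic above can be sketched in a few lines of code; the function name is illustrative, not from any water-treatment library.

```python
# Minimal sketch: dose = demand + desired residual (all in mg/L).

def chlorine_dose(demand_mg_l, residual_mg_l):
    """Chlorine dose (mg/L) needed to satisfy demand and leave a residual."""
    return demand_mg_l + residual_mg_l

print(chlorine_dose(2.0, 0.5))  # 2.5, matching the example above
```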
The chlorine demand will typically vary over time as the
characteristics of the water change. By testing the chlorine residual,
the operator can determine whether a sufficient dose of chlorine is
being added to treat the water. In a large system, chlorine must be
sampled every two hours at the plant and at various points in the
distribution system.
It is also important to understand the breakpoint curve when
changing chlorine dosages. If the water smells strongly of chlorine,
it may not mean that too much chlorine is being added. More likely,
chloramines are being produced, and more chlorine needs to be
added to pass the breakpoint.
Contact Time
Contact time is just as important as the chlorine residual in
determining the efficiency of chlorination. Contact time is the
amount of time which the chlorine has to react with the
microorganisms in the water, which will equal the time between the
moment when chlorine is added to the water and the moment when
that water is used by the customer. The longer the contact time,
the more efficient the disinfection process is. When using chlorine
for disinfection a minimum contact time of 30 minutes is required for
adequate disinfection.
Regulatory Requirements
Continuous disinfection is required of all public water systems.
For surface water supplies:
1 log inactivation = 90%
2 log inactivation = 99%
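The log-inactivation figures above follow from a simple conversion, sketched here (the function name is illustrative):

```python
# Sketch: converting "n log inactivation" to percent of organisms removed.
# 1 log = 90%, 2 log = 99%, as stated above.

def log_inactivation_percent(n_logs):
    return (1 - 10.0 ** -n_logs) * 100

print(round(log_inactivation_percent(1), 1))  # 90.0
print(round(log_inactivation_percent(2), 1))  # 99.0
```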
Process Calculations
There are two basic chlorination process calculations: chlorine
dosage and chlorine demand.
Sample Calculations
Example 1.
The chlorinator at a water treatment plant operating at a flowrate of
1.0 million gallons per day is set to feed 20 pounds in a 24 hour
period. The chlorine residual in the finished water leaving the plant
after a 20 minute contact period is 0.5 mg/l. Calculate the chlorine
demand of the water.
Known: Flow, (mgd) = 1.0 MGD
Chlorinator setting = 20 pounds/day
Finished water chlorine residual = 0.5 mg/l
Find: Chlorine Dosage (mg/l) and Chlorine Demand (mg/l)
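Example 1 can be worked through with the standard "pounds formula"; this sketch assumes the usual conversion factor of 8.34 lb per gallon of water, and the function name is illustrative.

```python
# Worked sketch of Example 1:
#   dosage (mg/L) = feed rate (lb/day) / (flow (MGD) * 8.34)
#   demand (mg/L) = dosage - residual

def dosage_mg_per_l(feed_lb_per_day, flow_mgd):
    return feed_lb_per_day / (flow_mgd * 8.34)

dosage = dosage_mg_per_l(20, 1.0)          # about 2.4 mg/L
demand = dosage - 0.5                      # residual 0.5 mg/L -> about 1.9 mg/L
print(round(dosage, 1), round(demand, 1))  # 2.4 1.9
```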
Demographic fatigue
Demographic dividend (working population, productive group)
Occupational Zoonotic diseases
Anthrax (leather industry, skin problems etc)
Measles
Poliomyelitis
MCH to improve
MMR to reduce
IMR to reduce
Non-Parametric Tests
Biostatistics books
Kuzma
Betty Kirkwood
Land
Epidemiology
Leon Gordis
Correlation
Screening curves
Meta-analysis
In statistics, a meta-analysis refers to methods that focus on contrasting and combining results
from different studies, in the hope of identifying patterns among study results, sources of
disagreement among those results, or other interesting relationships that may come to light in
the context of multiple studies.[1] In its simplest form, meta-analysis is normally done by
identification of a common measure of effect size. A weighted average of that common measure
is the output of a meta-analysis. The weighting is related to sample sizes within the individual
studies. More generally there are other differences between the studies that need to be allowed
for, but the general aim of a meta-analysis is to more powerfully estimate the true effect size as
opposed to a less precise effect size derived in a single study under a given single set of
assumptions and conditions. A meta-analysis therefore gives a thorough summary of several
studies that have been done on the same topic, and provides the reader with extensive
information on whether an effect exists and how large that effect is.
Meta-analysis can be thought of as "conducting research about research."
Meta-analyses are often, but not always, important components of a systematic
review procedure. For instance, a meta-analysis may be conducted on several clinical trials of a
medical treatment, in an effort to obtain a better understanding of how well the treatment works.
Here it is convenient to follow the terminology used by the Cochrane Collaboration,[2] and use
"meta-analysis" to refer to statistical methods of combining evidence, leaving other aspects of
'research synthesis' or 'evidence synthesis', such as combining information from qualitative
studies, for the more general context of systematic reviews.
Meta-analysis forms part of a framework called estimation statistics which relies on effect
sizes, confidence intervals and precision planning to guide data analysis, and is an alternative
to null hypothesis significance testing.
The precision and accuracy of estimates can be improved as more data is used. This, in
turn, may increase the statistical power to detect an effect.
Inconsistency of results across studies can be quantified and analyzed. For instance,
does inconsistency arise from sampling error, or are study results (partially) influenced by
between-study heterogeneity?
Pitfalls
A meta-analysis of several small studies does not predict the results of a single large study.
[9]
Some have argued that a weakness of the method is that sources of bias are not controlled
by the method: a good meta-analysis of badly designed studies will still result in bad statistics.
[10]
This would mean that only methodologically sound studies should be included in a meta-analysis, a practice called 'best evidence synthesis'.[10] Other meta-analysts would include
weaker studies, and add a study-level predictor variable that reflects the methodological quality
of the studies to examine the effect of study quality on the effect size.[11] However, others have
argued that a better approach is to preserve information about the variance in the study sample,
casting as wide a net as possible, and that methodological selection criteria introduce unwanted
subjectivity, defeating the purpose of the approach.[12]
Based on quality criteria, e.g. the requirement of randomization and blinding in a clinical
trial.
Decide whether unpublished studies are included to avoid publication bias (the file drawer
problem).
4. Decide which dependent variables or summary measures are allowed. For instance:
5. Selection of a meta-regression statistical model, e.g. simple regression, fixed-effect meta-regression, or random-effects meta-regression. Meta-regression is a tool used in meta-analysis
to examine the impact of moderator variables on study effect size using regression-based
techniques. Meta-regression is more effective at this task than are standard
regression techniques.
Meta-analysis combines the results of several studies.
What is meta-analysis?
Meta-analysis is the use of statistical methods to combine
results of individual studies. This allows us to make the best
use of all the information we have gathered in our systematic
review by increasing the power of the analysis. By statistically
combining the results of similar studies we can improve the
precision of our estimates of treatment effect, and assess
whether treatment effects are similar in similar situations. The
decision about whether or not the results of individual studies
are similar enough to be combined in a meta-analysis is
essential to the validity of the result, and will be covered in
the next module on heterogeneity. In this module we will look
at the process of combining studies and outline the various
methods available.
There are many approaches to meta-analysis. We have
discussed already that meta-analysis is not simply a matter of
adding up numbers of participants across studies (although
unfortunately some non-Cochrane reviews do this). This is the
'pooling participants' or 'treat-as-one-trial' method and we will
discuss it in a little more detail now.
Pooling participants (not a valid approach to meta-analysis).
This method effectively considers the participants in all the
studies as if they were part of one big study. Suppose we have two trials
comparing daycare with control:

Trial 1: Daycare 19/36 (risk 0.528), Control 13/19 (risk 0.684), risk difference -0.16
Trial 2: Daycare 6/58 (risk 0.1034), Control 7/65 (risk 0.1077), risk difference -0.004

We don't add up patients across trials, and we don't use simple averages to
calculate a meta-analysis. Treating the participants as if they were one big
trial gives:

Daycare 25/94 (risk 0.266), Control 20/84 (risk 0.238), risk difference +0.03
WRONG!

Both trials individually show a lower risk with daycare, yet the pooled
participants suggest the opposite.
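The daycare example can be checked in code. The trial-2 counts 6/58 and 7/65 are recovered from the totals (25/94 and 20/84 minus trial 1); the inverse-variance weighting at the end is one standard meta-analysis approach, shown for illustration rather than as the only valid method.

```python
# Why pooling participants across trials is invalid (Simpson's paradox).

def risk_difference(events_t, n_t, events_c, n_c):
    """Risk difference (treated minus control) and its variance for one trial."""
    p_t, p_c = events_t / n_t, events_c / n_c
    rd = p_t - p_c
    var = p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c
    return rd, var

trials = [(19, 36, 13, 19),   # trial 1: daycare 19/36, control 13/19
          (6, 58, 7, 65)]     # trial 2: daycare 6/58,  control 7/65

# Each trial on its own favours daycare (negative risk difference)...
rds = [risk_difference(*t) for t in trials]

# ...but naively pooling participants reverses the sign:
pooled_rd = (19 + 6) / (36 + 58) - (13 + 7) / (19 + 65)   # about +0.03

# A weighted average of within-trial effects keeps the right direction:
weights = [1 / var for _, var in rds]
meta_rd = sum(w * rd for w, (rd, _) in zip(weights, rds)) / sum(weights)
```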
Definition:
What is a meta-analysis? A meta-analysis is a type of research study in which the
researcher compiles numerous previously published studies on a particular research
question and re-analyzes the results to find the general trend for results across the
studies.
A meta-analysis is a useful tool because it can help overcome the problem of small
sample sizes in the original studies, and can help identify trends in an area of the
research literature that may not be evident by merely reading the published
studies.
Graphs
Economic growth
Definition of 'Economic Growth'
Economic growth is the increase in the market value of the goods and services produced by
an economy over time. It is conventionally measured as the percent rate of increase
in real gross domestic product, or real GDP.[1] Of more importance is the growth of the ratio of
GDP to population (GDP per capita), which is also called per capita income. An increase in per
capita income is referred to as intensive growth. GDP growth caused only by increases in
population or territory is called extensive growth.[2]
Growth is usually calculated in real terms i.e., inflation-adjusted terms to eliminate the
distorting effect of inflation on the price of goods produced. In economics, "economic growth" or
"economic growth theory" typically refers to growth of potential output, i.e., production at "full
employment".
As an area of study, economic growth is generally distinguished from development economics.
The former is primarily the study of how countries can advance their economies. The latter is
the study of the economic aspects of the development process in low-income countries. See
also Economic development.
Since economic growth is measured as the annual percent change of gross domestic product
(GDP), it has all the advantages and drawbacks of that measure. For example, GDP only
measures the market economy, which tends to overstate growth during the change over from a
farming economy with household production.[3] An adjustment was made for food grown on and
consumed on farms, but no correction was made for other household production. Also, there is
no allowance in GDP calculations for depletion of natural resources.
Pros
9. Quality of life
Cons
10. Resource depletion
11. Environmental impact
12. Global warming
Inflation graphs
All other things being equal, an increase in economic growth must cause inflation to
drop, and a reduction in growth must cause inflation to rise. In his congressional
testimony yesterday, Federal Reserve chairman Ben Bernanke thankfully did not
state that the higher economic growth he expects will lead to higher inflation.
Although he didn't connect growth and inflation at all, Mr. Bernanke has long
understood that higher growth leads to lower inflation.
Here's why. Inflation, as the old saying goes, is caused by too much money
chasing too few goods. Just as more money means higher prices, fewer goods
also mean higher prices. The connection between the level of production and the
level of prices also holds for the rate of change of production (that is, the rate of
economic growth) and the rate of change of prices (that is, the inflation rate).
Some simple arithmetic will clarify. Start with the famous equation of exchange, MV
= Py, where M is the money supply; V is the velocity of money, that is, the speed
at which money circulates; P is the price level; and y is the real output of the
economy (real GDP.) A version of this equation, incidentally, was on the license
plate of the late economist Milton Friedman, who made a large part of his academic
reputation by reviving, and giving evidence for, the role of money growth in causing
inflation.
If the growth rate of real GDP increases and the growth rates of M and V are held
constant, the growth rate of the price level must fall. But the growth rate of the
price level is just another term for the inflation rate; therefore, inflation must fall.
An increase in the rate of economic growth means more goods for money to
chase, which puts downward pressure on the inflation rate. If for example the
money supply grows at 7% a year and velocity is constant and if annual economic
growth is 3%, inflation must be 4% (more exactly, 3.9%). If, however, economic
growth rises to 4%, inflation falls to 3% (actually, 2.9%.)
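The growth-rate arithmetic implied by MV = Py can be sketched as follows; with velocity constant, the inflation rate is (1 + money growth) / (1 + real growth) - 1, and the numbers follow the example in the text (the function name is illustrative).

```python
# Sketch of the growth-rate form of the equation of exchange MV = Py.

def inflation_rate(money_growth, real_growth, velocity_growth=0.0):
    return (1 + money_growth) * (1 + velocity_growth) / (1 + real_growth) - 1

print(round(inflation_rate(0.07, 0.03) * 100, 1))  # 3.9 ("more exactly, 3.9%")
print(round(inflation_rate(0.07, 0.04) * 100, 1))  # 2.9 ("actually, 2.9%")
```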
The April numbers for the index of industrial production (IIP), released on Thursday,
brought some cheer on the growth front. The IIP grew by 3.4 per cent, its highest in
a long time. April, of course, was a month in which the entire country was deep in
electioneering. Therefore, some sort of stimulus from all the campaign spending
might have been reasonable to expect. The biggest beneficiary of this was the
category of "electrical machinery", which grew by over 66 per cent year on year,
reflecting all those campaign rallies, with their generators and audio equipment.
The other significant contributor to the growth in the overall index was electricity,
which grew by almost 12 per cent year on year, significantly higher than its growth
during 2013-14. Typically, a growth acceleration that relies heavily on one or two
sectoral surges does not have much staying power. It would require an across-the-board show of resurgence to allow people to conclude that a sustainable recovery
was under way. That is clearly not happening yet. However, these numbers do
reinforce the perception that things are not getting worse as far as growth is
concerned.
Likewise, there was some room for relief on the inflation front. The consumer price
index, or CPI, numbers for May 2014 showed headline inflation declining slightly,
from 8.6 per cent in April to 8.3 per cent in May. The Central Statistical Office is now
separately reporting a sub-index labelled consumer food price index, or CFPI, which
provides some convenience to observers. The index itself, though, offers little cheer.
It came down modestly between April and May, largely explaining the decline in the
headline rate, but is still significantly above nine per cent. At a time when there are
concerns about the performance of the monsoon and the impact of that on food
prices, these numbers should be a major cause of worry for the government. Milk,
eggs, fish and meat, vegetables and fruit contributed to the persistence of food
inflation. But cereals are also kicking in, as they have been for the past couple of
years, and the government must use its large stocks of rice and wheat quickly to
dampen at least this source of food inflation. It would be unconscionable not to do
so when risks of a resurgence of inflation are high. The larger point on inflation,
though, is how stubborn the rate is despite sluggish growth and high interest rates.
The limitations of monetary policy are being repeatedly underscored.
Against this backdrop, the government's prioritisation of its fight against inflation is
Last year I was challenged by the Society of History and ICT to tell about how
doctors search medical information (central theme = Google), and here it comes.
To explain the audience why it is important for clinicians to find the best evidence
and how methodological filters can be used to sift through the overwhelming
amount of information in for instance PubMed, I had to introduce RCTs and the
levels of evidence. To explain it to them I used an example that struck me when I
first read about it.
I showed them the following slide :
And clarified: Beta-carotene is a vitamin found in carrots and many other vegetables,
but you can also buy it in pure form as pills. There is reason to believe that
beta-carotene might help to prevent lung cancer in cigarette smokers. How do you
think you can find out whether beta-carotene will have this effect?
Suppose you have two neighbors, both heavy smokers of the same age, both
males. The neighbor who doesn't eat many vegetables gets lung cancer, but the
neighbor who eats a lot of vegetables and is fond of carrots doesn't. Do you think
this provides good evidence that beta-carotene prevents lung cancer?
There is laughter in the room, so they don't believe in n=1 experiments/case
reports. (Still, how many people think smoking does not necessarily do any
harm because their chain-smoking father reached his nineties in good health?)
I show them the following slide with the lowest box only.
O.k. What about this study? I've a group of lung cancer patients,
who smoke(d) heavily. I ask them to fill in a questionnaire about their eating habits
in the past and take a blood sample, and I do the same with a similar group of
smokers without cancer (controls). Analysis shows that smokers developing lung
cancer eat much less beta-carotene containing vegetables and have lower
blood levels of beta-carotene than the smokers not developing cancer. Does this
mean that beta-carotene is preventing lung cancer?
Humming in the audience, till one man says: perhaps some people don't remember
exactly what they eat, and then several people object that it is just an association
and you do not yet know whether beta-carotene really causes this. Right! I show
the box patient-control studies.
Then consider this study design. I follow a large cohort of healthy heavy
smokers and look at their eating habits (including use of supplements) and take
regular blood samples. After a long follow-up some heavy smokers develop lung
cancer whereas others don't. Now it turns out that the group that did not develop
lung cancer had significantly more beta-carotene in their blood and ate larger
amounts of beta-carotene containing food. What do you think about that then?
Now the room is a bit quiet; there is some hesitation. Then someone says: well, it is
more convincing, and finally the chair says: but it may still not be the carrots, but
something else in their food, or they may just have other healthy living habits
(including eating carrots). Cohort-study appears on the slide. (What a perfect
audience!)
O.k., you're not convinced that these study designs give conclusive evidence.
How could we then establish that beta-carotene lowers the risk of lung cancer in
heavy smokers? Suppose you really wanted to know, how do you set up such a
study?
Grinning. Someone says: by giving half of the smokers beta-carotene and the other
half nothing. Or a placebo, someone else says. Right! Randomized Controlled
Trial is on top of the slide. And there is not much room left for another box, so we
are there. I only add that the best way to do it is to do it double blinded.
Then I reveal that all this research has really been done. There have been numerous
observational studies (case-control as well as cohort studies) showing a consistent
Ecological studies are studies of risk-modifying factors on health or other outcomes based on
populations defined either geographically or temporally. Both risk-modifying factors and
outcomes are averaged for the populations in each geographical or temporal unit and then
compared using standard statistical methods.
Ecological studies have often found links between risk-modifying factors and health outcomes
well in advance of other epidemiological or laboratory approaches. Several examples are given
here.
The study by John Snow regarding a cholera outbreak in London is considered the first
ecological study to solve a health issue. He used a map of deaths from cholera to determine
that the source of the cholera was a pump on Broad Street. He had the pump handle removed
in 1854 and people stopped dying there [Newsom, 2006]. It was only when Robert
Koch discovered bacteria years later that the mechanism of cholera transmission was
understood.[1]
Dietary risk factors for cancer have also been studied using both geographical and temporal
ecological studies. Multi-country ecological studies of cancer incidence and mortality rates with
respect to national diets have shown that some dietary factors such as animal products (meat,
milk, fish and eggs), added sweeteners/sugar, and some fats appear to be risk factors for many
types of cancer, while cereals/grains and vegetable products as a whole appear to be risk
reduction factors for many types of cancer.[2][3] Temporal changes in Japan in the types of cancer
common in Western developed countries have been linked to the nutrition transition to the
Western diet.[4]
An important advancement in the understanding of risk-modifying factors for cancer was made
by examining maps of cancer mortality rates. The map of colon cancer mortality rates in the
United States was used by the brothers Cedric and Frank C. Garland to propose the hypothesis
that solar ultraviolet B (UVB) radiation, through vitamin D production, reduced the risk of cancer
(the UVB-vitamin D-cancer hypothesis).[5] Since then many ecological studies have been
performed relating the reduction of incidence or mortality rates of over 20 types of cancer to
lower solar UVB doses.[6]
Links between diet and Alzheimer's disease have been studied using both geographical and
temporal ecological studies. The first paper linking diet to risk of Alzheimer's disease was a
multi-country ecological study published in 1997.[7] It used prevalence of Alzheimer's disease in
11 countries along with dietary supply factors, finding that total fat and total energy (caloric)
supply were strongly correlated with prevalence, while fish and cereals/grains were inversely
correlated (i.e., protective). Diet is now considered an important risk-modifying factor for
Alzheimer's disease.[8] Recently it was reported that the rapid rise of Alzheimer's disease in
Japan between 1985 and 2007 was likely due to the nutrition transition from the traditional
Japanese diet to the Western diet.[9]
Another example of the use of temporal ecological studies relates to influenza. John
Cannell and associates hypothesized that the seasonality of influenza was largely driven by
seasonal variations in solar UVB doses and calcidiol levels.[10] A randomized controlled
trial involving Japanese school children found that taking 1000 IU per day vitamin D3 reduced
the risk of type A influenza by two-thirds.[11]
Ecological studies are particularly useful for generating hypotheses since they can use existing
data sets and rapidly test the hypothesis. The advantages of the ecological studies include the
large number of people that can be included in the study and the large number of risk-modifying
factors that can be examined.
The term ecological fallacy means that the findings for the groups may not apply to individuals
in the group. However, this term also applies to observational studies and randomized controlled
trials. All epidemiological studies include some people who have health outcomes related to the
risk-modifying factors studied and some who do not. For example, genetic differences affect
how people respond to pharmaceutical drugs. Thus, concern about the ecological fallacy should
not be used to disparage ecological studies. The more important consideration is that ecological
studies should include as many known risk-modifying factors for any outcome as possible,
adding others if warranted. Then the results should be evaluated by other methods, using, for
example, Hill's criteria for causality in a biological system.
The ecological fallacy may occur when conclusions about individuals are drawn from analyses
conducted on grouped data. The nature of this type of analysis tends to overestimate the
degree of association between variables.
Survival rate
Life table
In actuarial science and demography, a life table (also called a mortality table or actuarial
table) is a table which shows, for each age, what the probability is that a person of that age will
die before his or her next birthday ("probability of death"). From this starting point, a number of
inferences can be derived.
Life tables are also used extensively in biology and epidemiology. The concept is also of
importance in product life cycle management.
These curves show the probability that someone at (who has reached) a given age
will live at least a given number of additional years, and can be used to discuss
annuity issues from the boomer viewpoint.
A "life table" is a kind of bookkeeping system that ecologists often use to keep
track of stage-specific mortality in the populations they study. It is an especially
useful tool in pest management: once the stages with the heaviest mortality are known,
pest populations can often be suppressed without any other control methods.
To create a life table, an ecologist follows the life history of many individuals in a
population, keeping track of how many offspring each female produces, when each
one dies, and what caused its death. Suppose, for example, that a typical female lays
200 eggs, that half of the eggs are eaten by predators, 90% of the larvae will die from
parasitization, and three-fifths of the pupae will freeze to death in the winter. (These
figures are averages based on a large database of observations.) A life table can be created from
the above data.
The table begins with the 200 eggs laid per female. This number represents the maximum
biotic potential of the species (i.e. the greatest number of offspring that could be
produced in one generation under ideal conditions).
The first line of the life table lists the main cause(s) of death, the number dying, and
the percent mortality during the egg stage. In this example, an average of only 100
individuals survive the egg stage and become larvae.
The second line of the table lists the mortality experience of these 100 larvae: only
10 of them survive to become pupae (90% mortality of the larvae).
The third line of the table lists the mortality experience of the 10 pupae: three-fifths
die of freezing, leaving 4 adults.
If we assume a 1:1 sex ratio, then there are 2 males and 2 females to start the next
generation.
If there is no mortality of these females, they will each lay an average of 200 eggs
to start the next generation. That is 400 eggs, compared with the 200 laid by
the one original female -- this population is DOUBLING in size each generation!
In ecology, the symbol "R" (capital R) is known as the replacement rate. It is calculated as:
R = (number of daughters) / (number of mothers)
Practice Problem:
A typical female of the bubble gum maggot (Bubblicious blowhardi Meyer) lays 250
eggs.
Of the survivors, 64 die as larvae due to habitat destruction (gum is cleared away
by the janitorial staff) and 87 die as pupae because the gum gets too hard.
Construct a life table for this species and calculate a value for "R", the replacement
rate (assume a 1:1 sex ratio). Is the population increasing, decreasing, or
remaining stable?
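The worked example above can be sketched in Python. The stage numbers follow the example in the text (200 eggs, 100 larvae, 10 pupae, 4 adults); the helper function is a hypothetical illustration, not a standard API:

```python
# Life-table sketch for the worked example: 200 eggs laid per female,
# 100 survive the egg stage, 10 survive the larval stage, 4 survive
# pupation; a 1:1 sex ratio gives 2 daughters from 1 mother.

def replacement_rate(adult_survivors, mothers=1, prop_female=0.5):
    """R = number of daughters / number of mothers."""
    daughters = adult_survivors * prop_female
    return daughters / mothers

stage_survivors = {"egg": 200, "larva": 100, "pupa": 10, "adult": 4}
R = replacement_rate(stage_survivors["adult"])
print(R)   # R = 2.0: the population doubles each generation
```

With R greater than 1 the population grows, with R less than 1 it shrinks, and with R equal to 1 it remains stable, which is the question the practice problem asks.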
It has also been said that the forest plot was named after a breast cancer researcher called Pat Forrest, and as a result the name has sometimes been spelt "forrest plot".[4]
Effective Human Resource Management is the Center for Effective Organizations' (CEO) sixth
report of a fifteen-year study of HR management in today's organizations. The only long-term
analysis of its kind, this book compares the findings from CEO's earlier studies to new data
collected in 2010. Edward E. Lawler III and John W. Boudreau measure how HR management
is changing, paying particular attention to what creates a successful HR function, one that
contributes to a strategic partnership and overall organizational effectiveness. Moreover, the
book identifies best practices in areas such as the design of the HR organization and HR metrics.
It clearly points out how the HR function can and should change to meet the future demands of
a global and dynamic labor market.
For the first time, the study features comparisons between U.S.-based firms and companies in
China, Canada, Australia, the United Kingdom, and other European countries. With this new
analysis, organizations can measure their HR organization against a worldwide sample,
assessing their positioning in the global marketplace, while creating an international standard
for HR management.
Policy?
1. Politics: (1) The basic principles by which a government is guided.
(2) The declared objectives that a government or party seeks to achieve and
preserve in the interest of national community. See also public policy.
2. Insurance: The formal contract issued by an insurer that contains terms and
conditions of the insurance cover and serves as its legal evidence.
3. Management: The set of basic principles and associated guidelines, formulated
and enforced by the governing body of an organization, to direct and limit
its actions in pursuit of long-term goals. See also corporate policy.
A policy is a principle or protocol to guide decisions and achieve rational outcomes. A policy is
a statement of intent, and is implemented as a procedure[1] or protocol. Policies are generally
adopted by the Board of Directors or senior governance body within an organization, whereas procedures
or protocols would be developed and adopted by senior executive officers. Policies can assist in
both subjective and objective decision making. Policies to assist in subjective decision making
would usually assist senior management with decisions that must consider the relative merits of
a number of factors before making decisions and as a result are often hard to objectively test
e.g. work-life balance policy. In contrast policies to assist in objective decision making are
usually operational in nature and can be objectively tested e.g. password policy.[citation needed]
The term may apply to government, private sector organizations and groups, as well as
individuals. Presidential executive orders, corporate privacy policies, and parliamentary rules of
order are all examples of policy. Policy differs from rules or law. While law can compel or
prohibit behaviors (e.g. a law requiring the payment of taxes on income), policy merely guides
actions toward those that are most likely to achieve a desired outcome.[citation needed]
Policy or policy study may also refer to the process of making important organizational
decisions, including the identification of different alternatives such as programs or spending
priorities, and choosing among them on the basis of the impact they will have. Policies can be
understood as political, management, financial, and administrative mechanisms arranged to
reach explicit goals. In public corporate finance, a critical accounting policy is a policy for a
firm/company or an industry which is considered to have a notably high subjective element, and
that has a material impact on the financial statements.[citation needed]
Micro-planning
Micro Planning: A tool to empower people
Micro-planning is a comprehensive planning approach wherein the community
prepares development plans itself, considering the priority needs of the
village. Inclusion and participation of all sections of the community is central to
the approach. The exercise described here took place in village Untka
of Nuh Block, Mewat. The objective of this exposure was to show participants how
micro-planning is carried out and what challenges may arise during its conduct, and to
prepare the village development plan following the micro-planning approach.
The village sarpanch led the process from the front, and the entire village and
panchayat members participated wholeheartedly in this exercise. The Participatory Rural
Appraisal (PRA) technique, which incorporates the knowledge and opinions of rural
people in the planning and management of development projects and programmes,
was used to gather information and prioritize development works. Resource, social
and development issue prioritization maps were prepared by the villagers after
analyzing the collected information. The villagers further identified the problems
associated with village development and recommended solutions for specific
problems while working in groups. The planning process went on for two days
subsequent to which a Gram Sabha (village committee), the first power unit in the
panchayati raj system, was organized on the third day. About 250 people
participated in the Gram Sabha including 65 women and 185 men. The sarpanch
shared the final village analysis and development plans with the villagers present in
Gram Sabha and asked for their inputs and suggestions. After incorporating the
suggestions received, a plan was prepared and submitted to Block Development
Office for final approval and sanction of funds.
"After the successful conduct of Gram Sabha in our village, we now need to build
synergies with the district level departments to implement the plans drawn in the
meeting," said the satisfied Sarpanch of Untka after experiencing the conduct of
micro planning exercise in their village.
Macro-planning
Macro Planning and Policy Division (MPPD) is responsible for setting macroeconomic policies
and strategies in consultation with key agencies, such as the Reserve Bank of Fiji (RBF) and
Ministry of Finance. The Division analyzes and forecasts movements in macroeconomic
indicators and accounts, including Gross Domestic Product (GDP), Exports and Imports, and
the Balance of Payments (BOP). Macroeconomic forecasting involves making assessments on
production data in the various sectors of the economy for compilation of quarterly forecasts of
the National Accounts.
because of students' changing needs. And finally, my old school kindly granted the teachers one
day a month of paid prep time/new student intake, where we'd decide on the themes that we'd
be using for our class to ensure there wasn't too much overlap with other classes. We did have a
set curriculum in terms of grammar points, but themes and supplementary materials were up to
us. Doing a bit of planning before the semester started ensured that we stayed organized and
kept the students' interest throughout the semester.
Another benefit of macro lesson planning is that teachers can share the overall goals of the
course with their students on the first day, and they can reiterate those goals as the semester
progresses. Students often lose sight of the big picture and get discouraged with their English
level, and having clear goals that they see themselves reaching helps prevent this.
2. Micro lesson planning
The term micro comes from the Greek mikros, meaning "small, little". In the ELT industry, micro
lesson planning refers to planning one specific lesson based on one target (e.g., the simple
past). It involves choosing a topic or grammar point and building a full lesson to complement it.
A typical lesson plan involves a warm-up activity, which introduces the topic or elicits the
grammar naturally, followed by an explanation/lesson of the point to be covered. Next, teachers
devise a few activities that allow students to practice the target point, preferably through a mix of
skills (speaking, listening, reading, writing). Finally, teachers should plan a brief wrap-up activity
that brings the lesson to a close. This could be as simple as planning to ask students to share
their answers from the final activity as a class.
Some benefits of micro lesson planning include classes that run smoothly and students who
don't get bored. Lesson planning ensures that you'll be prepared for every class and that you'll
have a variety of activities on hand for whatever situation may arise (well, the majority of
situations; I'm sure we've all had those classes where an activity we thought would rock ends
up as an epic fail).
For more information on micro lesson planning, check out How to Make a Lesson Plan, a blog
post I wrote last year, where I emphasized the importance of planning fun, interesting fillers so
that students stay engaged. I also provided links in that post to many examples of activities you
can use for warm-ups, main activities, fillers, homework, etc. There is also a good template for a
typical lesson plan at docstoc.
Can anyone think of other benefits of macro or micro lesson planning? Does anyone have a
different definition of these terms? Let us know below.
Happy planning!
Tanya
Macro is big and micro is very small. Macroeconomics depends on big projects like steel mills,
big industrial units, national highway projects etc., which aim at producing goods and services in
very large quantities and serve a wide area. These take time to produce results because of the
size of the projects. Microeconomics is on a small scale, limited to a specific area, location and
purpose, and normally produces results in a much shorter time. The best example of
microeconomics is the Grameen Bank of Bangladesh started by Md. Yunus, who also received
international awards for his initiative. The concept of microcredit was pioneered by the
Bangladesh-based Grameen Bank, which broke away from the age-old belief that low income
amounted to low savings and low investment. It started what came to be a system which
followed this sequence: low income, credit, investment, more income, more credit, more
investment, more income. The bank is owned by its poor borrowers, who are mostly women:
borrowers of Grameen Bank at present own 95 per cent of the total equity, with the balance 5%
held by the Govt. Microeconomics was also one of the policies of Mahatma Gandhi, who wanted
planning to start from the local village level and spread through the country; unfortunately this has not
happened, and even now the benefits of development have not percolated to the common man,
particularly in rural areas.
Decide which of the following activities belong to macro planning and which to micro planning. Some could belong to both. When you have finished, compare your decisions
with your partner.
Thinking and sharing activity
TASK 2
In a sense, macro planning is not writing lesson plans for specific lessons but rather
familiarizing oneself with the context in which language teaching is taking place. Macro
planning involves the following:
1) Knowing about the course: The teacher should get to know which language areas
and language skills should be taught or practised in the course, what materials and
teaching aids are available, and what methods and techniques can be used.
2) Knowing about the institution: The teacher should get to know the institution's
arrangements regarding time, length, frequency of lessons, physical conditions of
classrooms, and exam requirements.
3) Knowing about the learners: The teacher should acquire information about the
students' age range, sex ratio, social background, motivation, attitudes, interests,
learning needs and other individual factors.
4) Knowing about the syllabus: The teacher should be clear about the purposes,
requirements and targets specified in the syllabus.
Much of macro planning is done prior to the commencement of a course. However,
macro planning is a job that never really ends until the end of the course.
Macro planning provides general guidance for language teachers. However, most
teachers have more confidence if they have a kind of written plan for each lesson
they teach. All teachers have different personalities and different teaching
strategies, so it is very likely their lesson plans would differ from each other.
However, there are certain guidelines that we can follow and certain elements that
we can incorporate in our plans to help us create purposeful, interesting and
motivating lessons for our learners.
Definition of P&P program
A policies and procedures (P&P) program refers to the context in which an organization formally
plans, designs, implements, manages, and uses P&P communication in support of
performance-based learning and on-going reference.
Description of components
The five components of a formal P&P program are described below:
The information plan or architecture which identifies the coverage and organization of
subject matter and related topics to be included
The documentation approach which designates how P&P content will be designed and
presented, including the documentation methods, techniques, formats, and styles
develop a strategic P&P program plan. The strategic plan will enable your organization to
achieve the necessary level of maturity for each component and ensure that your organization
will maximize the value of its P&P investment.
Conclusion
Organizations with informal P&P programs do not usually reap the benefits that formal P&P
programs provide. An effective P&P program must include five components. It is essential to
have an objective P&P program assessment to determine the existing P&P maturity grade and
where it should be. The P&P strategic plan is the basis for achieving a higher level of
performance in your P&P program
The following information is provided as a template to assist learners in drafting a policy. However
it must be remembered that policies are written to address specific issues, and therefore the
structure and components of a policy will differ considerably according to the need. A policy
document may be many pages or it may be a single page with just a few simple statements.
The following template is drawn from an Information Bulletin "Policy and Planning" by Sport
and Recreation Victoria. It is suggested that there are nine components. The example given at
the right of the table should not be construed as a complete policy
Component 1: A statement of what the organisation seeks to achieve. Brief example: "The following policy aims to ensure that XYZ ..."
Component 2: Broad service objectives which explain the areas in which the organisation will be dealing.
Component 3: Strategies to achieve each objective.
Component 4: Specific actions to be taken.
Component 5: Desired outcomes of specific actions.
Component 6: Performance indicators. Brief example: a reduction in injuries.
Component 7: A review program.
Health financing systems are critical for reaching universal health coverage. Health financing
levers to move closer to universal health coverage lie in three interrelated areas:
Healthcare Financing
The Need
More than 120 million people in Pakistan do not have health coverage. This pushes the poor
into debt and an inevitable medical-poverty trap. Two-thirds of households surveyed over the
last three years reported that they were affected by one or more health problems and went into
debt to finance the cost. Many who cannot afford treatment, particularly women, forego medical
treatment altogether.
The Solution
To fill this vacuum in healthcare financing, the American Pakistan Foundation has partnered with
Heartfile Health Financing to support their groundbreaking work in healthcare reform and health
financing for the poor in Pakistan.
Success Stories
At the age of 15, Majid was the only breadwinner of his family. After being hit by a tractor he
was out of a job, with a starving family and no money for an operation. Through Heartfile he was
able to get the treatment he needed and stay out of debt.
Majid
The Process
Heartfile is contacted via text or email when a person of dire financial need is admitted into one
of a list of preregistered hospitals.
Within 24 hours, a volunteer is mobilized to see the patient and assess poverty status and
eligibility by running their identity card information through the national database authority.
Once eligibility is established, the patient is sent funds within 72 hours through a cash transfer
to their service provider.
Donors to Heartfile have full control over their donation through a web database that allows
them to decide where they want their funds to go. They are connected to the people they
support through a personal donation page that allows them to see exactly how their funds were
used.
Sampling strategies: descriptions, advantages and disadvantages

Simple random. Description: random sample from the whole population.
Advantages: highly representative if all subjects participate; the ideal.

Stratified random. Description: random sample from identifiable groups (strata), subgroups, etc.

Cluster. Description: random samples of successive clusters of subjects (e.g., by institution) until small groups are chosen as units.
Advantages: possible to select randomly when no single list of population members exists, but local lists do; data collected on groups may avoid introduction of confounding by isolating members.

Stage. Description: combination of cluster (randomly selecting clusters) and random or stratified random sampling of individuals.
Advantages: can make up a probability sample by random selection at stages and within groups; possible to select a random sample when population lists are very localized.
Disadvantages: complex; combines the limitations of cluster and stratified random sampling.

Purposive. Description: hand-pick subjects on the basis of specific characteristics.
Advantages: ensures balance of group sizes when multiple groups are to be selected.

Quota. Description: select individuals as they come to fill a quota by characteristics proportional to populations.
Advantages: ensures selection of adequate numbers of subjects with appropriate characteristics.

Snowball. Description: subjects with desired traits or characteristics give names of further appropriate subjects.
Advantages: possible to include members of groups where no lists or identifiable clusters even exist (e.g., drug abusers, criminals).
Disadvantages: no way of knowing whether the sample is representative of the population.

Volunteer, accidental, convenience. Description: subjects selected as they volunteer or happen to be available.
Advantages: inexpensive way of ensuring sufficient numbers for a study.
Disadvantages: can be highly unrepresentative.
Source: Black, T. R. (1999). Doing quantitative research in the social sciences: An integrated
approach to research design, measurement, and statistics. Thousand Oaks, CA: SAGE
Publications, Inc. (p. 118)
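To make the first two strategies in the table concrete, here is a small Python sketch using only the standard library; the population and strata are invented for illustration:

```python
import random

random.seed(1)  # reproducible draws for the example

# Hypothetical sampling frame: 30 subjects in two identifiable strata.
population = [{"id": i, "stratum": "urban" if i % 3 else "rural"}
              for i in range(30)]

# Simple random sampling: every subject has an equal chance of selection.
simple = random.sample(population, k=6)

# Stratified random sampling: draw a separate random sample within each stratum.
def stratified_sample(frame, key, k_per_stratum):
    strata = {}
    for subject in frame:
        strata.setdefault(subject[key], []).append(subject)
    return {name: random.sample(members, k_per_stratum)
            for name, members in strata.items()}

stratified = stratified_sample(population, "stratum", 3)
print(len(simple), sorted((s, len(v)) for s, v in stratified.items()))
```

The stratified draw guarantees representation from each group, which is exactly the property the table credits to stratified designs over a single simple random sample.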
Acronym: PATCH (Planned Approach To Community Health).
He said the response of youth from Khyber-Pakhtunkhwa and FATA is tremendous and very positive. He said men and women are also jubilant
over the scheme.
He said youth from far-flung district Chitral to DI Khan and Kohistan are contacting
SMEDA offices to get information about different aspects of this youth friendly
scheme.
He said that during the last two days, a large number of youth had either visited or phoned
them regarding the scheme. He said most of the women showed keen interest
in the opening of beauty clinics and boutiques, and in spice processing, packing and
marketing programmes.
He said SMEDA has floated a list of 56 pre-feasibility studies for guidance and
facilitation of interested entrepreneurs under the PM programme.
For the facilitation of people in obtaining loans under the scheme, SMEDA has
established information and help desks in the Khyber-Pakhtunkhwa Chamber of
Commerce and Industry Peshawar, the Swat Chamber of Commerce and Industry in
Mingora, the Women Business Development Centre Peshawar and the Regional Business
Co-ordination Office Abbottabad for proper guidance and facilitation of youth.
He said the response from tribesmen of FATA, like that from the settled areas, is also tremendous;
they are contacting us for information regarding different features and plans of the PM
Youth Business Loan programme. Most of the tribesmen of Mohmand Agency have
shown interest in the Marble and Onyx Products Manufacturing, Marble Tiles
Manufacturing Units and Marble Mosaic Development Centres, as the area is rich in
this resource.
He said tribesmen of South and North Waziristan besides Khyber, Kurram and
Orakzai Agencies are also showing keen interest in PM's business loan programme.
The SMEDA official assured that all out technical support would be provided to
interested youth in this programme and urged the candidates to come with solid
business plans for their own interest.
Similarly, a senior official of NBP said the application form for the scheme is very simple
and comprehensive. He said the scheme would usher in an era of progress and
development and overcome the growing problem of unemployment. He said that
attaching a business plan to the application form is a prerequisite.
Background
Before monovalent OPV (mOPV) became available, the Global Polio Eradication
Initiative (GPEI) had used trivalent OPV (tOPV) in 2 rounds with an interval of 4-6 weeks. The
interval was adopted because vaccine virus persists in the gut of recipients for that
period of time, risking interference between the three different virus types in the
OPV while generating an immune response in the vaccinated child. However, most
of the interference between serotypes in tOPV is due to type 2. Since the licensing
of mOPV, studies in Nigeria have shown that approximately 67% of children will
develop immunity to poliovirus type 1 after the first dose of mOPV1, as compared
with approximately 35% of children after the first dose of tOPV. Similarly, better
seroconversion rates have been estimated for immunity to poliovirus type 3 using
mOPV3 as compared with tOPV.
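A back-of-the-envelope way to see why the per-dose seroconversion rate matters is to compute cumulative immunity over repeated doses. This is a simplification that assumes each dose acts independently on still-susceptible children; the 67% and 35% figures are the ones quoted above:

```python
# Cumulative proportion immune after n doses, assuming each dose
# independently seroconverts a fraction p of still-susceptible children.
def cumulative_immunity(p_per_dose, n_doses):
    return 1 - (1 - p_per_dose) ** n_doses

for n in (1, 2, 3):
    print(n,
          round(cumulative_immunity(0.67, n), 2),   # ~mOPV1, per the text
          round(cumulative_immunity(0.35, n), 2))   # ~tOPV (type 1), per the text
```

Under this simple model the monovalent vaccine reaches high population immunity in far fewer rounds, which is the operational rationale for preferring it where one serotype circulates.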
The SIAD (short-interval additional dose) approach is founded on the principle that when
monovalent vaccines are used there will be no interference in the process of sero-conversion
when additional doses are given at short intervals.
One SIAD round consists of two passages, providing two opportunities for the
target population to be reached during a short period of time: typically two
passages of 5 days each, separated by a two-day interval.
SIAD has the added operational advantage of enabling supervisors from other areas
to be concentrated in the field to support quality implementation
over the whole 2-week period.
Recruit competent international and national supervisors who can dedicate 2 weeks
for the duration of the SIAD.
Monitoring during each passage and making adjustments between 1st and 2nd
passages
During the break between 1st and 2nd passages supervisors should collect and
analyze data, and meet with teams and communities to make corrective action for
the 2nd passage.
Introducing IPV
production, diagnostics and research. Criteria for the safe handling and bio-containment of such
polioviruses, and processes to monitor their application, are essential to minimize the risk of
poliovirus re-introduction in the post-eradication era. Consequently, this area of work includes
finalizing international consensus on long-term bio-containment requirements for polioviruses
and the timelines for their application. Verifying application of those requirements, under the
oversight of the Global Certification Commission, will be a key aspect of the processes for
certifying global eradication. All 194 Member States of the World Health Organization are
affected by work towards this objective.
The goal of the 2013-2018 Polio Eradication and Endgame Strategic Plan is
to complete the eradication and containment of all wild, vaccine-related
and Sabin polioviruses, such that no child ever again suffers paralytic
poliomyelitis.
India was able to interrupt transmission because of its ability to apply a comprehensive set of
tactics and tools to reach and immunize all children that included innovations in:
o Microplanning
o Operations
o Monitoring & accountability
o Technology (e.g. bOPV)
o Social mobilization
o Surge support
INPUT RISKS
1. Insufficient funding
2. Inability to recruit and retain the right people
3. Insufficient supply of appropriate vaccines
IMPLEMENTATION RISKS
4. Inability to operate in areas of insecurity
5. Decline in political and/or social will
6. Lack of accountability for quality activities
ENABLING FUNCTIONS
Successful execution of the 2013-2018 Polio Eradication and Endgame Strategic Plan will
require collaboration across the GPEI partners, national governments, donors and other relevant
organizations and institutions. Whilst national governments will be primarily responsible for
successful execution of the Plan at the local level, the GPEI and its partners will lead on a set of
enabling functions to facilitate successful execution of country operations. These functions
include:
The Haddon Matrix is the most commonly used paradigm in the injury prevention field.
Developed by William Haddon in 1970, the matrix looks at factors related to personal attributes,
vector or agent attributes, and environmental attributes before, during and after an injury
or death. By utilizing this framework, one can then think about evaluating the relative
importance of different factors and design interventions.[1]
Pre-crash phase
Human factors: information; attitudes; impairment; police enforcement
Vehicle and equipment factors: roadworthiness; lighting; braking; speed management
Environmental factors: speed limits; pedestrian facilities

Crash phase
Human factors: use of restraints; impairment
Vehicle and equipment factors: occupant restraints; other safety devices; crash-protective design
Environmental factors: crash-protective roadside objects

Post-crash phase
Human factors: first-aid skills; access to medics
Vehicle and equipment factors: ease of access; fire risk
Environmental factors: rescue facilities; congestion
BMI cut-off points for underweight, overweight, and four levels of obesity.
SAAL (seasonal awareness alert letter): measles and dengue stages are measured
in terms of DEWS and DMIS.
A child suffering from polio brought to a clinic for vaccination: give the same
vaccines as at stage 0, after looking for a BCG scar.
Carriers
There is increasing evidence that patients, especially the elderly, with several chronic diseases
and an elevated BMI may demonstrate lower all-cause and cardiovascular mortality
compared with patients of normal weight.
Obesity paradox in overweight and obese patients with coronary heart disease
Ten years ago, Gruberg and coworkers observed better outcomes in overweight and obese
patients with coronary heart disease undergoing percutaneous coronary intervention compared
with their normal-weight counterparts. This unexpected phenomenon was described as an
"obesity paradox" (2). Normal-weight patients had a higher incidence of major in-hospital
complications, including cardiac death. Moreover, at 1-year follow-up, significantly higher
mortality rates were observed in low- and normal-weight patients compared with obese and
overweight patients.
Obesity paradox in patients with chronic heart failure
Investigations carried out in patients with chronic heart failure show a paradoxical decrease in
mortality in those with higher BMI. This observation has been referred to as "reverse
epidemiology" (10). Subsequently, several other studies in patients with both chronic and acute
heart failure confirmed lower mortality in those with higher BMI
Enlarged muscle mass and better nutritional status
Body composition
Thromboxane production
Endothelial progenitor cells: Less coronary atherosclerosis demonstrated in
autopsies of severely obese subjects is another example of the obesity paradox
Increased muscle strength
Ghrelin sensitivity
Ghrelin is a gastric peptide hormone, initially described as the endogenous ligand for the growth
hormone secretagogue receptor. Ghrelin stimulates growth hormone release and food intake,
promotes positive energy balance/weight gain, and improves cardiac contractility
Cardiorespiratory fitness: obese subjects with increased cardiorespiratory
fitness have lower all-cause mortality and a lower risk of cardiovascular and metabolic
diseases and certain cancers
Conclusions
Despite the fact that obesity is recognized as a major risk factor in the development of
cardiovascular diseases and diabetes, a higher BMI may be associated with lower mortality and
a better outcome in several chronic diseases and health circumstances. This protective effect of
obesity has been described as the "obesity paradox" or "reverse epidemiology". However, it
should be emphasized that the BMI is a crude and flawed anthropometric biomarker that does
not take into account the fat mass/fat-free mass ratio, nutritional status, cardiorespiratory fitness,
body fat distribution, or other factors affecting health risks and the patient's mortality
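Since the discussion turns on BMI as a crude biomarker, here is a minimal sketch of the calculation with the standard WHO adult cut-offs. The cut-offs are general reference values, not figures stated in this text:

```python
# BMI = weight (kg) / height (m) squared, with WHO adult categories:
# underweight < 18.5, normal 18.5-24.9, overweight 25-29.9, obese >= 30.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def bmi_category(value):
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal weight"
    if value < 30:
        return "overweight"
    return "obese"

b = bmi(85, 1.75)
print(round(b, 1), bmi_category(b))   # 27.8 overweight
```

Note that the formula uses only weight and height, which is exactly why it cannot distinguish fat mass from fat-free mass, the limitation the conclusion above highlights.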
Bioterrorism
According to the U.S. Centers for Disease Control and Prevention a bioterrorism attack is the
deliberate release of viruses, bacteria, toxins or other harmful agents used to cause illness or
death in people, animals, or plants. These agents are typically found in nature, but it is possible
that they could be mutated or altered to increase their ability to cause disease, make them
resistant to current medicines, or to increase their ability to be spread into the environment.
Biological agents can be spread through the air, water, or in food. Terrorists tend to use
biological agents because they are extremely difficult to detect and do not cause illness for
several hours to several days. Some bioterrorism agents, like the smallpox virus, can be spread
from person to person and some, like anthrax, cannot.[1]
Bioterrorism is an attractive weapon because biological agents are relatively easy and
inexpensive to obtain, can be easily disseminated, and can cause widespread fear and panic
beyond the actual physical damage.[2] Military leaders, however, have learned that, as a military
asset, bioterrorism has some important limitations; it is difficult to employ a bioweapon in a way
that only the enemy is affected and not friendly forces. A biological weapon is useful
to terrorists mainly as a method of creating mass panic and disruption to a state or a country.
However, technologists such as Bill Joy have warned of the potential power which genetic
engineering might place in the hands of future bio-terrorists.[3]
The use of agents that do not cause harm to humans but disrupt the economy has been
discussed. A highly relevant pathogen in this context is the foot-and-mouth
disease (FMD) virus, which is capable of causing widespread economic damage and public
concern (as witnessed in the 2001 and 2007 FMD outbreaks in the UK), whilst having almost no
capacity to infect humans.
Category A
Tularemia
Tularemia, or rabbit fever,[9] has a very low fatality rate if treated, but can
severely incapacitate. The disease is caused by the bacterium Francisella
tularensis and can be contracted through contact with the fur of infected animals,
inhalation, ingestion of contaminated water, or insect bites. Francisella
tularensis is very infectious: a small number (10-50 or so organisms) can cause disease.
Botulinum toxin
The neurotoxin botulinum toxin is one of the most potent toxins known.[19]
Plague
The disease has a history of use in biological warfare dating back many
centuries, and is considered a threat due to its ease of culture and ability to
remain in circulation among local rodents for a long period of time. The
weaponized threat comes mainly in the form of pneumonic plague (infection
by inhalation).[23] It was the disease that caused the Black Death in medieval
Europe.
Viral hemorrhagic fevers[24]
Category B
Category B agents are moderately easy to disseminate and have low mortality rates.
Staphylococcal enterotoxin B
Category C
Category C agents are emerging pathogens that might be engineered for mass dissemination
because of their availability, ease of production and dissemination, high mortality rate, or ability
to cause a major health impact.
Nipah virus
Hantavirus
SARS
HIV/AIDS
Anthrax
Anthrax can enter the human body through the intestines (ingestion), lungs (inhalation), or skin
(cutaneous) and causes distinct clinical symptoms based on its site of entry. In general, an
infected human will be quarantined, although anthrax does not usually spread from an infected
human to a noninfected human. However, if the disease is fatal, the body and its mass of
anthrax bacilli become a potential source of infection to others, and special precautions should
be used to prevent further contamination. Inhalational anthrax, if left untreated until obvious
symptoms occur, may be fatal.
Anthrax can be contracted in laboratory accidents or by handling infected animals or their wool
or hides. It has also been used as a biological warfare agent and by terrorists to intentionally
infect people, as exemplified by the 2001 anthrax attacks.
Occupational exposure to infected animals or their products (such as skin, wool, and
meat) is the usual pathway of exposure for humans
Situation analysis
Situation analysis refers to a collection of methods that managers use to analyze
an organization's internal and external environment to understand the
organization's capabilities, customers, and business environment.[1] The situation
analysis consists of several methods of analysis: the 5C analysis, SWOT
analysis, and Porter's five forces analysis.[2] A marketing plan is created to guide
businesses in communicating the benefits of their products to the needs of
potential customers. The situation analysis is the second step in the marketing plan
and is a critical step in establishing a long-term relationship with customers.
The situation analysis looks at both the macro-environmental factors that affect
many firms within the environment and the micro-environmental factors that
specifically affect the firm. The purpose of the situation analysis is to show a
company its organizational and product position, as well as the overall
survival of the business, within the environment. Companies must be able to
summarize opportunities and problems within the environment so they can
understand their capabilities within the market.
SWOT
A SWOT analysis is another method under the situation analysis that examines
the Strengths and Weaknesses of a company (internal environment) as well as
the Opportunities and Threats within the market (external environment). A SWOT analysis looks
at both current and future situations, where they analyze their current strengths and weaknesses
while looking for future opportunities and threats. The goal is to build on strengths as much as
possible while reducing weaknesses. A future threat can be a potential weakness while a future
opportunity can be a potential strength.[13] This analysis helps a company come up with a plan
that keeps it prepared for a number of potential scenarios.
5C analysis
The 5C analysis is considered the most useful and common way to analyze the market
environment, because of the extensive information it provides.[5]
Company
The company analysis involves evaluation of the company's objectives, strategy, and capabilities.
These indicate to an organization the strength of the business model, whether there are areas for
improvement, and how well an organization fits the external environment.[6]
Goals & Objectives: An analysis of the mission of the business, the industry
of the business, and the stated goals required to achieve the mission.
Competitors
The competitor analysis takes into consideration the competitors' position within the industry and
the potential threat they may pose to other businesses. The main purpose of the competitor analysis
is for businesses to analyze a competitor's current and potential nature and capabilities so they
can prepare against competition. The competitor analysis looks at the following criteria:
Customers
Customer analysis can be vast and complicated. Some of the important areas that a company
analyzes include:[5]
Demographics
Collaborators
Collaborators are useful for businesses as they allow for an increase in the creation of ideas,
as well as an increase in the likelihood of gaining more business opportunities.[7] The
types of collaborators include:
Distributors: Distributors are important as they are the 'holding areas for
inventory'. Distributors can help manage manufacturer relationships as
well as handle vendor relationships.[10]
Businesses must be able to identify whether the collaborator has the capabilities needed
to help run the business as well as an analysis on the level of commitment needed for a
collaborator-business relationship.[6]
Climate
To fully understand the business climate and environment, many factors that can affect
the business must be researched and understood. An analysis of the climate is also
known as a PEST analysis. The types of climate/environment factors firms have to
analyse are:
Hawthorne effect
The Hawthorne effect refers to the tendency of some people to work
harder and perform better when they are participants in an experiment. Individuals
may change their behavior because of the attention they are receiving from researchers
rather than because of any manipulation of independent variables.
Placebo effect
A placebo is a simulated or otherwise medically ineffectual treatment for a disease
or other medical condition intended to deceive the recipient. Sometimes patients
given a placebo treatment will have a perceived or actual improvement in a medical
condition, a phenomenon commonly called the placebo effect.
Simpson's paradox
Simpson's paradox, or the Yule-Simpson effect, is a paradox in probability and statistics in
which a trend that appears in different groups of data disappears when these groups are
combined. It is sometimes given the impersonal title of reversal paradox or amalgamation
paradox.[1]
This result is often encountered in social-science and medical-science statistics,[2] and is
particularly confounding when frequency data are unduly given causal interpretations.
[3]
Simpson's paradox disappears when causal relations are brought into consideration. Many
statisticians believe that the mainstream public should be informed of counter-intuitive
results in statistics such as Simpson's paradox.
Figure: Simpson's paradox for continuous data. A positive trend appears for two separate
groups (blue and red), while a negative trend (black, dashed) appears when the data are
combined.
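The reversal can be seen in a few lines of Python; the numbers below are the classic kidney-stone treatment data often used to illustrate the paradox (Charig et al., 1986), and the variable names are mine:

```python
# Numeric illustration of Simpson's paradox using the classic kidney-stone
# treatment data (Charig et al., 1986): (successes, patients) per group.
groups = {
    "small stones": {"A": (81, 87),   "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, total):
    return successes / total

# Within each stone-size group, treatment A has the higher success rate...
for g in groups.values():
    assert rate(*g["A"]) > rate(*g["B"])

# ...yet pooled over both groups, B comes out ahead, because A was
# given mostly to the harder large-stone cases.
a_succ = sum(g["A"][0] for g in groups.values())
a_tot = sum(g["A"][1] for g in groups.values())
b_succ = sum(g["B"][0] for g in groups.values())
b_tot = sum(g["B"][1] for g in groups.values())
print(round(a_succ / a_tot, 2), round(b_succ / b_tot, 2))  # 0.78 0.83
```

Conditioning on stone size (the confounder) reverses the pooled comparison, which is exactly the point the text makes about causal relations.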
Health economics: cost-effectiveness analysis measures outcomes in dollars per life-year gained
or dollars per quality-adjusted life-year (QALY) gained; outcomes may also be measured in
terms of DALYs averted.
Cost-benefit analysis: outcomes are measured in monetary terms, which is generally not
suitable for health.
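The dollars-per-outcome idea reduces to an incremental cost-effectiveness ratio; a minimal sketch, with invented cost and QALY figures, follows:

```python
# A minimal sketch of the cost-effectiveness arithmetic the notes refer
# to; all dollar and QALY figures are invented for illustration.

def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra dollars per unit of
    health outcome (life-year, QALY, or averted DALY) gained."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# A hypothetical new therapy costs $12,000 more per patient and adds 0.4 QALYs.
print(round(icer(20000, 8000, 1.9, 1.5)))  # 30000 dollars per QALY gained
```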
MDG
PERT analysis
Personality traits
Psychosomatic disorders
Psychosocial disorders
FCPS
Ob gene
The human obese (OB) gene: RNA expression pattern and mapping on the physical,
cytogenetic, and genetic maps of chromosome 7.
The recently identified mouse obese (ob) gene apparently encodes a secreted protein that may
function in the signaling pathway of adipose tissue. Mutations in the mouse ob gene are
associated with the early development of gross obesity.
Furthermore, ob gene expression was found to be increased in human obesity, which led
to the postulation of the concept of leptin resistance.
Leptin, the "satiety hormone", is a hormone made by fat cells that regulates
the amount of fat stored in the body. It does this by adjusting both the sensation of hunger
and energy expenditure.
Ghrelin, the "hunger hormone", is a peptide produced by ghrelin cells in the gastrointestinal
tract[1][2] which functions as a neuropeptide in the central nervous system.[3] Beyond regulating
hunger, ghrelin also plays a significant role in regulating the distribution and rate of use of
energy.
When the stomach is empty, ghrelin is secreted. When the stomach is stretched, secretion stops.
It acts on hypothalamic brain cells both to increase hunger and to increase gastric acid secretion
and gastrointestinal motility to prepare the body for food intake.[5]
The receptor for ghrelin is found on the same cells in the brain as the receptor for leptin, the
satiety hormone that has opposite effects from ghrelin.[6] Ghrelin also plays an important role in
regulating reward perception in dopamine neurons that link the ventral tegmental area to
the nucleus accumbens[7] (a site that plays a role in processing sexual desire, reward,
and reinforcement, and in developing addictions) through its colocalized receptors and
interaction with dopamine and acetylcholine.[3][8] Ghrelin is encoded by the GHRL gene and is
produced from the presumed cleavage of the prepropeptide ghrelin/obestatin. Full-length
preproghrelin is homologous to promotilin and both are members of the motilin family.
Hidden hunger
The Global Hunger Index (GHI) is a multidimensional statistical tool used to describe
the state of countries' hunger situation. The GHI measures progress and failures in the
global fight against hunger.[1] The GHI is updated once a year.
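For orientation, the pre-2015 GHI score was computed as the unweighted average of three component indicators; a minimal sketch (with invented input figures) follows:

```python
# A sketch of the pre-2015 GHI formula: the unweighted mean of three
# component indicators, each expressed as a percentage. The input
# figures in the example call are invented for illustration.

def ghi(undernourished_pct, child_underweight_pct, under5_mortality_pct):
    """Global Hunger Index score (2006-2014 methodology): the simple
    average of the three component percentages."""
    return (undernourished_pct + child_underweight_pct + under5_mortality_pct) / 3

print(round(ghi(19.9, 30.3, 8.2), 1))  # 19.5, an "alarming"-range score
```

Since 2015 the index has used a revised formula with child wasting and stunting as separate components, so this sketch applies only to the editions discussed here.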
The Index was adopted and further developed by the International Food Policy Research
Institute (IFPRI) and was first published in 2006 with Welthungerhilfe, a
German non-profit organization (NGO). In 2007, the Irish NGO Concern
Worldwide joined the group as co-publisher.[2][3][4][5][6][7][8][9][10]
The 2014 GHI was calculated for 120 developing countries and countries in transition, 55
of which had a serious or worse hunger situation.[11]
In addition to the ranking, the Global Hunger Index report focuses each year on a main
topic: in 2014 the thematic focus was hidden hunger, a form of undernutrition
characterized by micronutrient deficiencies.[11]
In specific life phases, food supplements can be used; in particular, the addition of vitamin A
leads to a better survival rate for children.[GHI2014 5]
Generally, the situation concerning hidden hunger can only be improved when many measures
intermesh. In addition to the direct measures described above, this includes education and
empowerment of women, creation of better sanitation and adequate hygiene, access to clean
drinking water, and access to health services.
Simply eating until satisfied is not enough. Every woman, man, and child has the right not only
to a culturally adequate amount of food, but also to adequate quality to cover their food needs.
The international community has to ensure that hidden hunger is not overlooked and that the
post-2015 agenda includes a comprehensive goal for the elimination of hunger and malnutrition
of any type.[GHI2014 6]
Focus of the GHI 2013: Resilience to build food and nutrition security
Many of the countries in which the hunger situation is "alarming" or "extremely alarming" are
particularly prone to crises: in the African Sahel, people experience yearly droughts. On top of
that, they have to deal with violent conflict and natural calamities. At the same time, the global
context becomes more and more volatile (financial and economic crises, food price crises).
The inability to cope with these crises destroys many development successes
that had been achieved over the years. In addition, people have even fewer resources to withstand
the next shock or crisis. 2.6 billion people in the world live on less than 2 USD per day. For
them, a sickness in the family, crop failure after a drought, or the interruption of remittances from
relatives who live abroad can set in motion a downward spiral from which they cannot free
themselves on their own.
It is therefore not enough to support people in emergencies and, once the crisis is over, to start
longer-term development efforts. Instead, emergency and development assistance has to be
conceptualized with the goal of increasing the resilience of poor people against these shocks.
The Global Hunger Index differentiates three coping strategies. The lower the intensity of the
crisis, the fewer resources have to be used to cope with the consequences:[GHI2013 1]
Absorption: Skills or resources, which are used to reduce the impact of a crisis
without changing the lifestyle (e.g. selling some livestock)
Transformation: Fundamental changes in behavior have to be made
(e.g. nomadic tribes become sedentary and become
farmers because they cannot keep their herds).
Scientific monitoring and evaluation of measures and programs with the goal
of increasing resilience.
Improvement of the nutrition especially of mothers and children, through nutrition-specific
and nutrition-sensitive interventions, to avoid short-term crises leading to
nutrition-related problems later in life or across generations.
Focus of the GHI 2012: Pressures on land, water and energy resources
Increasingly, hunger is related to how we use land, water, and energy. The growing scarcity of
these resources puts more and more pressure on food security. Several factors contribute to an
increasing shortage of natural resources:[GHI2012 1]
1. Demographic change: The world population is expected to be over 9 billion by
2050. Additionally, more and more people live in cities. Urban populations
feed themselves differently than inhabitants of rural areas; they tend to
consume less staple foods and more meat and dairy products.
2. Higher income and non-sustainable use of resources: As the global economy
grows, wealthy people consume more food and goods, which have to be
produced with a lot of water and energy. They can afford to be inefficient
and wasteful in their use of resources.
3. Bad policies and weak institutions: When policies, for example energy policy,
are not tested for the consequences they have on the availability of land and
water, failures can result. An example is the biofuel policies of
industrialized countries: as corn and sugar are increasingly used for the
production of fuels, there is less land and water for the production of food.
Signs of an increasing scarcity of energy, land, and water resources include: growing
prices for food and energy, a massive increase in large-scale investment in arable land (so-called
land grabbing), increasing degradation of arable land because of overly intensive land use (for
example, increasing desertification), an increasing number of people who live in regions with
falling groundwater levels, and the loss of arable land as a consequence of climate change.
The analysis of the global conditions led the authors of the GHI 2012 to recommend several
policy actions:[14]
Support for approaches that lead to a more efficient use of land, water, and
energy along the whole value chain
Use of so-called biofuels, promoted by high oil prices, subsidies in the
United States (over one third of the corn harvest of 2009 and 2010, respectively),
and quotas for biofuel in gasoline in the European Union, India, and elsewhere.
Volatility and price increases are worsened, according to the report, by the concentration of
staple food production in a few countries and export restrictions on these goods, the historical
low of worldwide cereal reserves, and the lack of timely information on food products, reserves,
and price developments. This lack of information in particular can lead to overreactions in the markets.
Moreover, seasonal limitations on production possibilities, limited land for agricultural
production, limited access to fertilizers and water, and the increasing demand resulting
from population growth put pressure on food prices.
According to the Global Hunger Index 2011, price trends have especially harsh consequences for
poor and under-nourished people, because they are not able to react to price spikes and price
changes. Reactions to these developments can include: reduced calorie intake, no longer
sending children to school, riskier income generation such as prostitution, criminality, or
searching landfills, and sending away household members who can no longer be fed. In
addition, the report sees an all-time high in the instability and unpredictability of food prices,
which, after decades of slight decrease, increasingly show price spikes (strong, short-term
increases).[GHI2011 1][GHI2011 2]
At the national level, food-importing countries (those with a negative food trade balance)
are especially affected by the changing prices.
Focus of the GHI 2010: Early Childhood Under-nutrition
Under-nutrition among children has reached terrible levels. About 195 million children under the
age of five in the developing world, about one in three children, are too small and thus
underdeveloped. Nearly one in four children under age five (129 million) is underweight, and
one in 10 is severely underweight. The problem of child under-nutrition is concentrated in a few
countries and regions, with more than 90 percent of stunted children living in Africa and Asia.
42% of the world's undernourished children live in India alone.
The evidence presented in the report[21][22] shows that the window of opportunity for improving
nutrition spans the 1,000 days between conception and a child's second birthday (that is, the
period from -9 to +24 months). Children who do not receive adequate nutrition during this
period face increased risks of lifelong damage, including poor physical and
cognitive development, poor health, and even early death. The consequences of malnutrition
occurring after 24 months of a child's life are, by contrast, largely reversible.[6]
Gomez
In 1956, Gómez and Galvan studied factors associated with death in a group of malnourished
children in a hospital in Mexico City, Mexico, and defined categories of malnutrition: first,
second, and third degree.[29] The degrees were based on weight below a specified percentage of
median weight for age.[30] The risk of death increases with increasing degree of
malnutrition.[29]
An adaptation of Gomez's original classification is still used today. While it provides a way to
compare malnutrition within and between populations, the classification has been criticized for
being "arbitrary" and for not considering overweight as a form of malnutrition. Also, height
alone may not be the best indicator of malnutrition; children who are born prematurely may be
considered short for their age even if they have good nutrition.[31]
Degree of PEM (% of median weight for age)
Normal: 90%-100%
First degree (mild): 75%-89%
Second degree (moderate): 60%-74%
Third degree (severe): <60%
Waterlow
John Conrad Waterlow established a new classification for malnutrition.[32] Instead of using just
weight-for-age measurements, the classification established by Waterlow combines
weight-for-height (indicating acute episodes of malnutrition) with height-for-age to show the
stunting that results from chronic malnutrition.[33] One advantage of the Waterlow classification
over the Gomez classification is that weight for height can be examined even if ages are not
known.[32]
Degree of PEM          Height-for-age    Weight-for-height
Normal: Grade 0        >95%              >90%
Mild: Grade I          87.5%-95%         80%-90%
Moderate: Grade II     80%-87.5%         70%-80%
Severe: Grade III      <80%              <70%
These classifications of malnutrition are commonly used with some modifications by WHO.[30]
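The cutoffs in the two classifications can be sketched as a small classifier; the grade labels follow the tables above, and the function names and example inputs are mine:

```python
# A small sketch of the two malnutrition classification schemes above;
# cutoffs are percentages of the reference median from the tables.

def gomez(weight_for_age_pct):
    """Gomez classification from weight-for-age (% of median)."""
    if weight_for_age_pct >= 90:
        return "normal"
    if weight_for_age_pct >= 75:
        return "first degree (mild)"
    if weight_for_age_pct >= 60:
        return "second degree (moderate)"
    return "third degree (severe)"

def waterlow(weight_for_height_pct, height_for_age_pct):
    """Waterlow grades: wasting (acute) and stunting (chronic) scored separately."""
    def grade(pct, cutoffs):
        # Grade 0 above the first cutoff, grade 3 below the last.
        for g, cut in enumerate(cutoffs):
            if pct > cut:
                return g
        return len(cutoffs)
    wasting = grade(weight_for_height_pct, (90, 80, 70))
    stunting = grade(height_for_age_pct, (95, 87.5, 80))
    return wasting, stunting

print(gomez(72))          # second degree (moderate)
print(waterlow(85, 96))   # (1, 0): mild wasting, normal height-for-age
```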
Strategic planning
21.
Statistics assumes that your data points (the numbers in your list) are clustered around some central
value. The "box" in the box-and-whisker plot contains, and thereby highlights, the middle half of these
data points.
To create a box-and-whisker plot, you start by ordering your data (putting the values in numerical order), if
they aren't ordered already. Then you find the median of your data. The median divides the data into two
halves. To divide the data into quarters, you then find the medians of these two halves. Note: If you have
an even number of values, so the first median was the average of the two middle values, then you include
the middle values in your sub-median computations. If you have an odd number of values, so the first
median was an actual data point, then you do not include that value in your sub-median computations.
That is, to find the sub-medians, you're only looking at the values that haven't yet been used.
You have three points: the first middle point (the median), and the middle points of the two halves (what I
call the "sub-medians"). These three points divide the entire data set into quarters, called "quartiles". The
top point of each quartile has a name, being a " Q" followed by the number of the quarter. So the top point
of the first quarter of the data points is "Q1", and so forth. Note that Q1 is also the middle number for the
first half of the list, Q2 is also the middle number for the whole list, Q3 is the middle number for the second
half of the list, and Q4 is the largest value in the list.
Once you have these three points, Q1, Q2, and Q3, you have all you need in order to draw a simple box-and-whisker plot. Here's an example of how it works.
4.3, 5.1, 3.9, 4.5, 4.4, 4.9, 5.0, 4.7, 4.1, 4.6, 4.4, 4.3, 4.8, 4.4, 4.2, 4.5, 4.4
3.9, 4.1, 4.2, 4.3, 4.3, 4.4, 4.4, 4.4, 4.4, 4.5, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1
The first number I need is the median of the entire set. Since there are seventeen values in this
list, I need the ninth value:
3.9, 4.1, 4.2, 4.3, 4.3, 4.4, 4.4, 4.4, 4.4, 4.5, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1
The median is
Q2 = 4.4.
The next two numbers I need are the medians of the two halves. Since I used the " 4.4" in the
middle of the list, I can't re-use it, so my two remaining data sets are:
3.9, 4.1, 4.2, 4.3, 4.3, 4.4, 4.4, 4.4 and 4.5, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1
The first half has eight values, so the median is the average of the middle two: Q1 = (4.3 + 4.3)/2 = 4.3.
Likewise, the median of the second half is Q3 = (4.7 + 4.8)/2 = 4.75. The box of the plot spans
Q1 to Q3 with a line drawn at the median Q2 = 4.4, and the whiskers extend out to the smallest
value (3.9) and the largest value (5.1).
By the way, box-and-whisker plots don't have to be drawn horizontally as I did above; they can be vertical,
too.
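The quartile procedure described above can be sketched in a few lines of Python using the same seventeen data points; the helper function is a plain implementation of the exclude-the-median convention the text uses:

```python
# Quartiles for the box-and-whisker example above, excluding the median
# from the two halves, as the text describes.

def median(values):
    """Median of an already-sorted list."""
    n = len(values)
    mid = n // 2
    if n % 2 == 1:
        return values[mid]
    return (values[mid - 1] + values[mid]) / 2

data = sorted([4.3, 5.1, 3.9, 4.5, 4.4, 4.9, 5.0, 4.7, 4.1,
               4.6, 4.4, 4.3, 4.8, 4.4, 4.2, 4.5, 4.4])

q2 = median(data)                      # the ninth of seventeen values
lower = data[:len(data) // 2]          # values below the median
upper = data[len(data) // 2 + 1:]      # values above it (the median itself excluded)
q1, q3 = median(lower), median(upper)

print(q1, q2, q3)  # 4.3 4.4 4.75
```

Plotting libraries may use interpolation-based quantile conventions instead, so their Q1 and Q3 can differ slightly from these hand-computed values.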
22.
In logic, an argument is valid if and only if its conclusion is logically entailed by its
premises. A formula is valid if and only if it is true under every interpretation, and
an argument form (or schema) is valid if and only if every argument of that logical
form is valid.
Inter-rater reliability is useful because human observers will not necessarily interpret answers
the same way; raters may disagree as to how well certain responses or material demonstrate
knowledge of the construct or skill being assessed.
Example: Inter-rater reliability might be employed when different judges are
evaluating the degree to which art portfolios meet certain standards. Inter-rater
reliability is especially useful when judgments can be considered relatively
subjective. Thus, the use of this type of reliability would probably be more likely
when evaluating artwork as opposed to math problems.
Internal consistency reliability is a measure of reliability used to evaluate the degree
to which different test items that probe the same construct produce similar results.
Example: When designing a rubric for history, one could assess students' knowledge
across the discipline. If the measure can provide information that students are lacking
knowledge in a certain area, for instance the Civil Rights Movement, then that
assessment tool is providing meaningful information that can be used to improve the
course or program requirements.
5. Sampling Validity (similar to content validity) ensures that the measure covers the
broad range of areas within the concept under study. Not everything can be covered,
so items need to be sampled from all of the domains. This may need to be completed
using a panel of experts to ensure that the content area is adequately sampled.
Additionally, a panel can help limit expert bias (i.e. a test reflecting what an individual
personally feels are the most important or relevant areas).
Example: When designing an assessment of learning in the theatre department, it
would not be sufficient to cover only issues related to acting. Other areas of theatre,
such as lighting, sound, and the functions of stage managers, should all be included. The
assessment should reflect the content area in its entirety.
What are some ways to improve validity?
5. Make sure your goals and objectives are clearly defined and operationalized.
Expectations of students should be written down.
6. Match your assessment measure to your goals and objectives. Additionally, have
the test reviewed by faculty at other schools to obtain feedback from an outside
party who is less invested in the instrument.
7. Get students involved; have the students look over the assessment for
troublesome wording, or other difficulties.
8. If possible, compare your measure with other measures, or data that may be
available
23.
Also, a negatively skewed curve can consist entirely of positive numbers, and a positively skewed
curve can consist entirely of negative numbers. "Positive" and "negative" indicate the direction
of the curve's tail and the direction that numbers are moving on the x-axis.
24.
25.
the numbers on the x-axis under the tail are less than the numbers under
the hump; negatively skewed curves do NOT necessarily have negative
numbers
26.
27. Positive skew: points in positive direction
28.
the numbers on the x-axis under the tail are more than the numbers
under the hump; positively skewed curves do NOT necessarily have positive
numbers
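The sign convention can be checked numerically; below is a minimal sketch computing the sample skewness (third standardized moment) for two made-up data sets:

```python
# A quick numeric check of the tail-direction rule above: skewness is
# positive when the long tail points toward larger x-values, negative
# when it points the other way. Both data sets are invented examples.

def skewness(xs):
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n   # second central moment
    m3 = sum((x - mean) ** 3 for x in xs) / n   # third central moment
    return m3 / m2 ** 1.5

right_tailed = [1, 2, 2, 3, 3, 3, 9]       # long tail toward larger values
left_tailed = [-x for x in right_tailed]   # mirror image: tail toward smaller values

print(skewness(right_tailed) > 0, skewness(left_tailed) < 0)  # True True
```

Note that the left-tailed set here happens to contain negative numbers only because it mirrors the first set; shifting every value up by a constant changes neither the sign nor the size of the skewness.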
What is a Correlation?
A population pyramid, also called an age pyramid or age picture diagram, is a graphical
illustration that shows the distribution of various age groups in a population (typically that of
a country or region of the world), which forms the shape of a pyramid when the population is
growing.[1] It is also used in ecology to determine the overall age distribution of a population;
an indication of the reproductive capabilities and likelihood of the continuation of a species.
It typically consists of two back-to-back bar graphs, with the population plotted on the X-axis
and age on the Y-axis, one showing the number of males and one showing females in a
particular population in five-year age groups (also called cohorts). Males are conventionally
shown on the left and females on the right, and they may be measured by raw number or as a
percentage of the total population.
Population pyramids are often viewed as the most effective way to graphically depict the age
and sex distribution of a population, partly because of the very clear image these pyramids
present.[2]
A great deal of information about the population broken down by age and sex can be read
from a population pyramid, and this can shed light on the extent of development and other
aspects of the population. A population pyramid also tells how many people of each age
range live in the area. There tend to be more females than males in the older age groups, due
to females' longer life expectancy.
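As a rough illustration of the back-to-back layout described above, here is a toy text rendering; the cohort counts are invented, and real pyramids are of course drawn graphically:

```python
# A toy text rendering of a population pyramid's back-to-back layout.
# Cohort counts (in thousands) are made-up illustrative numbers.
cohorts = [("0-4", 52, 50), ("5-9", 48, 47), ("10-14", 45, 44),
           ("15-19", 40, 41), ("20-24", 33, 34)]  # (age group, males, females)

lines = []
# Oldest cohort on top; males drawn to the left, females to the right.
for age, males, females in reversed(cohorts):
    left = ("#" * (males // 5)).rjust(12)
    right = "#" * (females // 5)
    lines.append(f"{left} {age:>5} {right}")

print("\n".join(lines))
```

A growing population produces the widening-toward-the-bottom shape shown here; a shrinking one narrows at the base instead.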
A baby boom is any period marked by a greatly increased birth rate. This
demographic phenomenon is usually ascribed within certain geographical
bounds. People born during such a period are often called baby boomers;
however, some experts distinguish between those born during such
demographic baby booms and those who identify with the overlapping
cultural generations. Conventional wisdom states that baby booms signify
good times and periods of general economic growth and stability;
however, in circumstances where baby booms lead to very large numbers of
children per family unit, as in lower-income regions of the
world, the outcome may be different. The best-known baby boom occurred in the
years immediately following World War II.
35.
Total population momentum can be decomposed into two components: stable momentum reflects
differences between the stable age distribution implied by the population's birth and death rates
and the stationary age distribution; and nonstable momentum measures deviations between the
observed population age distribution and the implied stable age distribution.
To understand the usefulness of stable and nonstable momentum, consider the case of a
population with unchanging vital rates. Over time, stable momentum remains constant as both
the stable age distribution and the stationary age distribution are unchanging. In this sense we
may consider stable momentum to be the permanent component of population momentum; it
persists as long as mortality and fertility do not change. In contrast, nonstable momentum in this
population gradually becomes weaker and eventually vanishes as the population's age
distribution conforms to the stable age distribution. In this sense we may consider nonstable
momentum to be the temporary or transitory component of population momentum. Of course,
most populations exhibit some year-to-year fluctuation in fertility and mortality, so in empirical
analyses we commonly observe concurrent changes in both the permanent and the temporary
components of momentum. Nevertheless, how overall momentum is composed and what part is
contributed by stable versus nonstable momentum have implications for future population
growth or decline.1
In showing patterns over time in total population momentum, stable momentum, and nonstable
momentum, we pursue three distinct ends. First and most simply, we trace how momentum
dynamics have historically unfolded, not only across demographic transitions but also in the
midst of fertility swings and other demographic cycles. This is a straightforward task that has not
yet been undertaken. Second, we demonstrate some previously ignored empirical regularities of
the demographic transition, as it has occurred around the globe and at various times over the last
three centuries. Third, although population momentum is by definition a static measure, our
results suggest that momentum can also be considered a dynamic process. Across the
demographic transition, momentum typically increases and then decreases as survival first
improves and fertility rates later fall. This dynamic view of momentum is further supported by
trends in stable and nonstable momentum. A change in stable momentum induced by a change in
fertility will initiate a demographic chain reaction that affects nonstable momentum both
immediately and subsequently.
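The basic momentum effect can be sketched with a toy Leslie-matrix projection (this is an illustration, not the decomposition used by the authors); all rates below are invented, chosen so the net reproduction rate is exactly 1:

```python
import numpy as np

# A sketch of population momentum with a hypothetical 3-age-class,
# female-only Leslie model; all rates are invented for illustration.
p1, p2 = 0.9, 0.9               # survival between successive age classes
f2 = 0.5                        # fertility of the middle age class
f3 = (1 - f2 * p1) / (p1 * p2)  # chosen so the net reproduction rate is exactly 1

L = np.array([[0.0, f2, f3],
              [p1, 0.0, 0.0],
              [0.0, p2, 0.0]])

# A young-heavy age structure, as after a period of above-replacement fertility.
n = np.array([100.0, 60.0, 30.0])

totals = []
for _ in range(60):
    totals.append(n.sum())
    n = L @ n

# Even though fertility is exactly at replacement, the total keeps
# growing for a while before levelling off: that residual growth,
# driven entirely by the age distribution, is population momentum.
print(round(totals[0], 1), round(totals[-1], 1))
```

Starting instead from the stable age distribution would hold the total constant from the outset, which is the sense in which momentum lives in the gap between the observed and stable age structures.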
In Europe's demographic transition, mortality declined while fertility fell
more slowly, so that year after year, for decades if not centuries, the number of births exceeded
the number of deaths by a substantial margin. In 1700 the population of Europe was an estimated
30 million. By 1900 it had more than quadrupled to 127 million (Livi-Bacci 2007). Europeans
also migrated to North America and Australia by the millions. The population continued to grow
despite this out-migration, since most of Europe did not experience substantial declines in the
number of children per woman until sometime in the late nineteenth or early twentieth century.
Fertility reached replacement in many parts of Europe around the mid-twentieth century, and
since then has fallen well below replacement in much of the continent.
Demographic transition has occurred much faster in the developing world than it did in Europe.
In 1950–55, for example, life expectancy at birth in India was about 38 years for both sexes
combined; 15 years later, life expectancy was nearly 47 (United Nations 2009b). Over the same
period in Kenya, life expectancy rose from 42 to 51 years, while in Mexico it rose from 51 to 60
(United Nations 2009b). This rapid mortality decline, brought about in part by technology
adopted from the West and accompanied initially by little or no decrease in fertility, led not to the
long period of steady population expansion that Europe experienced starting more than a century
earlier, but rather to rapid population growth, especially in the third quarter of the twentieth
century. Following World War II, developing countries grew at an average annual rate of more
than 2 percent, with some countries posting yearly population gains of more than 3 or even 4
percent, as in Ivory Coast, Jordan, and Libya (United Nations 2009b).
Unlike in Europe, rapid fertility decline often followed within just a few decades. Although much
of sub-Saharan Africa still has fertility well above replacement, most of the rest of the world
appears to have completed the demographic transition. Today every country in East Asia has subreplacement fertility, and even in countries like Bangladesh and Indonesia, once the cause of
much hand-wringing among population-control advocates (Connelly 2008: 11, 305), fertility is
now barely above replacement (United Nations 2009b). The concept of a demographic transition
therefore describes developing-world experience about as well as it seems to have portrayed
earlier developed-world experience. The major differences between these two situations are the
speed of mortality decline, the speed of fertility decline, and, as has received most attention both
then and now, the rate of population growth. Today it is very unusual to see the kind of
population doubling times, in some cases less than 20 years, that were so alarming to
policymakers and scholars throughout the 1960s and 1970s (Ehrlich 1968).
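The doubling times mentioned above follow from the exponential-growth approximation T = ln 2 / r (the basis of the familiar "rule of 70"). A quick check of the growth rates cited in the text, assuming continuous compounding:

```python
import math

def doubling_time(annual_growth_rate):
    """Years for a population to double at a constant exponential growth rate."""
    return math.log(2) / annual_growth_rate

# Growth rates mentioned in the text: 2%, 3%, and 4% per year.
for r in (0.02, 0.03, 0.04):
    print(f"{r:.0%} growth -> doubling in {doubling_time(r):.0f} years")
```

At 2 percent a year a population doubles in about 35 years; at 4 percent it doubles in roughly 17, which is the sub-20-year doubling the text describes.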
36.
37.
ROC curves: why are they used, how is AUC determined, and what does
1-specificity mean?
ROC analysis provides tools to select possibly optimal models and to discard suboptimal
ones independently from (and prior to specifying) the cost context or the class
distribution. ROC analysis is related in a direct and natural way to cost/benefit analysis of
diagnostic decision making.
The ROC curve was first developed by electrical engineers and radar engineers during
World War II for detecting enemy objects in battlefields and was soon introduced to
psychology to account for perceptual detection of stimuli. ROC analysis since then has
been used in medicine, radiology, biometrics, and other areas for many decades and is
increasingly used in machine learning and data mining research.
The ROC is also known as a relative operating characteristic curve, because it is a
comparison of two operating characteristics (TPR and FPR) as the criterion changes [1].
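As a concrete illustration of how an ROC curve is traced out, the sketch below uses toy scores and labels (not from any real study): it sweeps a decision threshold over classifier scores, records TPR against FPR (i.e. 1 − specificity) at each threshold, and estimates AUC with the trapezoidal rule.

```python
# Toy ROC sketch: made-up classifier scores and true labels (1 = diseased).
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   1,    0,   0,   1,   0,   0]

def roc_points(scores, labels):
    """Sweep the threshold from high to low, collecting (FPR, TPR) points."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    thresholds = sorted(set(scores), reverse=True)
    for t in [float("inf")] + thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        pts.append((fp / neg, tp / pos))    # (FPR = 1 - specificity, TPR)
    return pts

def auc(points):
    """Area under the ROC curve by the trapezoidal rule."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

pts = roc_points(scores, labels)
print(f"AUC = {auc(pts):.2f}")
```

The curve runs from (0, 0) at the strictest threshold to (1, 1) at the most lenient; in practice one would normally use a library routine such as scikit-learn's `roc_curve` and `roc_auc_score` rather than hand-rolling this.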
38.
Type I Error
A Type I error is often referred to as a 'false positive', and is the process of incorrectly
rejecting the null hypothesis in favor of the alternative. In the case above, the null hypothesis
refers to the natural state of things, stating that the patient is not HIV positive.
The alternative hypothesis states that the patient does carry the virus. A Type I error would
indicate that the patient has the virus when they do not, a false rejection of the null.
Type II Error
A Type II error is the opposite of a Type I error and is the false acceptance of the null
hypothesis. A Type II error, also known as a false negative, would imply that the patient is
free of HIV when they are not, a dangerous diagnosis.
In most fields of science, a Type II error is not seen as being as problematic as a Type I
error. With a Type II error, a chance to reject the null hypothesis was lost, and no conclusion is
inferred from a non-rejected null. A Type I error is more serious, because you have
wrongly rejected the null hypothesis.
Medicine, however, is one exception; telling a patient that they are free of disease, when
they are not, is potentially dangerous.
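For a concrete sense of how the two error rates trade off, the sketch below works through a one-sided z-test with hypothetical numbers (H0: mean 0 vs. H1: mean 1, known sd 1, n = 9). The Type I rate is fixed at the chosen significance level by the critical value, while the Type II rate depends on how far the alternative sits from the null.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# One-sided z-test, hypothetical numbers: H0 mean = 0 vs H1 mean = 1,
# known sd = 1, n = 9 observations.
mu1, sd, n = 1.0, 1.0, 9
z_crit = 1.645                       # critical value for alpha = 0.05, one-sided

# Type I error (false positive): rejecting a true H0 -- fixed at alpha by design.
type_I = 1.0 - norm_cdf(z_crit)

# Type II error (false negative): failing to reject H0 when H1 is true.
z_shift = mu1 * math.sqrt(n) / sd    # distance of H1 from H0 in z units
type_II = norm_cdf(z_crit - z_shift)

print(f"Type I ~ {type_I:.3f}, Type II ~ {type_II:.3f}, power ~ {1 - type_II:.3f}")
```

With these made-up numbers the test keeps the false-positive rate at about 5% while the false-negative rate is about 9%, illustrating the medical concern above: tightening the Type I rate further would push the Type II rate up.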
12.
13.
analysis techniques?
14.
What is surveillance, what are the types, what are criteria for
conducting surveillance?
human health consequences continues to occur throughout India.
December 2004 marked the twentieth anniversary of the massive
toxic gas leak from Union Carbide Corporation's chemical plant in
Bhopal in the state of Madhya Pradesh, India that killed more
to close the plant and prepare it for sale in July 1984 due to decreased profitability
[3]. When no ready buyer was found, UCIL made plans to dismantle key
production units of the facility for shipment to another developing country. In the
meantime, the facility continued to operate with safety equipment and procedures
far below the standards found in its sister plant in Institute, West Virginia. The
local government was aware of safety problems but was reticent to place heavy
industrial safety and pollution control burdens on the struggling industry because
it feared the economic effects of the loss of such a large employer [3].
At 11.00 PM on December 2, 1984, while most of the one million residents of
Bhopal slept, an operator at the plant noticed a small leak of methyl isocyanate
(MIC) gas and increasing pressure inside a storage tank. The vent-gas scrubber, a
safety device designed to neutralize toxic discharge from the MIC system, had
been turned off three weeks prior [3]. Apparently a faulty valve had allowed one
ton of water for cleaning internal pipes to mix with forty tons of MIC [1]. A 30 ton
refrigeration unit that normally served as a safety component to cool the MIC
storage tank had been drained of its coolant for use in another part of the plant [3].
Pressure and heat from the vigorous exothermic reaction in the tank continued to
build. The gas flare safety system was out of action and had been for three months.
At around 1.00 AM, December 3, loud rumbling reverberated around the plant as a
safety valve gave way sending a plume of MIC gas into the early morning air [4].
Within hours, the streets of Bhopal were littered with human corpses and the
carcasses of buffaloes, cows, dogs and birds. An estimated 3,800 people died
immediately, mostly in the poor slum colony adjacent to the UCC plant [1,5].
Local hospitals were soon overwhelmed with the injured, a crisis further
compounded by a lack of knowledge of exactly what gas was involved and what
its effects were [1]. It became one of the worst chemical disasters in history and
the name Bhopal became synonymous with industrial catastrophe [5].
Estimates of the number of people killed in the first few days by the plume from
the UCC plant run as high as 10,000, with 15,000 to 20,000 premature deaths
reportedly occurring in the
epidemiological studies conducted soon after the
were taken out of the U.S. legal system under the ruling of the
presiding American judge and placed entirely under Indian
jurisdiction, much to the detriment of the injured parties.
In a settlement mediated by the Indian Supreme Court, UCC accepted moral
responsibility and agreed to pay $470 million to the Indian government to be
distributed to claimants as a full and final settlement. The figure was partly based
on the disputed claim that only 3000 people died and 102,000 suffered permanent
disabilities [9]. Upon announcing this settlement, shares of UCC rose $2 per share
or 7% in value [1]. Had compensation in Bhopal been paid at the same rate that
asbestosis victims were being awarded in US courts by defendants, including UCC,
which mined asbestos from 1963 to 1985, the liability would have been greater
than the $10 billion the company was worth and insured for in 1984 [10]. By the
end of October 2003, according to the Bhopal Gas Tragedy Relief and
Rehabilitation Department, compensation had been awarded to 554,895 people for
injuries received and 15,310 survivors of those killed. The average amount to
families of the dead was $2,200 [9].
At every turn, UCC has attempted to manipulate, obfuscate and withhold scientific
data to the detriment of victims. Even to this date, the company has not stated
exactly what was in the toxic cloud that enveloped the city on that December night
[8]. When MIC is exposed to 200°C heat, it forms degraded MIC that contains the
more deadly hydrogen cyanide (HCN). There was clear evidence that the storage
tank temperature did reach this level in the disaster. The cherry-red color of the blood
and viscera of some victims was characteristic of acute cyanide poisoning [11].
Moreover, many responded well to administration of sodium thiosulfate, an
effective therapy for cyanide poisoning but not MIC exposure [11]. UCC initially
recommended use of sodium thiosulfate but withdrew the statement later
prompting suggestions that it attempted to cover up evidence of HCN in the gas
leak. The presence of HCN was vigorously denied by UCC and was a point of
conjecture among researchers [8,11-13].
As further insult, UCC discontinued operation at its Bhopal plant following the
disaster but failed to clean up the industrial site completely. The plant continues to
leak several toxic chemicals and heavy metals that have found their way into local
aquifers. Dangerously contaminated water has now been added to the legacy left
by the company for the people of Bhopal [1,14].
Lessons learned
The events in Bhopal revealed that expanding industrialization in developing
countries without concurrent evolution in safety regulations could have
catastrophic consequences [4]. The disaster demonstrated that seemingly local
problems of industrial hazards and toxic contamination are often tied to global
market dynamics. UCC's Sevin production plant was built in Madhya Pradesh not
to avoid environmental regulations in the U.S. but to exploit the large and growing
Indian pesticide market. However the manner in which the project was executed
suggests the existence of a double standard for multinational corporations
operating in developing countries [1]. Enforceable uniform international operating
regulations for hazardous industries would have provided a mechanism for
significantly improved safety in Bhopal. Even without enforcement,
international standards could provide norms for measuring performance of
individual companies engaged in hazardous activities such as the manufacture of
pesticides and other toxic chemicals in India [15]. National governments and
international agencies should focus on widely applicable techniques for corporate
responsibility and accident prevention as much in the developing world context as
in advanced industrial nations [16]. Specifically, prevention should include risk
reduction in plant location and design and safety legislation [17].
Local governments clearly cannot allow industrial facilities to be situated within
urban areas, regardless of the evolution of land use over time. Industry and
government need to bring proper financial support to local communities so they
can provide medical and other necessary services to reduce morbidity, mortality
and material loss in the case of industrial accidents.
Public health infrastructure was very weak in Bhopal in 1984. Tap water was
available for only a few hours a day and was of very poor quality. With no
functioning sewage system, untreated human waste was dumped into two nearby
lakes, one a source of drinking water. The city had four major hospitals but there
was a shortage of physicians and hospital beds. There was also no mass casualty
emergency response system in place in the city [3]. Existing public health
infrastructure needs to be taken into account when hazardous industries choose
sites for manufacturing plants. Future management of industrial development
requires that appropriate resources be devoted to advance planning before any
disaster occurs [18]. Communities that do not possess infrastructure and technical
expertise to respond adequately to such industrial accidents should not be chosen
as sites for hazardous industry.
Since 1984
Following the events of December 3, 1984, environmental awareness and activism
in India increased significantly. The Environment Protection Act was passed in
1986, creating the Ministry of Environment and Forests (MoEF) and strengthening
India's commitment to the environment. Under the new act, the MoEF was given
overall responsibility for administering and enforcing environmental laws and
policies. It established the importance of integrating environmental strategies into
all industrial development plans for the country. However, despite greater
government commitment to protect public health, forests, and wildlife, policies
geared to developing the country's economy have taken precedence in the last 20
years [19].
India has undergone tremendous economic growth in the two decades since the
Bhopal disaster. Gross domestic product (GDP) per capita has increased from
$1,000 in 1984 to $2,900 in 2004 and it continues to grow at a rate of over 8% per
year [20]. Rapid industrial development has contributed greatly to economic
growth but there has been significant cost in environmental degradation and
increased public health risks. Since abatement efforts consume a large portion of
India's GDP, MoEF faces an uphill battle as it tries to fulfill its mandate of
reducing industrial pollution [19]. Heavy reliance on coal-fired power plants and
poor enforcement of vehicle emission laws have resulted from economic concerns
taking precedence over environmental protection [19].
With the industrial growth since 1984, there has been an increase in small scale
industries (SSIs) that are clustered about major urban areas in India. There are
generally less stringent rules for the treatment of waste produced by SSIs due to
less waste generation within each individual industry. This has allowed SSIs to
dispose of untreated wastewater into drainage systems that flow directly into
rivers. New Delhi's Yamuna River is illustrative. Dangerously high levels of heavy
metals such as lead, cobalt, cadmium, chrome, nickel and zinc have been detected
in this river which is a major supply of potable water to India's capital thus posing
a potential health risk to the people living there and areas downstream [21].
Land pollution due to uncontrolled disposal of industrial solid and hazardous
waste is also a problem throughout India. With rapid industrialization, the
generation of industrial solid and hazardous waste has increased appreciably and
the environmental impact is significant [22].
India relaxed its controls on foreign investment in order to accede to WTO
rules and thereby attract an increasing flow of capital. In the process, a number of
environmental regulations are being rolled back as growing foreign investments
continue to roll in. The Indian experience is comparable to that of a number of
developing countries that are experiencing the environmental impacts of structural
adjustment. Exploitation and export of natural resources has accelerated on the
subcontinent. Prohibitions against locating industrial facilities in ecologically
sensitive zones have been eliminated while conservation zones are being stripped
of their status so that pesticide, cement and bauxite mines can be built [23]. Heavy
reliance on coal-fired power plants and poor enforcement of vehicle emission laws
are other consequences of economic concerns taking precedence over
environmental protection [19].
UCC has shrunk to one sixth of its size since the Bhopal disaster in an effort to
restructure and divest itself. By doing so, the company avoided a hostile takeover,
placed a significant portion of UCC's assets out of legal reach of the victims and
gave its shareholders and top executives bountiful profits [1]. The company still
operates under the ownership of Dow Chemicals and still states on its website that
the Bhopal disaster was "caused by deliberate sabotage" [28].
Some positive changes were seen following the Bhopal disaster. The British
chemical company, ICI, whose Indian subsidiary manufactured pesticides,
increased attention to health, safety and environmental issues following the events
of December 1984. The subsidiary now spends 30–40% of its capital
expenditures on environment-related projects. However, it still does not adhere
to standards as strict as those of its parent company in the UK [24].
The US chemical giant DuPont learned its lesson of Bhopal in a different way. The
company attempted for a decade to export a nylon plant from Richmond, VA to
Goa, India. In its early negotiations with the Indian government, DuPont had
sought and won a remarkable clause in its investment agreement that absolved it
from all liabilities in case of an accident. But the people of Goa were not willing to
acquiesce while an important ecological site was cleared for a heavy polluting
industry. After nearly a decade of protesting by Goa's residents, DuPont was
forced to scuttle plans there. Chennai was the next proposed site for the plastics
plant. The state government there made significantly greater demands on DuPont
for concessions on public health and environmental protection. Eventually, these
plans were also aborted due to what the company called "financial concerns" [29].
Conclusion
The tragedy of Bhopal continues to be a warning sign at once ignored and heeded.
Bhopal and its aftermath were a warning that the path to industrialization, for
developing countries in general and India in particular, is fraught with human,
environmental and economic perils. Some moves by the Indian government,
including the formation of the MoEF, have served to offer some protection of the
public's health from the harmful practices of local and multinational heavy
industry, and grassroots organizations have also played a part in opposing
rampant development. The Indian economy is growing at a tremendous rate but at
significant cost in environmental health and public safety as large and small
companies throughout the subcontinent continue to pollute. Far more remains to
be done for public health in the context of industrialization to show that the
lessons of the countless thousands dead in Bhopal have truly been heeded.