
Copyright © University Grants Commission (UGC), All Rights Reserved. Developed & maintained by KITINFINET, 2010.
Margin of safety (financial)
From Wikipedia, the free encyclopedia

Margin of safety (or safety margin) is the difference between the intrinsic value of a stock and its
market price.
In break-even analysis (accounting), margin of safety is the amount by which output or sales can
fall before a business reaches its break-even point.

[edit] History
Benjamin Graham and David Dodd, founders of value investing, coined the term margin of safety in
their seminal 1934 book, Security Analysis. The term is also described in Graham's The Intelligent
Investor. Graham said that "the margin of safety is always dependent on the price paid" (The
Intelligent Investor, Benjamin Graham, HarperBusiness Essentials, 2003).
[edit] Application to investing
Using margin of safety, one should buy a stock only when it is worth more than its price on the
market. This is the central thesis of the value investing philosophy, which espouses preservation of
capital as its first rule of investing. Benjamin Graham suggested looking at unpopular or neglected
companies with low P/E and P/B ratios. One should also analyze financial statements and footnotes
to understand whether companies have hidden assets (e.g., investments in other companies) that
may be unnoticed by the market.
The margin of safety protects the investor from both poor decisions and downturns in the market.
Because fair value is difficult to accurately compute, the margin of safety gives the investor room
for error.
A common interpretation of margin of safety is how far below intrinsic value one pays for a
stock. For high-quality issues, value investors typically want to pay 90 cents for a dollar of value
(90% of intrinsic value), while more speculative stocks should be purchased at up to a 50 percent
discount to intrinsic value (pay 50 cents for a dollar).[1]
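The discount rule of thumb above reduces to a small calculation. A minimal sketch in Python, where the function name, the quality labels, and the $40 intrinsic value are illustrative assumptions, not part of the source:

```python
def max_purchase_price(intrinsic_value, quality="high"):
    """Maximum price a value investor would pay under the margin-of-safety
    rule of thumb: a 10% discount to intrinsic value for high-quality
    issues, a 50% discount for speculative ones."""
    discount = 0.10 if quality == "high" else 0.50
    return intrinsic_value * (1 - discount)

# Hypothetical stock with an estimated intrinsic value of $40
print(max_purchase_price(40, quality="high"))         # pay 90 cents on the dollar
print(max_purchase_price(40, quality="speculative"))  # pay 50 cents on the dollar
```

The discount itself is the investor's room for error: the wider it is, the more the intrinsic-value estimate can be wrong before the purchase loses money.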
[edit] Application to accounting
In accounting parlance, margin of safety is the difference between the expected (or actual) sales
level and the break-even sales level. It can be expressed in equation form as follows:
Margin of Safety = Expected (or Actual) Sales Level (quantity or dollar amount) − Break-even
Sales Level (quantity or dollar amount)
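The accounting definition translates directly into arithmetic. A minimal sketch in Python; the sales figures below are invented for illustration:

```python
def margin_of_safety(actual_sales, breakeven_sales):
    """Margin of safety = expected (or actual) sales - break-even sales.
    Both arguments must be in the same units (quantity or dollars)."""
    return actual_sales - breakeven_sales

# Hypothetical figures: $250,000 of expected sales, $180,000 break-even sales
print(margin_of_safety(250_000, 180_000))  # sales can fall $70,000 before a loss
```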
[edit] Rare book
Margin of Safety is also a rare and out-of-print book written by Seth Klarman, founder of
The Baupost Group, a value-investing-focused hedge fund based in Boston, MA. Copies of
the book are considered something of a collector's item and can regularly be found on eBay or
Amazon.com in the $1,200–$2,000 price range.

[edit] References
• Graham, Benjamin; Dodd, David. Security Analysis: The Classic 1934 Edition.
McGraw-Hill, 1996. ISBN 0-07-024496-0.
• http://www.businessweek.com/magazine/content/06_32/b3996085.htm
• http://www.worldfinancialblog.com/investing/ben-grahams-margin-of-safety/26/

Break even analysis


From Wikipedia, the free encyclopedia

The break-even point for a product is the point where total revenue received equals the total costs
associated with the sale of the product (TR = TC).[1] A break-even point is typically calculated so
that a business can determine whether it would be profitable to sell a proposed product, as opposed
to modifying an existing product so that it becomes lucrative. Break-even analysis can also be used
to analyze the potential profitability of an expenditure in a sales-based business.
break-even point (for output) = fixed cost / contribution per unit
contribution (p.u.) = selling price (p.u.) − variable cost (p.u.)
break-even point (for sales) = (fixed cost / contribution (p.u.)) × selling price (p.u.)
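The three formulas above can be checked with a short Python sketch; the price, cost, and fixed-cost figures are invented:

```python
def contribution_per_unit(selling_price, variable_cost):
    # contribution (p.u.) = selling price (p.u.) - variable cost (p.u.)
    return selling_price - variable_cost

def breakeven_units(fixed_costs, selling_price, variable_cost):
    # break-even point (for output) = fixed cost / contribution per unit
    return fixed_costs / contribution_per_unit(selling_price, variable_cost)

def breakeven_sales_value(fixed_costs, selling_price, variable_cost):
    # break-even point (for sales) = (fixed cost / contribution p.u.) * selling price p.u.
    return breakeven_units(fixed_costs, selling_price, variable_cost) * selling_price

# Hypothetical product: sells for 10.00, variable cost 6.00, fixed costs 8000
print(breakeven_units(8000, 10.0, 6.0))        # units needed to break even
print(breakeven_sales_value(8000, 10.0, 6.0))  # sales revenue needed to break even
```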

Contents
• 1 Margin of Safety
• 2 In unit sales
• 3 Internet research
• 4 Limitations
• 5 References
• 6 Bibliography
• 7 External links

[edit] Margin of Safety


Margin of safety represents the strength of the business. It tells a business how far current output
or sales are above (or below) the break-even point, and hence how much sales can fall before a
loss is made.[2]

margin of safety = current output − break-even output


margin of safety % = (current output − break-even output) / current output × 100
If the P/V ratio is given, then margin of safety = profit / P/V ratio.
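Both forms of the formula are easy to verify numerically. A minimal sketch in Python; the output levels, profit, and P/V ratio below are invented figures:

```python
def margin_of_safety_pct(current_output, breakeven_output):
    # margin of safety % = (current output - breakeven output) / current output * 100
    return (current_output - breakeven_output) / current_output * 100

def margin_of_safety_from_pv(profit, pv_ratio):
    # When the P/V (contribution-to-sales) ratio is known:
    # margin of safety = profit / P/V ratio
    return profit / pv_ratio

# Hypothetical: 5000 units currently sold, break-even at 4000 units
print(margin_of_safety_pct(5000, 4000))       # sales can fall 20% before a loss
# Hypothetical: profit of 30,000 and a P/V ratio of 0.4
print(margin_of_safety_from_pv(30_000, 0.4))  # margin of safety in sales dollars
```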
[edit] In unit sales
If the product can be sold in a larger quantity than occurs at the break even point, then the firm
will make a profit; below this point, the firm will make a loss. Break-even quantity is calculated
by:
Total fixed costs / (selling price - average variable costs).
Explanation - in the denominator, "price minus average variable cost" is the
variable profit per unit, or contribution margin of each unit that is sold.
This relationship is derived from the profit equation: Profit = Revenues -
Costs where Revenues = (selling price * quantity of product) and Costs =
(average variable costs * quantity) + total fixed costs.
Therefore, Profit = (selling price * quantity) - (average variable costs *
quantity + total fixed costs).
Solving for Quantity of product at the breakeven point when Profit equals
zero, the quantity of product at break even is Total fixed costs / (selling price
- average variable costs).
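The profit equation in the derivation above can be turned into a quick numeric check. A sketch in Python with invented figures (price 5.00, average variable cost 3.00, fixed costs 400):

```python
def profit(quantity, selling_price, avg_variable_cost, fixed_costs):
    # Profit = (selling price * quantity) - (average variable costs * quantity + total fixed costs)
    return selling_price * quantity - (avg_variable_cost * quantity + fixed_costs)

# Break-even quantity = Total fixed costs / (selling price - average variable costs)
be_quantity = 400 / (5.00 - 3.00)
print(be_quantity)                   # 200 units
print(profit(200, 5.00, 3.00, 400))  # profit is exactly zero at the break-even quantity
print(profit(250, 5.00, 3.00, 400))  # above break-even, the firm makes a profit
```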

Firms may still decide not to sell low-profit products, for example those not fitting well into their
sales mix. Firms may also sell products that lose money - as a loss leader, to offer a complete
line of products, etc. But if a product does not break even, or a potential product looks like it
clearly will not sell better than the break even point, then the firm will not sell, or will stop
selling, that product.
An example:
• Assume we are selling a product for £2 each.
• Assume that the variable cost associated with producing and selling the
product is 60p.
• Assume that the fixed cost related to the product (the basic costs that are
incurred in operating the business even if no product is produced) is £1000.
• In this example, the firm would have to sell 1000 / (2.00 − 0.60) ≈ 714.3,
i.e. 715 units, to break even.
At the break-even point, Total Revenue = Total Expenses (costs):
TR = TC = Fixed cost + Variable cost
Selling Price × Quantity = Fixed cost + Quantity × Variable cost (cost/unit)
SP × Q = FC + Q × VC
Q × (SP − VC) = FC
Break Even = FC / (SP − VC)
where FC is Fixed Cost, SP is Selling Price and VC is Variable Cost
[edit] Internet research
By inserting different prices into the formula, you will obtain a number of break-even points, one
for each possible price charged. If the firm changes the selling price for its product from $2 to
$2.30 in the example above, then it would have to sell only 1000 / (2.30 − 0.60) ≈ 588.2,
i.e. 589 units, to break even, rather than 715.
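Both break-even quantities quoted above (715 and 589 units) follow from Break Even = FC / (SP − VC), rounding up because the firm must sell whole units to cover its costs. A quick check in Python:

```python
import math

def breakeven_quantity(fixed_cost, selling_price, variable_cost):
    # Break Even = FC / (SP - VC), rounded up to the next whole unit
    return math.ceil(fixed_cost / (selling_price - variable_cost))

print(breakeven_quantity(1000, 2.00, 0.60))  # 715 units (1000 / 1.40 = 714.29)
print(breakeven_quantity(1000, 2.30, 0.60))  # 589 units (1000 / 1.70 = 588.24)
```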

To make the results clearer, they can be graphed. To do this, you draw the total cost curve (TC in
the diagram) which shows the total cost associated with each possible level of output, the fixed
cost curve (FC) which shows the costs that do not vary with output level, and finally the various
total revenue lines (R1, R2, and R3) which show the total amount of revenue received at each
output level, given the price you will be charging.
The break-even points (A, B, C) are the points of intersection between the total cost curve (TC)
and a total revenue curve (R1, R2, or R3). The break-even quantity at each selling price can be
read off the horizontal axis, and the break-even revenue at each selling price can be read off the
vertical axis. The total cost, total revenue, and fixed cost curves can each be constructed with
simple formulae. For example, the total revenue curve is simply the product of selling price
times quantity for each output quantity. The data used in these formulae come either from
accounting records or from various estimation techniques such as regression analysis.
[edit] Limitations
• Break-even analysis is only a supply side (i.e. costs only) analysis, as it tells
you nothing about what sales are actually likely to be for the product at these
various prices.
• It assumes that fixed costs (FC) are constant. Although this is true in the
short run, an increase in the scale of production is likely to cause fixed costs
to rise.
• It assumes average variable costs are constant per unit of output, at least in
the range of likely quantities of sales. (i.e. linearity)
• It assumes that the quantity of goods produced is equal to the quantity of
goods sold (i.e., there is no change in the quantity of goods held in inventory
at the beginning of the period and the quantity of goods held in inventory at
the end of the period).
• In multi-product companies, it assumes that the relative proportions of each
product sold and produced are constant (i.e., the sales mix is constant).


Instrumental values are values such as ambition, courage, persistence, and politeness.
They are not ends in themselves but means of achieving terminal values.
An instrumental value is a basic conviction that a specific mode of conduct is preferable to an
opposite or converse mode of conduct. Instrumental values differ from terminal values, which are
convictions about desirable end states of existence rather than about the means of attaining them.

Emotional intelligence
From Wikipedia, the free encyclopedia

Emotional intelligence (EI) describes the ability, capacity, skill or, in the case of the trait EI
model, a self-perceived grand ability to identify, assess, manage and control the emotions of one's
self, of others, and of groups.[1] Different models have been proposed for the definition of EI and
disagreement exists as to how the term should be used.[2] Despite these disagreements, which are
often highly technical, the ability EI and trait EI models (but not the mixed models) enjoy
support in the literature and have successful applications in different domains.
The earliest roots of emotional intelligence can be traced to Darwin's work on the importance of
emotional expression for survival and adaptation.[3] In the 1900s, even though traditional
definitions of intelligence emphasized cognitive aspects such as memory and problem-solving,
several influential researchers in the intelligence field of study had begun to recognize the
importance of the non-cognitive aspects. For instance, as early as 1920, E.L. Thorndike used the
term social intelligence to describe the skill of understanding and managing other people.[4]
Similarly, in 1940 David Wechsler described the influence of non-intellective factors on intelligent
behavior, and further argued that our models of intelligence would not be complete until we can
adequately describe these factors.[3] In 1983, Howard Gardner's Frames of Mind: The Theory of
Multiple Intelligences[5] introduced the idea of multiple intelligences which included both
Interpersonal intelligence (the capacity to understand the intentions, motivations and desires of
other people) and Intrapersonal intelligence (the capacity to understand oneself, to appreciate
one's feelings, fears and motivations). In Gardner's view, traditional types of intelligence, such as
IQ, fail to fully explain cognitive ability.[6] Thus, even though the names given to the concept
varied, there was a common belief that traditional definitions of intelligence are lacking in ability
to fully explain performance outcomes.
The first use of the term "emotional intelligence" is usually attributed to Wayne Payne's doctoral
thesis, A Study of Emotion: Developing Emotional Intelligence from 1985.[7] However, prior to
this, the term "emotional intelligence" had appeared in Leuner (1966). Greenspan (1989) also put
forward an EI model, followed by Salovey and Mayer (1990), and Goleman (1995). The
distinction between trait emotional intelligence and ability emotional intelligence was introduced
in 2000.[8]
As a result of the growing acknowledgement by professionals of the importance and relevance of
emotions to work outcomes,[9] the research on the topic continued to gain momentum, but it
wasn't until the publication of Daniel Goleman's best seller Emotional Intelligence: Why It Can
Matter More Than IQ that the term became widely popularized.[10] Nancy Gibbs' 1995 Time
magazine article highlighted Goleman's book and was the first in a string of mainstream media
interest in EI.
[edit] Defining emotional intelligence
Substantial disagreement exists regarding the definition of EI, with respect to both terminology
and operationalizations. There has been much confusion regarding the exact meaning of this
construct. The definitions are so varied, and the field is growing so rapidly, that researchers are
constantly amending even their own definitions of the construct. At the present time, there are
three main models of EI:
• Ability EI models
• Mixed models of EI
• Trait EI model

[edit] The ability-based model


Salovey and Mayer's conception of EI strives to define EI within the confines of the standard
criteria for a new intelligence. Following their continuing research, their initial definition of EI
was revised to "The ability to perceive emotion, integrate emotion to facilitate thought,
understand emotions and to regulate emotions to promote personal growth."
The ability based model views emotions as useful sources of information that help one to make
sense of and navigate the social environment.[11] The model proposes that individuals vary in their
ability to process information of an emotional nature and in their ability to relate emotional
processing to a wider cognition. This ability is seen to manifest itself in certain adaptive
behaviors. The model claims that EI includes four types of abilities:
1. Perceiving emotions – the ability to detect and decipher emotions in faces,
pictures, voices, and cultural artifacts—including the ability to identify one's
own emotions. Perceiving emotions represents a basic aspect of emotional
intelligence, as it makes all other processing of emotional information
possible.
2. Using emotions – the ability to harness emotions to facilitate various
cognitive activities, such as thinking and problem solving. The emotionally
intelligent person can capitalize fully upon his or her changing moods in order
to best fit the task at hand.
3. Understanding emotions – the ability to comprehend emotion language and
to appreciate complicated relationships among emotions. For example,
understanding emotions encompasses the ability to be sensitive to slight
variations between emotions, and the ability to recognize and describe how
emotions evolve over time.
4. Managing emotions – the ability to regulate emotions in both ourselves and in
others. Therefore, the emotionally intelligent person can harness emotions,
even negative ones, and manage them to achieve intended goals.
The ability-based model has been criticized in the research for lacking face and predictive
validity in the workplace.[12]
[edit] Measurement of the ability-based model
Different models of EI have led to the development of various instruments for the assessment of
the construct. While some of these measures may overlap, most researchers agree that they tap
slightly different constructs. The current measure of Mayer and Salovey's model of EI, the
Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT) is based on a series of emotion-
based problem-solving items.[11] Consistent with the model's claim of EI as a type of intelligence,
the test is modeled on ability-based IQ tests. By testing a person's abilities on each of the four
branches of emotional intelligence, it generates scores for each of the branches as well as a total
score.
Central to the four-branch model is the idea that EI requires attunement to social norms.
Therefore, the MSCEIT is scored in a consensus fashion, with higher scores indicating higher
overlap between an individual's answers and those provided by a worldwide sample of
respondents. The MSCEIT can also be expert-scored, so that the amount of overlap is calculated
between an individual's answers and those provided by a group of 21 emotion researchers.[11]
Although promoted as an ability test, the MSCEIT is unlike standard IQ tests in that its
items do not have objectively correct responses. Among other problems, the consensus scoring
criterion means that it is impossible to create items (questions) that only a minority of
respondents can solve, because, by definition, responses are deemed emotionally "intelligent"
only if the majority of the sample has endorsed them. This and other similar problems have led
cognitive ability experts to question the definition of EI as a genuine intelligence.
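Consensus scoring as described above can be sketched in a few lines. This is an illustrative reconstruction of the general idea, not MHS's actual scoring algorithm, and the norm-sample data are invented:

```python
from collections import Counter

def consensus_scores(norm_sample_answers):
    """For one item, map each answer option to the proportion of the norm
    sample that chose it; that proportion is the credit a test-taker
    receives for giving that answer."""
    counts = Counter(norm_sample_answers)
    total = len(norm_sample_answers)
    return {answer: n / total for answer, n in counts.items()}

# Invented norm sample of 100 respondents for a single item with options a-d
norm_sample = ["a"] * 70 + ["b"] * 20 + ["c"] * 8 + ["d"] * 2
credit = consensus_scores(norm_sample)
print(credit["a"])  # the majority answer earns the most credit
print(credit["d"])  # a rare answer earns almost none
```

Because credit tracks the majority, no item can be answered "correctly" by only a small minority of respondents, which is precisely the criticism raised above.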
In a study by Føllesdal,[13] the MSCEIT test results of 111 business leaders were compared with
how their employees described their leader. It was found that there were no correlations between
a leader's test results and how he or she was rated by the employees, with regard to empathy,
ability to motivate, and leader effectiveness. Føllesdal also criticized the Canadian company
Multi-Health Systems, which administers the MSCEIT test. The test contains 141 questions but
it was found after publishing the test that 19 of these did not give the expected answers. This has
led Multi-Health Systems to remove answers to these 19 questions before scoring, but without
stating this officially.
[edit] Mixed models of EI
The model introduced by Daniel Goleman[14] focuses on EI as a wide array of competencies and
skills that drive leadership performance. Goleman's model outlines four main EI constructs:[1]
1. Self-awareness – the ability to read one's emotions and recognize their
impact while using gut feelings to guide decisions.
2. Self-management – involves controlling one's emotions and impulses and
adapting to changing circumstances.
3. Social awareness – the ability to sense, understand, and react to others'
emotions while comprehending social networks.
4. Relationship management – the ability to inspire, influence, and develop others
while managing conflict.
Goleman includes a set of emotional competencies within each construct of EI. Emotional
competencies are not innate talents, but rather learned capabilities that must be worked on and
can be developed to achieve outstanding performance.[1] Goleman posits that individuals are born
with a general emotional intelligence that determines their potential for learning emotional
competencies.[15] Goleman's model of EI has been criticized in the research literature as mere
"pop psychology" (Mayer, Roberts, & Barsade, 2008).
[edit] Measurement of the Emotional Competencies (Goleman) model
Two measurement tools are based on the Goleman model:
1. The Emotional Competency Inventory (ECI), which was created in 1999, and
the Emotional and Social Competency Inventory (ESCI), which was created in
2007.
2. The Emotional Intelligence Appraisal, which was created in 2001 and which
can be taken as a self-report or 360-degree assessment.[16]

[edit] The Bar-On model of Emotional-Social Intelligence (ESI)


Bar-On[3] defines emotional intelligence as being concerned with effectively understanding
oneself and others, relating well to people, and adapting to and coping with the immediate
surroundings to be more successful in dealing with environmental demands.[17] Bar-On posits that
EI develops over time and that it can be improved through training, programming, and therapy.[3]
Bar-On hypothesizes that those individuals with higher than average EQs are in general more
successful in meeting environmental demands and pressures. He also notes that a deficiency in
EI can mean a lack of success and the existence of emotional problems. Problems in coping with
one's environment are thought, by Bar-On, to be especially common among those individuals
lacking in the subscales of reality testing, problem solving, stress tolerance, and impulse control.
In general, Bar-On considers emotional intelligence and cognitive intelligence to contribute
equally to a person's general intelligence, which then offers an indication of one's potential to
succeed in life.[3] However, doubts have been expressed about this model in the research
literature (in particular about the validity of self-report as an index of emotional intelligence) and
in scientific settings, it is being replaced by the trait EI model discussed below.[18]
[edit] Measurement of the ESI Model
The Bar-On Emotional Quotient Inventory (EQ-i) is a self-report measure of EI developed as a
measure of emotionally and socially competent behavior that provides an estimate of one's
emotional and social intelligence. The EQ-i is not meant to measure personality traits or
cognitive capacity, but rather the mental ability to be successful in dealing with environmental
demands and pressures.[3] One hundred and thirty three items (questions or factors) are used to
obtain a Total EQ (Total Emotional Quotient) and to produce five composite scale scores,
corresponding to the five main components of the Bar-On model. A limitation of this model is
that it claims to measure some kind of ability through self-report items (for a discussion, see
Matthews, Zeidner, & Roberts, 2001). The EQ-i has been found to be highly susceptible to
faking (Day & Carroll, 2008; Grubb & McDaniel, 2007).
[edit] The trait EI model
Petrides and colleagues[19] (see also Petrides, 2009) proposed a conceptual distinction between
the ability based model and a trait based model of EI.[8] Trait EI is "a constellation of emotional
self-perceptions located at the lower levels of personality". In lay terms, trait EI refers to an
individual's self-perceptions of their emotional abilities. This definition of EI encompasses
behavioral dispositions and self-perceived abilities and is measured by self-report, as opposed to
the ability-based model, which refers to actual abilities that have proven highly resistant to
scientific measurement. Trait EI should be investigated within a personality framework.[20] An
alternative label for the same construct is trait emotional self-efficacy.
The trait EI model is general and subsumes the Goleman and Bar-On models discussed above.
The conceptualization of EI as a personality trait leads to a construct that lies outside the
taxonomy of human cognitive ability. This is an important distinction in as much as it bears
directly on the operationalization of the construct and the theories and hypotheses that are
formulated about it.[8]
[edit] Measurement of the trait EI model
There are many self-report measures of EI,[21] including the EQ-i, the Swinburne University
Emotional Intelligence Test (SUEIT), the Schutte Self-Report Emotional Intelligence Test
(SSEIT), and a measure by Tett, Fox, and Wang (2005). From the perspective of the trait EI model,
none of these assess intelligence, abilities, or skills (as their authors often claim), but rather they
are limited measures of trait emotional intelligence (Petrides, Furnham, & Mavroveli, 2007). One
of the more comprehensive and widely researched measures of this construct is the Trait
Emotional Intelligence Questionnaire (TEIQue), which is an open-access measure that was
specifically designed to measure the construct comprehensively and is currently available in
many languages.
The TEIQue provides an operationalization for Petrides and colleagues' model that
conceptualizes EI in terms of personality.[22] The test encompasses 15 subscales organized under
four factors: Well-Being, Self-Control, Emotionality, and Sociability. The psychometric properties of
the TEIQue were investigated in a study on a French-speaking population, where it was reported
that TEIQue scores were globally normally distributed and reliable.[23]
The researchers also found TEIQue scores were unrelated to nonverbal reasoning (Raven's
matrices), which they interpreted as support for the personality trait view of EI (as opposed to a
form of intelligence). As expected, TEIQue scores were positively related to some of the Big Five
personality traits (extraversion, agreeableness, openness, conscientiousness) as well as inversely
related to others (alexithymia, neuroticism). A number of quantitative genetic studies have been
carried out within the trait EI model, which have revealed significant genetic effects and
heritabilities for all trait EI scores.[24]
[edit] Alexithymia and EI
Alexithymia, from the Greek ἀ- (a-, "lack"), λέξις (lexis, "word") and θυμός (thymos, "emotion")
(literally "lack of words for emotions"), is a term coined by Peter Sifneos in 1973[25][26] to describe
people who appeared to have deficiencies in understanding, processing, or describing their
emotions. Viewed as a
spectrum between high and low EI, the alexithymia construct is strongly inversely related to EI,
representing its lower range.[27] The individual's level of alexithymia can be measured with self-
scored questionnaires such as the Toronto Alexithymia Scale (TAS-20) or the Bermond-Vorst
Alexithymia Questionnaire (BVAQ)[28] or by observer rated measures such as the Observer
Alexithymia Scale (OAS).
[edit] Criticism of the theoretical foundation of EI
[edit] EI cannot be recognized as a form of intelligence
Goleman's early work has been criticized for assuming from the beginning that EI is a type of
intelligence. Eysenck (2000) writes that Goleman's description of EI contains assumptions
about intelligence in general, and that it even runs contrary to what researchers have come to
expect when studying types of intelligence:
"Goleman exemplifies more clearly than most the fundamental absurdity of the tendency to class
almost any type of behaviour as an 'intelligence'... If these five 'abilities' define 'emotional
intelligence', we would expect some evidence that they are highly correlated; Goleman admits
that they might be quite uncorrelated, and in any case if we cannot measure them, how do we
know they are related? So the whole theory is built on quicksand: there is no sound scientific
basis."
Similarly, Locke (2005)[29] claims that the concept of EI is in itself a misinterpretation of the
intelligence construct, and he offers an alternative interpretation: it is not another form or type of
intelligence, but intelligence—the ability to grasp abstractions—applied to a particular life
domain: emotions. He suggests the concept should be re-labeled and referred to as a skill.
The essence of this criticism is that scientific inquiry depends on valid and consistent construct
utilization, and that in advance of the introduction of the term EI, psychologists had established
theoretical distinctions between factors such as abilities and achievements, skills and habits,
attitudes and values, and personality traits and emotional states.[30] The term EI is viewed by
some as having merged and conflated accepted concepts and definitions.
[edit] EI has no substantial predictive value
Landy (2005)[31] has claimed that the few incremental validity studies conducted on EI have
demonstrated that it adds little or nothing to the explanation or prediction of some common
outcomes (most notably academic and work success). Landy proposes that the reason some
studies have found a small increase in predictive validity is in fact a methodological fallacy—
incomplete consideration of alternative explanations:
"EI is compared and contrasted with a measure of abstract intelligence but not with a personality
measure, or with a personality measure but not with a measure of academic intelligence." Landy
(2005)
In accordance with this suggestion, other researchers have raised concerns about the extent to
which self-report EI measures correlate with established personality dimensions. Generally, self-
report EI measures and personality measures have been said to converge because they both
purport to measure traits, and because they are both measured in the self-report form.[32]
Specifically, there appear to be two dimensions of the Big Five that stand out as most related to
self-report EI – neuroticism and extraversion. In particular, neuroticism has been said to relate to
negative emotionality and anxiety. Intuitively, individuals scoring high on neuroticism are likely
to score low on self-report EI measures.[32]
The interpretations of the correlations between EI questionnaires and personality have been
varied, with the trait EI view that re-interprets EI as a collection of personality traits being
prominent in the scientific literature.[33][34][35]
[edit] Criticism on measurement issues
[edit] Ability based measures are measuring conformity, not ability
One criticism of the works of Mayer and Salovey comes from a study by Roberts et al. (2001),[36]
which suggests that the EI, as measured by the MSCEIT, may only be measuring conformity.
This argument is rooted in the MSCEIT's use of consensus-based assessment, and in the fact that
scores on the MSCEIT are negatively skewed (meaning that the test differentiates between
people with low EI better than between people with high EI).
[edit] Ability based measures are measuring knowledge (not actual ability)
Further criticism has been offered by Brody (2004),[37] who claimed that unlike tests of cognitive
ability, the MSCEIT "tests knowledge of emotions but not necessarily the ability to perform
tasks that are related to the knowledge that is assessed". The main argument is that even though
someone knows how he should behave in an emotionally laden situation, it doesn't necessarily
follow that he could actually carry out the reported behavior.
[edit] Self report measures are susceptible to faking good
More formally termed socially desirable responding (SDR), faking good is defined as a response
pattern in which test-takers systematically represent themselves with an excessive positive bias
(Paulhus, 2002). This bias has long been known to contaminate responses on personality
inventories (Holtgraves, 2004; McFarland & Ryan, 2000; Peebles & Moore, 1998; Nichols &
Greene, 1997; Zerbe & Paulhus, 1987), acting as a mediator of the relationships between self-
report measures (Nichols & Greene, 1997; Ganster et al., 1983).
It has been suggested that responding in a desirable way is a response set, which is a situational
and temporary response pattern (Pauls & Crost, 2004; Paulhus, 1991). This is contrasted with a
response style, which is a more long-term trait-like quality. Considering the contexts some self-
report EI inventories are used in (e.g., employment settings), the problems of response sets in
high-stakes scenarios become clear (Paulhus & Reid, 2001).
There are a few methods to prevent socially desirable responding on behavior inventories. Some
researchers believe it is necessary to warn test-takers not to fake good before taking a personality
test (e.g., McFarland, 2003). Some inventories use validity scales in order to determine the
likelihood or consistency of the responses across all items.
[edit] Claims for the predictive power of EI are too extreme
Landy[31] distinguishes between the "commercial wing" and "the academic wing" of the EI
movement, basing this distinction on the alleged predictive power of EI as seen by the two
currents. According to Landy, the former makes expansive claims on the applied value of EI,
while the latter is trying to warn users against these claims. As an example, Goleman (1998)
asserts that "the most effective leaders are alike in one crucial way: they all have a high degree of
what has come to be known as emotional intelligence. ...emotional intelligence is the sine qua
non of leadership". In contrast, Mayer (1999) cautions "the popular literature's implication—that
highly emotionally intelligent people possess an unqualified advantage in life—appears overly
enthusiastic at present and unsubstantiated by reasonable scientific standards."
Landy further reinforces this argument by noting that the data upon which these claims are based
are held in "proprietary databases", which means they are unavailable to independent researchers
for reanalysis, replication, or verification.[31] Thus, the credibility of the findings cannot be
substantiated in a scientific manner, unless those datasets are made public and available for
independent analysis.
EI, IQ and job performance
Research on EI and job performance shows mixed results: a positive relation has been found in
some of the studies, while in others there was no relation or an inconsistent one. This led
researchers Côté and Miners (2006)[38] to offer a compensatory model between EI and IQ, which
posits that the association between EI and job performance becomes more positive as cognitive
intelligence decreases, an idea first proposed in the context of academic performance (Petrides,
Frederickson, & Furnham, 2004). The results of their study supported the compensatory model:
among employees with low IQ, task performance and organizational citizenship behavior directed
at the organization increased with EI.

Succession planning is a critical part of the human resources planning process. Human
resources planning (HRP) is the process of having the right number of employees in the
right positions in the organization at the time that they are needed. HRP involves
forecasting, or predicting, the organization's needs for labor and supply of labor and
then taking steps to move people into positions in which they are needed.

Succession planning is the systematic process of defining future management
requirements and identifying candidates who best meet those requirements. Succession
planning involves using the supply of labor within the organization for future staffing
needs. With succession planning, the skills and abilities of current employees are
assessed to see which future positions they may take within the organization when other
employees leave their positions. Succession planning is typically used in higher-level
organizational positions, such as executive-level positions. For instance, if a company
predicts that its Chief Executive Officer will retire in the near future, the organization
may begin looking months or even years in advance to determine which current
employee might be capable of taking over the position of the CEO.
Succession planning is aimed at promoting individuals within the organization and thus
makes use of internal selection. Internal selection, as opposed to hiring employees from
outside the organization, has a number of benefits and drawbacks. With internal
selection, the organization is aware of current employees' skills and abilities, and
therefore is often better able to predict future performance than when hiring from the
outside. Because of access to annual performance appraisals and the opinions of the
employee's current managers, the company can have a fairly accurate assessment of the
employee's work capabilities. Additionally, the organization has trained and socialized
the employee for a period of time already, so the employee is likely to be better prepared
for a position within the organization than someone who does not have that
organizational experience. Finally, internal selection is often motivating to others in the
organization—opportunities for advancement may encourage employees to perform at a
high level.

Despite its many advantages, internal selection can also have some drawbacks. While
the opportunities for advancement may be motivating to employees who believe that
they can move up within the organization at a future date, those employees who feel that
they have been passed over for promotion or are at a career plateau are likely to become
discouraged and may choose to leave the organization. Having an employee who has
been trained and socialized by the organization may limit the availability of skills,
innovation, or creativity that may be found when new employees are brought in from
the outside. Finally, internal selection still leaves a position at a lower level that must be
staffed from the outside, which may not reduce recruitment and selection costs.

Many companies organize their management training and development efforts around
succession planning. However, not all organizations take a formal approach to it, and
instead do so very informally, using the opinions of managers as the basis for
promotion, with little consideration of the actual requirements of future positions.
Informal succession planning is likely to result in managers who are promoted due to
criteria that are unrelated to performance, such as networking within and outside of the
organization. Organizations would be better served by promoting managers who were
able to successfully engage in human resource management activities and communicate
with employees. Poor succession planning, such as just described, can have negative
organizational consequences. Research indicates that poor preparation for advancement
into managerial positions leaves almost one-third of new executives unable to meet
company expectations for job performance. This may have negative repercussions for
the newly promoted manager, the other employees, and the company's bottom line.

STEPS IN SUCCESSION PLANNING


There are several steps in effective succession planning: human resources planning,
assessing needs, developing managers, and developing replacement charts and
identifying career paths.

HUMAN RESOURCES PLANNING.

Engaging in human resources planning by forecasting the organization's needs for
employees at upper levels is the first step in succession planning. Some staffing needs
can be anticipated, such as a known upcoming retirement or transfer. However, staffing
needs are often less predictable—organizational members may leave for other
companies, retire unexpectedly, or even die, resulting in a need to hire from outside or
promote from within. The organization should do its best to have staff available to move
up in the organization even when unexpected circumstances arise. Thus, accurate and
timely forecasting is critical.

ASSESSING NEEDS AND DEVELOPING REPLACEMENT CHARTS.

The second major step for succession planning is to define and measure individual
qualifications needed for each targeted position. Such qualifications should be based on
information from a recent job analysis. Once these qualifications are defined, employees
must be evaluated on these qualifications to identify those with a high potential for
promotion. This may involve assessing both the abilities and the career interests of
employees. If a lower-level manager has excellent abilities but little interest in
advancement within the organization, then development efforts aimed at promotion will
be a poor investment.

To determine the level of abilities of employees within the organization, many of the
same selection tools that are used for assessing external candidates can be used, such as
general mental ability tests, personality tests, and assessment centers. However, when
selecting internally, the company has an advantage in that it has much more data on
internal candidates, such as records of an employee's career progress, experience, past
performance, and self-reported interests regarding future career steps.

DEVELOPING MANAGERS.

The third step of succession planning, which is actually ongoing throughout the process,
is the development of the managers who are identified as having promotion potential. In
order to prepare these lower-level managers for higher positions, they need to engage in
development activities to improve their skills. Some of these activities may include:

• Job rotation through key executive positions. By working in different executive positions
throughout the organization, the manager gains insight into the overall strategic
workings of the company. Additionally, the performance of this manager at the executive
level can be assessed before further promotions are awarded.

• Overseas assignments. Many multinational companies now include an overseas
assignment as a way for managers to both learn more about the company and to test
their potential for advancement within the company. Managers who are successful at
leading an overseas branch of the company are assumed to be prepared to take an
executive position in the home country.

• Education. Formal courses may improve managers' abilities to understand the financial
and operational aspects of business management. Many companies will pay for
managers to pursue degrees such as the Master of Business Administration (MBA), which
are expected to provide managers with knowledge that they could not otherwise gain
from the company's own training and development programs.

• Performance-related training and development for current and future roles. Specific
training and development provided by the company may be required for managers to
excel in their current positions and to give them skills that they need in higher-level
positions.

DEVELOPING REPLACEMENT CHARTS AND IDENTIFYING CAREER PATHS.

In the final step of succession planning, the organization identifies a career path for each
high-potential candidate—those who have the interest and ability to move upward in the
organization. A career path is the typical set of positions that an employee might hold in
the course of his or her career. In succession planning, it is a road map of positions and
experiences designed to prepare the individual for an upper-level management position.
Along with career paths, the organization should develop replacement charts,
which indicate the availability of candidates and their readiness to step into the
various management positions. These charts are depicted as organizational charts in
which possible candidates to replace incumbents are listed in rank order for each
management position. These rank orders are based on the candidates' potential scores,
which are derived on the basis of their past performance, experience, and other relevant
qualifications. The charts indicate who is currently ready for promotion and who needs
further grooming to be prepared for an upper-level position.
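The rank-ordering described above can be sketched in code. This is a minimal, hypothetical illustration: the position name, candidate names, scoring attributes, and weights below are all invented assumptions, not a standard formula for potential scores.

```python
# Hypothetical replacement-chart sketch: for each management position,
# candidates are rank-ordered by a "potential score" derived from past
# performance, experience, and other qualifications.

def potential_score(candidate, w_perf=0.5, w_exp=0.3, w_qual=0.2):
    """Weighted potential score; the weights are illustrative assumptions."""
    return (w_perf * candidate["performance"]
            + w_exp * candidate["experience"]
            + w_qual * candidate["qualifications"])

def build_replacement_chart(positions):
    """Rank the candidates for each position, highest potential first."""
    chart = {}
    for position, candidates in positions.items():
        chart[position] = sorted(candidates, key=potential_score, reverse=True)
    return chart

# Invented example data (scores on a 0-10 scale).
positions = {
    "VP Operations": [
        {"name": "A. Rao",   "performance": 8, "experience": 6, "qualifications": 7},
        {"name": "B. Singh", "performance": 9, "experience": 4, "qualifications": 8},
    ],
}

chart = build_replacement_chart(positions)
for pos, ranked in chart.items():
    print(pos, "->", [c["name"] for c in ranked])
```

The chart then directly answers who is currently ready for promotion (the top of each list) and who needs further grooming.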

PROBLEMS WITH SUCCESSION PLANNING


Succession planning is typically useful to the organization in its human resource
planning, and when done properly, can be beneficial to organizational performance.
However, there are potential problems associated with the use of succession planning:
the crowned prince syndrome, the talent drain, and difficulties associated with
managing large amounts of human resources information.

CROWNED PRINCE SYNDROME.

The first potential problem in succession planning is the crowned prince syndrome,
which occurs when upper management considers for advancement only those
employees who have become visible to them. In other words, rather than looking at a
wider array of individual employees and their capabilities, upper management focuses
only on one person—the "crowned prince." This person is often one who has been
involved in high-profile projects, has a powerful and prominent mentor, or has
networked well with organizational leaders. There are often employees throughout the
organization who are capable of and interested in promotion who may be overlooked
because of the more visible and obvious "crowned prince," who is likely to be promoted
even if these other employees are available. Not only are performance problems a
potential outcome of this syndrome, but also the motivation of current employees may
suffer if they feel that their high performance has been overlooked. This may result in
turnover of high quality employees who have been overlooked for promotion.

TALENT DRAIN.

The talent drain is the second potential problem that may occur in succession planning.
Because upper management must identify only a small group of managers to receive
training and development for promotion, those managers who are not assigned to
development activities may feel overlooked and therefore leave the organization. This
turnover may reduce the number of talented managers that the organization has at the
lower and middle levels of the hierarchy. Exacerbating this problem is that these
talented managers may work for a competing firm or start their own business, thus
creating increased competition for their former company.

MANAGING HUMAN RESOURCE INFORMATION.

The final problem that can occur in succession planning is the concern with managing
large amounts of human resources information. Because succession planning requires
retention of a great deal of information, it is typically best to store and manage it on a
computer. Attempting to maintain such records by hand may prove daunting. Even on
the computer, identifying and evaluating many years' worth of information about
employees' performance and experiences may be difficult. Add to that the challenges of
comparing distinct records of performance to judge promotion capability, and this
information overload is likely to increase the difficulty of successful succession
planning.

Succession planning, which is identifying and preparing managers for future
promotions within the organization, is one element of successful human resource
planning. Unfortunately, many organizations do a poor job of succession planning. Even
when it is done properly, succession planning has some potential problems that can
harm employee motivation and the company's bottom line. Effective succession
planning, however, is likely to improve overall firm performance and to reward and
motivate employees within the organization.
The operative functions of personnel management are related to the
specific activities of personnel management, viz., employment, development, compensation and relations. All these functions
interact with the managerial functions and are to be performed in conjunction with them.
Employment

It is the first operative function of HRM. Employment is concerned with securing and employing people possessing the required kind and
level of human resources necessary to achieve the organizational objectives. It covers functions such as job analysis, human resources
planning, recruitment, selection, placement, induction and internal mobility.

Job Analysis: It is the process of study and collection of information relating to the operations and responsibilities of a specific job. It
includes:

1. Collection of data, information, facts and ideas relating to various aspects of jobs, including men, machines and materials.

2. Preparation of job description, job specification, job requirements and employee specification, which help in identifying the nature, levels
and quantum of human resources.

3. Providing the guides, plans and basis for job design and for all operative functions of HRM.

Human Resources Planning:

It is a process for determining and assuring that the organization will have an adequate number of qualified persons, available at the proper
times, performing jobs which meet the needs of the organization and which provide satisfaction for the individuals involved. It
involves:

*Estimation of present and future requirements and supply of human resources based on the objectives and long-range plans of the organization.

*Calculation of net human resources requirement based on present inventory of human resources.

*Taking steps to mould, change, and develop the strength of existing employees in the organization so as to meet the future human
resources requirements.

*Preparation of action programs to acquire the remaining human resources from outside the organization and to develop the human resources of
existing employees.
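The net-requirement calculation in the steps above reduces to simple arithmetic. This is a hedged sketch under stated assumptions: the figures and the single-number attrition model are invented for illustration, not a prescribed HRP method.

```python
# Net human resource requirement = forecast demand minus the projected
# supply from the present inventory, after expected losses (attrition).

def net_requirement(forecast_demand, current_inventory, expected_attrition):
    """People still to be secured from outside (or developed internally)."""
    projected_supply = current_inventory - expected_attrition
    return max(0, forecast_demand - projected_supply)

# Example: 120 positions forecast, 100 employees on hand, 15 expected to leave.
print(net_requirement(120, 100, 15))  # 120 - (100 - 15) = 35
```

A result of zero would mean the existing inventory already covers the forecast need, so action programs would focus on development rather than outside recruitment.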

Recruitment:

It is the process of searching for prospective employees and stimulating them to apply for jobs in an organization. It deals with:

(a)Identification of existing sources of applicants and developing them.

(b)Creation / Identification of new sources of applicants.

(c)Stimulating the candidates to apply for jobs in the organization.

(d)Striking a balance between internal and external sources.


Selection:

It is the process of ascertaining the qualifications, experience, skill, knowledge, etc., of an applicant with a view to appraising his or her
suitability to a job.

This function includes:

(a)Framing and developing application blanks.


(b)Creating and developing valid and reliable testing techniques.
(c)Formulating interviewing techniques.
(d)Checking of references.
(e)Setting up medical examination policy and procedure.
(f)Line manager’s decision.
(g)Sending letters of appointment and rejection.
(h)Employing the selected candidates who report for duty.

Placement: It is the process of assigning the selected candidate the most suitable job in terms of job requirements; it is the matching of
employee specifications with job requirements. This function includes:

(a)Counseling the functional managers regarding placement.

(b)Conducting follow-up study, appraising employee performance in order to determine employee’s adjustment with the job.

(c)Correcting misplacements, if any.

Induction and Orientation: Induction and orientation are the techniques by which a new employee is rehabilitated in the changed
surroundings and introduced to the practices, policies, purposes and people etc., of the organization.

(a)Acquaint the employee with the company philosophy, objectives, policies, career planning and development, opportunities, product,
market share, social and community standing, company history, culture etc.

(b)Introduce the employee to the people with whom he has to work such as peers, supervisors and subordinates.

(c)Mould the employee attitude by orienting him to the new working and social environment.

Human resource Management

Human resource management deals with the management of people in an organization. It is accepted that human resources
are the main component of an organization, and that the success or failure of an organization depends on how effectively this component is
managed. The concept integrates and involves the entire human force of the organization, infusing it with a sense of
common purpose.

Human resource management is dedicated to developing a suitable corporate culture, and to designing and implementing programs that reflect the core
values of the enterprise. Human resource management is proactive, rather than waiting to be told what to do about recruiting, paying or
training people.

Human resource management is related to the continuous process of manpower planning, selection, performance appraisal, administration, and training and
development. It is a deep-rooted, comprehensive activity taken up to improve the quality of human beings,
who are vital assets of the organization; the competence and capability of employees are improved by adopting scientific methods which
enable them to play their assigned roles effectively.

Today, management techniques in corporate enterprises are changing very fast, and more so in human resource management. The human
resource manager has to actuate every person who works in the organization. His job is to create a team spirit in the
corporate enterprise; before he actuates his workers, he should be able to self-actuate
and work with his group of workers. "Actuating is getting all the members of the group to work and to strive to achieve the objectives of
the enterprise."

MEANING
In a simple sense, human resource management means employing people, developing their resources, utilizing and maintaining them, and
compensating their services in tune with the job and organizational requirements, with a view to achieving the goals of the organization, the individual
and society.

Human resource management functions help managers in recruiting, training and developing the members of an
organization. Human resource management is also concerned with hiring, motivating and maintaining people in an organization; its focus is on people in
organizations.

DEFINITIONS
According to Leon C. Megginson, human resource management is "the total knowledge, skills, creative abilities, talents and aptitudes of
an organization's workforce, as well as the values, attitudes and beliefs of the individuals involved".

According to Flippo, human resource management is "the planning, organizing, directing and controlling of the procurement,
development, compensation, integration and maintenance of human resources to the end that individual, organizational and social
objectives are accomplished".

Human resource management can thus be defined as organizing and controlling the functions of employing, developing
and compensating human resources, resulting in the creation and development of human relations, with a view to contributing
proportionately to organizational, individual and social goals.

SCOPE OF HUMAN RESOURCE MANAGEMENT


The scope of human resource management is indeed vast. All major activities in the working life of a worker, from the time of his or her entry into an
organization until he or she leaves, come under its purview. Specifically, the activities included are human resource planning, job analysis and design,
recruitment, selection, orientation and placement, training and development, performance appraisal and job evaluation.

• Setting general and specific management policy for organizational relationships, and establishing and maintaining a suitable
organization for better cooperation.
• Collective bargaining, contract negotiation, contract administration and grievance handling.
• Aiding in the self-development of employees at all levels by providing opportunities for personal development and growth and
for acquiring requisite skills and experience.
• Developing and maintaining the motivation of workers by providing incentives.

OBJECTIVES
The primary objective of human resource management is to ensure the availability of a competent and willing workforce to an organization.
Beyond this, there are other objectives too. Specifically, human resource management objectives are divided into four categories: social,
organizational, functional and personal.

SOCIAL OBJECTIVE
Every organization has to set objectives keeping society in mind. Along with its organizational objectives, it has to set certain
social objectives in order to help society. The primary objective here is to be ethically and socially responsible to the
needs and challenges of society while minimizing the negative impact of such demands upon the organization. The failure of organizations
to use their resources for society's benefit in ethical ways may lead to restrictions.
ORGANIZATIONAL OBJECTIVES
To recognize the role of human resource management in bringing about organizational effectiveness. Human resource management is not an
end in itself; it is only a means to assist the organization with its primary objectives. The department exists to serve the rest of the organization.

FUNCTIONAL OBJECTIVES
To maintain the department's contribution at a level appropriate to the organization's needs. The function should not become too expensive
at the cost of the organization it serves; resources are wasted when HRM is either more or less sophisticated than the organization's
demands require.

PERSONAL OBJECTIVES
To assist employees in achieving their personal goals, insofar as these goals enhance the individual's contribution to the organization. The
personal objectives of employees have to be met if workers are to be maintained and retained; otherwise employee performance and
satisfaction may decline and employees may leave the organization.

IMPORTANCE OF HUMAN RESOURCE MANAGEMENT


Yoder and others discuss the importance of human resource management from three standpoints:

1. SOCIAL SIGNIFICANCE: Proper management of people enhances their dignity by satisfying their social needs. This is
done by: maintaining a balance between the jobs available and the job seekers, according to their needs and qualifications;
providing suitable and most productive employment; eliminating waste of human resources; and helping people
make their own decisions in their own interests.
2. PROFESSIONAL SIGNIFICANCE: By providing a healthy working environment, it promotes teamwork among employees.
This is done by: maintaining the dignity of the employee as a human being; providing maximum opportunity for
personal development; improving the employees' skills and capacities; and correcting errors and reallocating work.
3. SIGNIFICANCE FOR THE INDIVIDUAL ENTERPRISE: It can help the organization in accomplishing its goals by: creating the right
attitude among employees; utilizing human resources to the maximum extent; and attaining cooperation among
employees.

FUNCTIONS OF HUMAN RESOURCE MANAGEMENT


The functions of human resource management can be broadly classified into two categories viz.
1. Managerial functions.
2. Operational functions.

1. MANAGERIAL FUNCTIONS: Managerial functions of personnel management involve planning, organizing, directing and controlling. All these functions influence the operative
functions.
• Planning: It is a predetermined course of action. Planning pertains to formulating
strategies of personnel programmes and changes in advance that will contribute to the
organizational goals.
• Organizing: Organizing is essential to carry out the determined course of action.
Organizing is a structure and a process by which a cooperative group of human beings
allocates its tasks among its members, identifies relationships and integrates its activities
towards common objectives.
• Directing: The next logical function after planning and organizing is the
execution of the plan. The basic function of personnel management at any level is
motivating, commanding, leading and activating people; the willing and effective
cooperation of employees for the attainment of organizational goals is possible only through
proper direction.
• Controlling: After planning, organizing and directing, the various activities of personnel
management are verified in order to know whether the personnel functions are performed in
conformity with the plans and directions of the organization.
2. OPERATIVE FUNCTIONS: The operative functions of human resource management are
related to specific activities of personnel management, viz. employment, human resource
development, compensation and relations. All these functions interact with the managerial
functions.
• Employment: It is the first operative function of human resource management. Employment is
concerned with securing and employing people possessing the required kind and level
of human resources necessary to achieve the organizational objectives.
• Human resource development: It is the process of improving, moulding and changing
the skills, knowledge, creative ability, attitudes, values, commitment, etc., based on present and
future job and organizational requirements.
• Compensation: It is the process of providing adequate, equitable and fair remuneration; it covers
wage and salary administration, bonuses, fringe benefits, social security measures, etc.
1.2 HUMAN RESOURCE DEVELOPMENT
The concept of human resource development emerged in the 20th century as a planned and systematic function of HRM.

Human resource development is mainly concerned with developing the skills, knowledge and competencies of people; it is a people-oriented
concept.

Human resources, human resource management and human resource development are related concepts: human resources are simply people;
human resource management is the activity of managing the people and the business of an organization; and human resource development is
the systematic process of change within an organization, a specialized process that assists people to reach their potential and furthers
the goals of the organization. Human resource development can be applied at both the organizational and the national level. The
concept was formally introduced by Leonard Nadler in 1969 at a conference organized by the
American Society for Training and Development.

1.3 INTRODUCTION TO EMPLOYEE COMPENSATION


Compensation is what employees receive in exchange for their contribution to the organization. Generally, employees offer their service
for three types of rewards. Pay refers to the base wages and salaries employees normally receive. Compensation forms such as bonuses,
commissions and profit-sharing plans are incentives designed to encourage employees to produce results beyond normal expectations.
Benefits such as insurance, medical, recreational, retirement etc. represent a more indirect type of compensation. So, the term compensation
is a comprehensive one including pay, incentives, and benefits offered by employers for hiring the services of employees. In addition to
these, managers have to observe legal formalities for offering physical as well as financial security to employees. All these play an important
role in any HR department's efforts to obtain, maintain and retain an effective workforce.

1.3.1 NATURE OF COMPENSATION.


Compensation offered by an organization can come both directly, through base pay and variable pay, and indirectly, through benefits.

BASE PAY: It is the basic compensation an employee gets, usually as a wage or salary.
VARIABLE PAY: It is compensation that is linked directly to performance accomplishments (bonuses, incentives, stock options).
BENEFITS: These are indirect rewards given to an employee or a group of employees as a part of organizational membership (health insurance,
vacation pay, pension, etc.).
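The three components above can be summarized in a short sketch. The package structure and all amounts are invented for illustration; this is only one plausible way to tally direct versus indirect compensation.

```python
# Direct compensation = base pay + variable pay; indirect = benefits.

def summarize(package):
    """Split a compensation package into direct, indirect, and total amounts."""
    direct = package["base_pay"] + package["variable_pay"]
    indirect = sum(package["benefits"].values())
    return {"direct": direct, "indirect": indirect, "total": direct + indirect}

# Invented example figures (annual amounts in some currency unit).
package = {
    "base_pay": 30000,
    "variable_pay": 5000,  # bonuses, incentives, stock options
    "benefits": {"health_insurance": 2500, "vacation_pay": 1000, "pension": 1500},
}

print(summarize(package))
```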

1.3.2 OBJECTIVES OF COMPENSATION AND COMPENSATION PLANNING.


1. To analyze the various compensation methods existing in the organization.
2. To identify the relationship of performance with compensation methods in the organization.
3. To check the level of motivation employees get through the compensation provided.
4. To analyze employees' level of satisfaction with the compensation provided by the company.
5. To identify the process of interlinking performance and compensation methods.

1.3.3 COMPENSATION PLANNING


In addition, there are other objectives; they are:

1. ATTRACT TALENT: Compensation needs to be high enough to attract talented people. Since many firms compete to
hire the services of competent people, the salaries offered must be high enough to motivate them to apply.
2. RETAIN TALENT: If compensation levels fall below the expectations of employees or are not competitive,
employees may quit in frustration.
3. ENSURE EQUITY: Pay should equal the worth of a job; similar jobs should get similar pay. Likewise, more qualified people
should get better wages.
4. CONTROL COSTS: The cost of hiring people should not be too high. Effective compensation management
ensures that workers are neither overpaid nor underpaid.
5. EASE OF OPERATION: The compensation management system should be easy to understand and operate.
Only then will it promote understanding regarding pay-related matters between employees, unions and
management.
1.3.4 COMPONENTS OF PAY STRUCTURE IN INDIA.
WAGES: In India, different Acts include different items under wages. Under the Workmen's Compensation Act, 1923, wages for holiday pay,
overtime pay, bonus, attendance bonus and good conduct bonus form part of wages.

1. Bonus or other payments under a profit-sharing scheme which do not form a part of the contract of employment.
2. The value of any house accommodation, supply of light, water, medical attendance, travelling allowance, or payment in lieu
thereof, or any other concession.
3. Any sum paid to defray special expenses entailed by the nature of the employment.
4. Any contribution to a pension, provident fund or a scheme of social security and social insurance benefits.
BASIC WAGE: The basic wage in India corresponds with what was recommended by the Fair Wages Committee (1948) and the 15th
Indian Labour Conference (1957). The various awards by wage tribunals and wage boards, pay commission reports and job evaluations also serve
as guiding principles in determining the basic wage. The criteria considered are:

1. Skill needs of the job;
2. Experience needed;
3. Difficulty of work, mental as well as physical;
4. Training needed;
5. Responsibilities involved;
6. Hazardous nature of the job.

DEARNESS ALLOWANCE: This is the allowance paid to employees in order to enable them to face the increasing dearness of essential
commodities. It serves as a cushion, a sort of insurance against increases in the price levels of commodities. Instead of increasing wages
every time there is a rise in price levels, DA is paid to neutralize the effects of inflation; when prices go down, DA can always be reduced.
This has, however, remained a hypothetical situation, as prices never come down enough to necessitate a cut in the dearness allowance payable to
employees.
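The neutralization arithmetic behind DA can be sketched in a few lines. The index values, the base of 100 and the full (100%) neutralization rate below are hypothetical illustrations; actual DA formulas vary by industry award and pay commission.

```python
# Illustrative sketch of dearness allowance (DA) linked to a consumer price
# index. All figures are hypothetical; real DA schemes differ in base year,
# index series, and neutralization percentage.

def dearness_allowance(basic_pay: float, current_index: float,
                       base_index: float = 100.0,
                       neutralization: float = 1.0) -> float:
    """DA that offsets the rise in the price index since the base period."""
    rise = max(current_index - base_index, 0.0) / base_index
    return basic_pay * rise * neutralization

basic = 20_000.0
# Prices up 12% since the base period: DA tops up pay by the same 12%.
da = dearness_allowance(basic, current_index=112.0)
print(round(da, 2))          # 2400.0
print(round(basic + da, 2))  # 22400.0 -- purchasing power of basic pay restored
```

When the index falls back toward the base, the `max(..., 0.0)` clamp keeps DA non-negative rather than clawing back pay, matching the observation above that DA cuts remain hypothetical.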
WAGE AND SALARY ADMINISTRATION.
Employee compensation may be classified into two types: basic compensation and supplementary compensation. Basic compensation
refers to monetary payments in the form of wages and salaries. The term wage implies remuneration to workers doing manual work. The term
salary is usually defined to mean compensation to office, managerial, technical and professional staff.
1.3.5 OBJECTIVES: A sound plan of compensation administration seeks to achieve the following objectives:

• To attract qualified and competent personnel.
• To control labour and administrative costs in line with the ability of the organization to pay.
• To improve the motivation and morale of employees and to improve union-management relations.
• To retain the present employees by keeping wage levels in tune with competing units.

PRINCIPLES OF WAGE AND SALARY ADMINISTRATION


• Wage and salary plans should be sufficiently flexible.
• Job evaluation must be done scientifically.
• Wage and salary administration plans and programmes should be responsive to changing local and national conditions.
• These plans should simplify and expedite other administrative processes.

FACTORS INFLUENCING COMPENSATION LEVELS


The amount of compensation received by an employee should reflect the effort put in by the employee, the degree of difficulty of the job, the demand-supply position in the country, etc. These factors are discussed below.

1. JOB NEEDS: Jobs vary greatly in their difficulty, complexity and
challenges. Some need high levels of skill and knowledge
while others can be handled by almost anyone. Simple, routine
tasks that can be done by many people with minimal skills
receive relatively low pay. On the other hand, complex,
challenging tasks that can be done only by few people with high skill
levels generally receive high pay.
2. COST OF LIVING: Inflation reduces the purchasing power of
employees. To overcome this, unions and workers prefer to link
wages to the cost-of-living index. When the index rises due to
rising prices, wages follow suit.
3. PREVAILING WAGE RATES: The wage rates in competing
firms within an industry are taken into account while fixing
wages. A company that does not pay comparable wages may
find it difficult to attract and retain talented employees.
4. UNIONS: Highly unionized sectors generally have higher wages because
well-organized unions can exert pressure on management and
obtain all sorts of benefits and concessions for workers.
5. STATE REGULATION: The legal stipulations in respect of
minimum wages, bonus, DA, etc., determine the wage structure
in an industry.
1.3.8 WAGE POLICY IN INDIA
Minimum wage: The minimum wage is that wage which must invariably be
paid whether the company, big or small, makes a profit or not. It is the bare
minimum that a worker can expect to get for the services rendered by him.

Bonus: An important component of the employee's earnings, besides salary.

Starting as an ad hoc and ex gratia payment, bonus was claimed as DA
during the war. In the course of labour history, it has metamorphosed from
a reward or an incentive for good work into a defendable right and a just
claim.

1.3.9 CHOICE IN DESIGNING A COMPENSATION SYSTEM:


The compensation system that a firm follows should be in tune with
its own unique character and culture and allow the firm to achieve its
strategic objectives. A wide variety of options is available to a firm while designing such a
system.

1. INTERNAL AND EXTERNAL EQUITY: Pay equity, as stated previously, is
achieved when the compensation received is equal to the value of
the work done. Compensation policies are internally equitable
when employees believe that the wage rates for their jobs
approximate the jobs' worth to the organization.
2. FIXED VERSUS VARIABLE PAY: Nowadays, variable pay programmes are
widely followed throughout many organizations and for all levels
of employees. Widespread use of various incentive plans, team
bonuses and profit-sharing programmes has been implemented with
a view to linking growth in compensation to results.
3. PERFORMANCE VERSUS MEMBERSHIP: Knowledge-based organizations
these days follow performance-based payment plans, offering
awards to employees for cost-saving suggestions, bonuses for
perfect attendance, or merit pay based on supervisory
appraisal.
4. JOB VERSUS INDIVIDUAL PAY: Most traditional organizations even today
decide the minimum and maximum values of each job
independently of individual workers, ignoring their abilities,
potential and willingness to take up multiple jobs.
5. BELOW-MARKET VERSUS ABOVE-MARKET COMPENSATION: In
high-tech firms, R&D workers might be paid better than their
counterparts in the manufacturing divisions. Blue chips such as
HLL, Nestle and Procter & Gamble pay above-market salaries in order to attract (and retain) the cream of the crop. To
grow rapidly and to get ahead of others in the race, especially in
knowledge-based industries, most companies prefer to pay
above-market salaries.
6. OPEN VERSUS SECRET PAY: In the real world, the issue of paying
compensation openly or secretly may often become a bone
of contention between employers and employees. Current research evidence
indicates that pay openness is likely to be more successful in
organizations.

RESEARCH DESIGN

Research design decides the fate of the proposal and its
outcome. If the design is defective, the whole outcome and report will be
faulty and undependable. Designing is a preliminary step in every activity; it is
at the designing stage that the purpose for which the research is to be used also
has to be decided. Designing thus provides a picture of the whole before starting the work.

Research design is the conceptual structure of a research project. It
constitutes the blueprint for the collection and analysis of data in a manner
that aims to combine relevance to the research purpose with economy in
procedure.

Keeping in view the objectives and hypothesis of the study, descriptive research has
been adopted. Descriptive research studies are those which are
concerned with describing the characteristics of a particular individual or group.
The researcher has adopted this design to know the age, sex,
educational background, nature of work and years of experience of the
respondents. This research design is also useful to analyse the level of job
satisfaction and its impact on the work performance of the respondents.
STATEMENT OF THE PROBLEM.
This project is entitled "Effectiveness of Employee Compensation in
BANGALORE MILK UNION (BAMUL)". The statement of the problem is
whether the company is providing effective compensation to its
employees, and whether the employees are satisfied with the compensation
provided to them.

SCOPE OF THE STUDY


This research selected a company within the city of Bangalore.
The respondents for the study are only the employees of the company, and the
time involved in this project was as prescribed by the university.

The present study, titled Employee Compensation, covers BANGALORE MILK UNION (BAMUL).

The study comprises middle-level and contract-level employees.

The survey was conducted on employees' compensation.

OBJECTIVES OF STUDY.
1. To analyse the various compensation methods existing in the organization.
2. To identify the relationship of performance with the compensation methods
in the organization.
3. To check the level of motivation the employees get through the
compensation provided.
4. To analyse employees' level of satisfaction with the compensation
provided by the company.
5. To identify the process of interlinking performance and compensation
methods.
METHODOLOGY
Research methodology may be understood as the science of studying how
research is done scientifically. It is a way to systematically solve a research
problem.
RESEARCH DESIGN IN BAMUL.
"A research design is the arrangement of conditions for the collection and
analysis of data in a manner that aims to combine relevance to the
research purpose with economy in procedure." In fact, the research
design is the conceptual structure within which research is conducted;
it constitutes the blueprint for the collection, measurement and
analysis of data. As such, the design includes an outline of what the
researcher will do, from writing the hypothesis and its operational
implications to the final analysis of data.

For this research work a descriptive design is used; it includes surveys
and fact-finding enquiries of different kinds. The major purpose of
descriptive research is description of the state of affairs as it exists
at present. In this method the researcher has no control over the variables;
he/she can only report what has happened and what is happening.
This study tries to describe the effectiveness of job satisfaction in
Bangalore Milk Union (BAMUL).

Business Definition for: Termination Interview


• A meeting between an employee and a management representative in order to dismiss the
employee. A termination interview should be brief, explaining the reasons for the dismissal and
giving details of whether a notice period should be worked and whether, especially in the case
of a layoff, additional assistance will be forthcoming from the employer.

Employee Termination: Exit Interviews


An exit interview is separate from the termination meeting. Information exchanged at exit
interviews may benefit both the company and employee. For example, you may learn that a
supervisor is not leading employees as well as you thought. Or, you may find that your
employees need more training in a particular area.
A representative from the human resources department typically conducts exit interviews. If you
don't have a formal human resources department, a senior manager other than the employee's
immediate supervisor should conduct the interview. Of course, the objectivity desired from an
exit interview is lost if you have also conducted the termination meeting. Thus, in a very small
company, you might want to provide the employee with a simple exit interview survey, and ask
them to complete and mail it back to you.
In the case of a terminated employee, discuss or clarify the reasons for the termination. For
employees that have resigned, the employer may learn about the reasons leading to the employee's
decision.
Listed below are steps to guide you through the exit interview process.
• Prepare for the interview by briefly talking with the employee's manager and reading the
employee's personnel file, performance appraisals, and other documents.
• Set a meeting agenda. Allow enough time for discussion.
• Prepare questions similar to those of an employee attitude survey such as:
Do you feel management communicates well?
What changes would help employees do their jobs better?

• Schedule the meeting as close as possible to the employee's departure from the company.
Many companies plan this as the last stop for departing employees.
• Explain the purpose of the interview to the employee: that is, to gather information about
the employee's perception of the company and how it treats employees.
• Assure the employee that comments made during the exit interview will remain
anonymous except in the case of allegations of misconduct.
• Be prepared to answer employee's questions.
• Set the right tone. Be warm, receptive and interested in what the employee has to say.
Listen. Don't insert personal comments, provide opinions or defend the company and its
actions. Your role is to gather information and stay objective.
• Review any noncompetition or nondisclosure agreements they may have signed.
• Gather, or verify that, all company property and materials have been returned.
• Document the exit interview.
Many companies develop an exit interview form that is completed by the interviewer.

Dividend Irrelevance Theory

What Does Dividend Irrelevance Theory Mean?


A theory that investors are not concerned with a company's dividend policy since they can sell a portion of their portfolio of equities if
they want cash.

Investopedia explains Dividend Irrelevance Theory


The dividend irrelevance theory essentially indicates that an issuance of dividends should have little to no impact on stock price.

Modigliani-Miller theorem
From Wikipedia, the free encyclopedia
The Modigliani-Miller theorem (of Franco Modigliani, Merton Miller) forms the basis for modern
thinking on capital structure. The basic theorem states that, under a certain market price process
(the classical random walk), in the absence of taxes, bankruptcy costs, and asymmetric information,
and in an efficient market, the value of a firm is unaffected by how that firm is financed.[1] It does
not matter if the firm's capital is raised by issuing stock or selling debt. It does not matter what
the firm's dividend policy is. Therefore, the Modigliani-Miller theorem is also often called the
capital structure irrelevance principle.
Modigliani was awarded the 1985 Nobel Prize in Economics for this and other contributions.
Miller was awarded the 1990 Nobel Prize in Economics, along with Harry Markowitz and William
Sharpe, for their "work in the theory of financial economics," with Miller specifically cited for
"fundamental contributions to the theory of corporate finance."

Historical background


Miller and Modigliani derived the theorem and wrote their groundbreaking article when they
were both professors at the Graduate School of Industrial Administration (GSIA) of Carnegie Mellon
University. The story goes that Miller and Modigliani were set to teach corporate finance for
business students despite the fact that they had no prior experience in corporate finance. When
they read the material that existed they found it inconsistent so they sat down together to try to
figure it out. The result of this was the article in the American Economic Review and what has
later been known as the M&M theorem.
Propositions

The theorem was originally proven under the assumption of no taxes. It is made up of two
propositions which can also be extended to a situation with taxes.
Consider two firms which are identical except for their financial structures. The first (Firm U) is
unlevered: that is, it is financed by equity only. The other (Firm L) is levered: it is financed
partly by equity, and partly by debt. The Modigliani-Miller theorem states that the value of the
two firms is the same.

Without taxes

Proposition I: VU = VL

where VU is the value of an unlevered firm (the price of buying a firm
composed only of equity), and VL is the value of a levered firm (the price of buying a firm that is
composed of some mix of debt and equity). Another word for levered is geared, which has the
same meaning.[2]
To see why this should be true, suppose an investor is considering buying one of the two firms U
or L. Instead of purchasing the shares of the levered firm L, he could purchase the shares of firm
U and borrow the same amount of money B that firm L does. The eventual returns to either of
these investments would be the same. Therefore the price of L must be the same as the price of U
minus the money borrowed B, which is the value of L's debt.
This discussion also clarifies the role of some of the theorem's assumptions. We have implicitly
assumed that the investor's cost of borrowing money is the same as that of the firm, which need
not be true in the presence of asymmetric information or in the absence of efficient markets.
Proposition II: ke = k0 + (k0 - kd) x (D/E)

(Figure: Proposition II with risky debt. As leverage (D/E) increases, the WACC (k0) stays
constant.)

• ke is the required rate of return on equity, or cost of equity.
• k0 is the company's unlevered cost of capital (i.e., assuming no leverage).
• kd is the required rate of return on borrowings, or cost of debt.
• D/E is the debt-to-equity ratio.
A higher debt-to-equity ratio leads to a higher required return on equity, because of the higher
risk involved for equity-holders in a company with debt. The formula is derived from the theory
of weighted average cost of capital (WACC).
These propositions are true assuming the following assumptions:
• no taxes exist,
• no transaction costs exist, and
• individuals and corporations borrow at the same rates.
These results might seem irrelevant (after all, none of the conditions is met in the real world), but
the theorem is still taught and studied because it tells something very important. That is, capital
structure matters precisely because one or more of these assumptions is violated. It tells where to
look for determinants of optimal capital structure and how those factors might affect optimal
capital structure.
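Proposition II and the constancy of the WACC can be checked numerically. A minimal sketch; the rates below are made-up inputs, not data from any firm:

```python
# Numerical check of Modigliani-Miller Proposition II (no taxes):
# ke = k0 + (k0 - kd) * D/E, and the WACC equals k0 at any leverage.

def cost_of_equity(k0: float, kd: float, d_over_e: float) -> float:
    """Required return on levered equity under Proposition II."""
    return k0 + (k0 - kd) * d_over_e

def wacc(ke: float, kd: float, d_over_e: float) -> float:
    """Weighted average cost of capital, with equity scaled to 1."""
    e, d = 1.0, d_over_e
    return (e * ke + d * kd) / (e + d)

k0, kd = 0.10, 0.05  # hypothetical unlevered return and cost of debt
for d_over_e in (0.0, 0.5, 1.0, 2.0):
    ke = cost_of_equity(k0, kd, d_over_e)
    print(d_over_e, round(ke, 4), round(wacc(ke, kd, d_over_e), 4))
# ke climbs with leverage (0.10, 0.125, 0.15, 0.20) while the WACC stays 0.10
```

The loop makes the point of the figure caption concrete: the extra return demanded by equity-holders exactly offsets the cheaper debt, so the blended cost of capital never moves.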
With taxes
Proposition I: VL = VU + TCD

where
• VL is the value of a levered firm.
• VU is the value of an unlevered firm.
• TCD is the tax rate (TC) x the value of debt (D).
• The term TCD assumes debt is perpetual.
This means that there are advantages for firms to be levered, since corporations can deduct
interest payments. Therefore leverage lowers tax payments. Dividend payments are non-
deductible.
Proposition II: rE = r0 + (r0 - rD) x (1 - TC) x (D/E)

where
• rE is the required rate of return on equity, or cost of levered equity (unlevered equity plus a financing premium).
• r0 is the company's cost of equity capital with no leverage (unlevered cost of equity, or return on assets with D/E = 0).
• rD is the required rate of return on borrowings, or cost of debt.
• D/E is the debt-to-equity ratio.
• TC is the tax rate.
The same relationship as described earlier, stating that the cost of equity rises with leverage
because the risk to equity rises, still holds. The formula, however, has implications for the
difference with the WACC. Their second attempt at capital structure, which included taxes, identified
that as the level of gearing increases by replacing equity with cheap debt, the level of the WACC
drops, and an optimal capital structure does indeed exist at the point where debt is 100%.
The following assumptions are made in the propositions with taxes:
• corporations are taxed at the rate TC on earnings after interest,
• no transaction costs exist, and
• individuals and corporations borrow at the same rate
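The two with-tax propositions can likewise be sketched numerically; the firm value, tax rate and rates of return below are hypothetical inputs chosen only to show the mechanics:

```python
# Sketch of the with-tax propositions: VL = VU + Tc*D (the value of the tax
# shield on perpetual debt) and rE = r0 + (r0 - rD) * (1 - Tc) * D/E.
# All inputs are hypothetical.

def levered_value(vu: float, tc: float, debt: float) -> float:
    """Proposition I with taxes: unlevered value plus the debt tax shield."""
    return vu + tc * debt

def levered_cost_of_equity(r0: float, rd: float, tc: float,
                           d_over_e: float) -> float:
    """Proposition II with taxes: the financing premium is scaled by (1 - Tc)."""
    return r0 + (r0 - rd) * (1 - tc) * d_over_e

vu, tc, debt = 1000.0, 0.30, 400.0
print(round(levered_value(vu, tc, debt), 2))             # 1120.0 (shield = Tc*D = 120)
print(round(levered_cost_of_equity(0.10, 0.05, 0.30, 1.0), 4))  # 0.135
```

Compared with the no-tax case, the same leverage raises the cost of equity by less (the (1 - Tc) factor), which is why the WACC drifts down as debt replaces equity.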
Miller and Modigliani published a number of follow-up papers discussing some of these issues.
The theorem was first proposed by F. Modigliani and M. Miller in 1958.

marketing myopia
Definition

Short-sighted and inward-looking approach to marketing that focuses on the needs of the firm instead
of defining the firm and its products in terms of the customers' needs and wants. Such self-centered
firms fail to see and adjust to the rapid changes in their markets and, despite their previous eminence,
falter, fall, and disappear. This concept was discussed in an article (titled 'Marketing Myopia', in the July-
August 1960 issue of Harvard Business Review) by Harvard Business School emeritus professor of
marketing, Theodore C. Levitt (1925-), who suggests that firms get trapped in this bind because they
omit to ask the vital question, "What business are we in?"

marketing myopia

Dictionary of Marketing Terms

narrow-minded approach to a marketing situation where only short-range goals are considered or where the
marketing focuses on only one aspect out of many possible marketing attributes. Because of its
shortsightedness, marketing myopia is an inefficient marketing approach.

See also marketing concept

Related Terms:
Dictionary of Marketing Terms

marketing concept

goal-oriented, integrated philosophy practiced by producers of goods and services that focuses on satisfying
the needs of consumers over the needs of the producing company. The marketing concept holds that the
desires and needs of the target market must be determined and satisfied in order to successfully achieve the
goals of the producer.
Product Mix
Product mix is the combination of products manufactured or traded by the same business
house to reinforce its presence in the market, increase market share and increase
turnover for more profitability. Normally, for a medium-sized organization, the product mix
stays within the synergy of its other products; large industrial groups, however, may have
diversified products within their core competencies. Larsen & Toubro Ltd, Godrej and Reliance in India
are some examples.

One of the realities of business is that most firms deal with multiple products. This helps a firm
diffuse its risk across different product groups. It also enables the firm to appeal to a much
larger group of customers or to different needs of the same customer group. So when
Videocon chose to diversify into other consumer durables like music systems, washing
machines and refrigerators, it sought to satisfy the needs of the middle and upper-middle
income group of consumers.

Likewise, Bajaj Electricals, a household name in India, has almost ninety products in its
portfolio, ranging from low-value items like bulbs to high-priced consumer durables like
mixers, and luminaires and lighting projects. The number of products carried by a firm at a
given point of time is called its product mix. This product mix contains product lines and
product items. In other words, it is a composite of the products offered for sale by a firm.

Product Mix Decisions

Often firms take decisions to change their product mix. These decisions are dictated by the
above factors and also by the changes occurring in the marketplace. For example, the changing
lifestyles of Indian consumers led BPL-Sanyo to launch an entire range of white goods like
refrigerators, washing machines, and microwave ovens. It also motivated the firm to launch
other entertainment electronics. Rahejas, a well-known builders' firm in Bombay, took a
major decision to convert one of its theatre buildings in the western suburbs of Bombay into
a large garments and accessories store for men, women and children, perhaps the first of
its kind in India to have almost all products required by these customer groups. Competition
from low-priced washing powders (mainly Nirma) forced Hindustan Lever to launch
different brands of detergent powder at different price levels, positioned at different market
segments. Customer preference for herbs, mainly shikakai, motivated Lever to launch black
Sunsilk Shampoo, which contains shikakai. Also, low purchasing power and a cultural bias against
shampoo made Hindustan Lever consider smaller packaging, mainly sachets, for
single use. So, it is the changes or anticipated changes in the marketplace that motivate a
firm to consider changes in its product mix.

The product mix of a company, which is generally defined as the total composite of products
offered by a particular organization, consists of both product lines and individual products. A
product line is a group of products within the product mix that are closely related, either because
they function in a similar manner, are sold to the same customer groups, are marketed through
the same types of outlets, or fall within given price ranges. A product is a distinct unit within the
product line that is distinguishable by size, price, appearance, or some other attribute. For
example, all the courses a university offers constitute its product mix; courses in the marketing
department constitute a product line; and the basic marketing course is a product item. Product
decisions at these three levels are generally of two types: those that involve the width (variety) and
depth (assortment) of the product line, and those that involve changes in the product mix over time.
Table 1: Hypothetical State University Product Mix
WIDE WIDTH, AVERAGE DEPTH

Political Science         Mathematics     Education
Political Theory          Calculus I      Elementary Teaching
American Government       Calculus II     Secondary Teaching
International Relations   Trigonometry    Teaching Internship
State Government          Math Theory     Post-Secondary Teaching

Nursing                   English              Engineering
Biology                   English Literature   Physics
Chemistry                 European Writers     Advanced Math
Organic Chemistry         Hemingway Seminar    Electrical Concepts
Statistics                Creative Writing     Logic Design

The depth (assortment) of the product mix refers to the number of product items offered
within each line; the width (variety) refers to the number of product lines a company carries. For
example, Table 1 illustrates the hypothetical product mix of a major state university.
The product lines are defined in terms of academic departments. The depth of each line is shown
by the number of different product items—course offerings—offered within each product line.
(The examples represent only a partial listing of what a real university would offer.) The state
university has made the strategic decision to offer a diverse product mix. Because the university
has numerous academic departments, it can appeal to a large cross-section of potential students.
This university has decided to offer a wide set of product lines (academic departments), though the depth of
each department (course offerings) is only average.
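The width and depth bookkeeping described above is straightforward to express in code. A small sketch, using a hypothetical fragment of the state-university mix as the data:

```python
# Width = number of product lines carried; depth = product items per line.
# The mix below is a hypothetical fragment modelled on the Table 1 example.
product_mix = {
    "Political Science": ["Political Theory", "American Government",
                          "International Relations", "State Government"],
    "Mathematics": ["Calculus I", "Calculus II", "Trigonometry", "Math Theory"],
    "Education": ["Elementary Teaching", "Secondary Teaching",
                  "Teaching Internship", "Post-Secondary Teaching"],
}

width = len(product_mix)                                   # product lines
depth = {line: len(items) for line, items in product_mix.items()}
avg_depth = sum(depth.values()) / width

print(width)      # 3
print(avg_depth)  # 4.0
```

A wide-width, average-depth mix shows up as a large `width` with modest per-line counts, while the small-college profile in Table 2 would be the reverse: a dictionary with few keys but long item lists.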
In order to see the difference between product mix, product line, and products, consider a smaller
college that focuses on the sciences, represented in Table 2. This college has decided to
concentrate its resources in a few departments (again, this is only a partial listing); that is, it has
chosen a concentrated market strategy (focus on limited markets).

Table 2: Hypothetical Small College Product Mix
NARROW WIDTH, LARGE DEPTH

Mathematics                       Physics
Geometric Concepts                Intermediate Physics
Analytic Geometry and Calculus    Advanced Physics
Calculus II                       Topics on Physics and Astronomy
Calculus III                      Thermodynamics
Numerical Analysis                Condensed Matter Physics II
Differential Equations            Electromagnetic Theory
Matrix Theory                     Quantum Mechanics II

This college offers a narrow product line (academic departments) with a large product
depth (extensive course offerings within each department). This product mix would most likely
appeal to a much narrower group of potential students: those who are interested in
pursuing intensive studies in math and science.
PRODUCT-MIX MANAGEMENT AND RESPONSIBILITIES
It is extremely important for any organization to have a well-managed product mix. Most
organizations break down managing the product mix, product line, and actual product into three
different levels.
Product-mix decisions are concerned with the combination of product lines offered by the
company. Management of the company's product mix is the responsibility of top management.
Some basic product-mix decisions include: (1) reviewing the mix of existing product lines; (2)
adding new lines to and deleting existing lines from the product mix; (3) determining the relative
emphasis on new versus existing product lines in the mix; (4) determining the appropriate
emphasis on internal development versus external acquisition in the product mix; (5) gauging the
effects of adding or deleting a product line in relationship to other lines in the product mix; and
(6) forecasting the effects of future external change on the company's product mix.
Product-line decisions are concerned with the combination of individual products offered within
a given line. The product-line manager supervises several product managers who are responsible
for individual products in the line. Decisions about a product line are usually incorporated into a
marketing plan at the divisional level. Such a plan specifies changes in the product lines and
allocations to products in each line. Generally, product-line managers have the following
responsibilities: (1) considering expansion of a given product line; (2) considering candidates for
deletion from the product line; (3) evaluating the effects of product additions and deletions on
the profitability of other items in the line; and (4) allocating resources to individual products in
the line on the basis of marketing strategies recommended by product managers.
Decisions at the first level of product management involve the marketing mix for an individual
brand/product. These decisions are the responsibility of a brand manager (sometimes called a
product manager). Decisions regarding the marketing mix for a brand are represented in the
product's marketing plan. The plan for a new brand would specify price level, advertising
expenditures for the coming year, coupons, trade discounts, distribution facilities, and a five-year
statement of projected sales and earnings. The plan for an existing product would focus on any
changes in the marketing strategy. Some of these changes might include the product's target
market, advertising and promotional expenditures, product characteristics, price level, and
recommended distribution strategy.
GENERAL MANAGEMENT WORKFLOW
Top management formulates corporate objectives that become the basis for planning the product
line. Product-line managers formulate objectives for their line to guide brand managers in
developing the marketing mix for individual brands. Brand strategies are then formulated and
incorporated into the product-line plan, which is in turn incorporated into the corporate plan. The
corporate plan details changes in the firm's product lines and specifies strategies for growth.
Once plans have been formulated, financial allocations flow from top management to product
line and then to brand management for implementation. Implementation of the plan requires
tracking performance and providing data from brand to product line to top management for
evaluation and control. Evaluation of the current plan then becomes the first step in the next
planning cycle, since it provides a basis for examining the company's current offerings and
recommending modifications as a result of past performance.
PRODUCT-MIX ANALYSIS
Since top management is ultimately responsible for the product mix and the resulting profits or
losses, they often analyze the company's product mix. The first assessment involves the area of
opportunity in a particular industry or market. Opportunity is generally defined in terms of
current industry growth or potential attractiveness as an investment. The second criterion is the
company's ability to exploit opportunity, which is based on its current or potential position in the
industry. The company's position can be measured in terms of market share if it is currently in
the market, or in terms of its resources if it is considering entering the market. These two factors
—opportunity and the company's ability to exploit it—provide four different options for a
company to follow.
1. High opportunity and ability to exploit it result in the firm's introducing new
products or expanding markets for existing products to ensure future growth.
2. Low opportunity but a strong current market position will generally result in
the company's attempting to maintain its position to ensure current
profitability.
3. High opportunity but a lack of ability to exploit it results in either (a)
attempting to acquire the necessary resources or (b) deciding not to further
pursue opportunity in these markets.
4. Low opportunity and a weak market position will result in either (a) avoiding
these markets or (b) divesting existing products in them.
These options provide a basis for the firm to evaluate new and existing products in an attempt to
achieve balance between current and future growth. This analysis may cause the product mix to
change, depending on what management decides.
The most widely used approach to product portfolio analysis is the model developed by the
Boston Consulting Group (BCG). The BCG analysis emphasizes two main criteria in evaluating
the firm's product mix: the market growth rate and the product's relative market share. BCG uses
these two criteria because they are closely related to profitability, which is why top management
often uses the BCG analysis. Proper analysis and conclusions may lead to significant changes to
the company's product mix, product line, and product offerings.
The market growth rate represents the product category's position in the product life cycle.
Products in the introductory and growth phases require more investment because of research and
development and initial marketing costs for advertising, selling, and distribution. This category is
also regarded as a high-growth area (e.g., the Internet). Relative market share represents the
company's competitive strength (or estimated strength for a new entry). Market share is
compared to that of the leading competitor. Once the analysis has been done using the market
growth rate and relative market share, products are placed into one of four categories.
• Stars: Products with high growth and market share are known as stars.
Because these products have high potential for profitability, they should be
given top priority in financing, advertising, product positioning, and
distribution. At the same time, they need significant amounts of cash to finance
rapid growth and frequently show an initial negative cash flow.
• Cash cows: Products with a high relative market share but in a low growth
position are cash cows. These are profitable products that generate more
cash than is required to produce and market them. Excess cash should be
used to finance high-opportunity areas (stars or problem children). Strategies
for cash cows should be designed to sustain current market share rather than
to expand it. An expansion strategy would require additional investment, thus
decreasing the existing positive cash flow.
• Problem children: These products have low relative market share but are in a
high-growth situation. They are called "problem children" because their
eventual direction is not yet clear. The firm should invest heavily in those
that sales forecasts indicate might have a reasonable chance to become
stars. Otherwise divestment is the best course, since problem children may
become dogs and thereby candidates for deletion.
• Dogs: Products in this category are clearly candidates for deletion. Such
products have low market shares and, unlike problem children, no real
prospect for growth. Eliminating a dog is not always necessary, since there
are strategies for dogs that could make them profitable in the short term.
These strategies involve "harvesting" these products by eliminating
marketing support and selling the product only to intensely loyal consumers
who will buy in the absence of advertising. However, over the long term
companies will seek to eliminate dogs.
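The four categories above amount to a two-way classification on market growth and relative market share. A minimal sketch in Python, assuming illustrative cutoffs of 10 percent growth and 1.0 relative share (the dividing lines are chosen by the analyst, not fixed by the BCG model):

```python
def bcg_category(market_growth_rate, relative_market_share,
                 growth_cutoff=0.10, share_cutoff=1.0):
    """Classify a product into a BCG quadrant.

    market_growth_rate: annual growth of the product's market (0.15 = 15%).
    relative_market_share: firm's share divided by the leading competitor's share.
    The cutoff values are illustrative assumptions, set per industry in practice.
    """
    high_growth = market_growth_rate >= growth_cutoff
    high_share = relative_market_share >= share_cutoff
    if high_growth and high_share:
        return "star"           # fund aggressively; often cash-negative early on
    if high_share:
        return "cash cow"       # sustain share; divert excess cash elsewhere
    if high_growth:
        return "problem child"  # invest selectively, or divest
    return "dog"                # harvest, or candidate for deletion
```

For example, `bcg_category(0.15, 1.4)` returns `"star"`, while `bcg_category(0.02, 0.3)` returns `"dog"`.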
As can be seen from the description of the four BCG alternatives, products are evaluated as
producers or users of cash. Products with a positive cash flow will finance high-opportunity
products that need cash. The emphasis on cash flow stems from management's belief that it is
better to finance new entries and to support existing products with internally produced funds than
to increase debt or equity in the company.
Based on this belief, companies will normally take money from cash cows and divert it to stars
and to some problem children. The hope is that the stars will turn into cash cows and the problem
children will turn into stars. The dogs will continue to receive lower funding and eventually be
dropped.
CONCLUSION
Managing the product mix for a company is very demanding and requires constant attention. Top
management must provide accurate and timely analysis (such as BCG analysis) of the company's
product mix so the appropriate adjustments can be made to the product line and individual products.
CORPORATE-LEVEL STRATEGY
Corporate-level strategies address the entire strategic scope of the enterprise. This is the
"big picture" view of the organization and includes deciding in which product or service
markets to compete and in which geographic regions to operate. For multi-business
firms, the resource allocation process—how cash, staffing, equipment and other
resources are distributed—is typically established at the corporate level. In addition,
because market definition is the domain of corporate-level strategists, the responsibility
for diversification, or the addition of new products or services to the existing
product/service line-up, also falls within the realm of corporate-level strategy. Similarly,
whether to compete directly with other firms or to selectively establish cooperative
relationships—strategic alliances—falls within the purview of corporate-level strategy,
while requiring ongoing input from business-level managers.

Table 1
Corporate, Business, and Functional Strategy

Level of Strategy   | Definition                                 | Example
Corporate strategy  | Market definition                          | Diversification into new product or geographic markets
Business strategy   | Market navigation                          | Attempts to secure competitive advantage in existing product or geographic markets
Functional strategy | Support of corporate and business strategy | Information systems, human resource practices, and production processes that facilitate achievement of corporate and business strategy

Critical questions answered by corporate-level strategists thus include:

1. What should be the scope of operations; i.e., what businesses should the firm be in?

2. How should the firm allocate its resources among existing businesses?
3. What level of diversification should the firm pursue; i.e., which businesses represent the
company's future? Are there additional businesses the firm should enter or are there
businesses that should be targeted for termination or divestment?

4. How diversified should the corporation's business be? Should we pursue related
diversification, i.e., similar products and service markets, or is unrelated diversification,
i.e., dissimilar product and service markets, a more suitable approach given current and
projected industry conditions? If we pursue related diversification, how will the firm
leverage potential cross-business synergies? In other words, how will adding new
product or service businesses benefit the existing product/service line-up?

5. How should the firm be structured? Where should the boundaries of the firm be drawn
and how will these boundaries affect relationships across businesses, with suppliers,
customers and other constituents? Do the organizational components such as research
and development, finance, marketing, customer service, etc. fit together? Are the
responsibilities of each business unit clearly identified and is accountability established?

6. Should the firm enter into strategic alliances—cooperative, mutually-beneficial
relationships with other firms? If so, for what reasons? If not, what impact might this
have on future profitability?

As the previous questions illustrate, corporate strategies represent the long-term
direction for the organization. Issues addressed as part of corporate strategy include
those concerning diversification, acquisition, divestment, strategic alliances, and
formulation of new business ventures. Corporate strategies deal with plans for the entire
organization and change as industry and specific market conditions warrant.

Top management has primary decision-making responsibility in developing corporate
strategies, and these managers are directly responsible to shareholders. The role of the
board of directors is to ensure that top managers actually represent these shareholder
interests. With information from the corporation's multiple businesses and a view of the
entire scope of operations and markets, corporate-level strategists have the most
advantageous perspective for assessing organization-wide competitive strengths and
weaknesses, although as a subsequent section notes, corporate strategists are paralyzed
without accurate and up-to-date information from managers at the business-level.

CORPORATE PORTFOLIO ANALYSIS


One way to think of corporate-level strategy is to compare it to an individual managing a
portfolio of investments. Just as the individual investor must evaluate each individual
investment in the portfolio to determine whether or not the investment is currently
performing to expectations and what the future prospects are for the investment,
managers must make similar decisions about the current and future performances of
various businesses constituting the firm's portfolio. The Boston Consulting Group (BCG)
matrix is a relatively simple technique for assessing the performance of various
segments of the business.

The BCG matrix classifies business-unit performance on the basis of the unit's relative
market share and the rate of market growth as shown in Figure 1.

Figure 1
BCG Model of Portfolio Analysis

Products and their respective strategies fall into one of four quadrants. The typical starting point
for a new business is as a question mark. If the product is new, it has no market share, but the
predicted growth rate is good. What typically happens in an organization is that management is
faced with a number of these types of products but with too few resources to develop all of them.
Thus, the strategic decision-maker must determine which of the products to attempt to develop
into commercially viable products and which ones to drop from consideration. Question marks
are cash users in the organization. Early in their life, they contribute no revenues and require
expenditures for market research, test marketing, and advertising to build consumer awareness.

If the correct decision is made and the product selected achieves a high market share, it
becomes a BCG matrix star. Stars have high market share in high-growth markets. Stars
generate large cash flows for the business, but also require large infusions of money to
sustain their growth. Stars are often the targets of large expenditures for advertising and
research and development to improve the product and to enable it to establish a
dominant position in the industry.

Cash cows are business units that have high market share in a low-growth market. These
are often products in the maturity stage of the product life cycle. They are usually well-
established products with wide consumer acceptance, so sales revenues are usually high.
The strategy for such products is to invest little money into maintaining the product and
divert the large profits generated into products with more long-term earnings potential,
i.e., question marks and stars.

Dogs are businesses with low market share in low-growth markets. These are often cash
cows that have lost their market share or question marks the company has elected not to
develop. The recommended strategy for these businesses is to dispose of them for
whatever revenue they will generate and reinvest the money in more attractive
businesses (question marks or stars).

Despite its simplicity, the BCG matrix suffers from limited variables on which to base
resource allocation decisions among the businesses making up the corporate portfolio.
Notice that the only two variables composing the matrix are relative market share and
the rate of market growth. Now consider how many other factors contribute to business
success or failure. Management talent, employee commitment, industry forces such as
buyer and supplier power and the introduction of strategically-equivalent substitute
products or services, changes in consumer preferences, and a host of others determine
ultimate business viability. The BCG matrix is best used, then, as a beginning point, but
certainly not as the final determination for resource allocation decisions as it was
originally intended. Consider, for instance, Apple Computer. With a market share for its
Macintosh-based computers below ten percent in a market notoriously saturated with a
number of low-cost competitors and growth rates well-below that of other technology
pursuits such as biotechnology and medical device products, the BCG matrix would
suggest Apple divest its computer business and focus instead on its rapidly growing
iPod music business. Clearly, though, there are both
technological and market synergies between Apple's Macintosh computers and its fast-
growing iPod business. Divesting the computer business would likely be tantamount to
destroying the iPod business.

A more stringent approach, but still one with weaknesses, is a competitive assessment. A
competitive assessment is a technique for ranking an organization relative to its peers in
the industry. The advantage of a competitive assessment over the BCG matrix for
corporate-level strategy is that the competitive assessment includes critical success
factors, or factors that are crucial for an organization to prevail when all industry
members are competing for the same customers. A six-step process that allows
corporate strategists to define appropriate variables, rather than being locked into the
market share and market growth variables of the BCG matrix, is used to develop a table
that shows a business's ranking relative to the critical success factors that managers
identify as the key factors influencing failure or success. These steps include:

1. Identifying key success factors. This step allows managers to select the most appropriate
variables for its situation. There is no limit to the number of variables managers may
select; the idea, however, is to use those that are key in determining competitive
strength.

2. Weighting the importance of key success factors. Weighting can be on a scale of 1 to 5, 1 to
7, or 1 to 10, or whatever scale managers believe is appropriate. The main thing is to
maintain consistency across organizations. This step brings an element of realism to the
analysis by recognizing that not all critical success factors are equally important.
Depending on industry conditions, successful advertising campaigns may, for example,
be weighted more heavily than after-sale product support.

3. Identifying main industry rivals. This step helps managers focus on one of the most
common external threats: competitors who want the organization's market share.

4. Managers rating their organization against competitors.

5. Multiplying the weighted importance by the key success factor rating.

6. Adding the values. The sum of the values for a manager's organization versus
competitors gives a rough idea of whether the manager's firm is ahead of or behind the
competition on weighted key success factors that are critical for market success.
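The six steps above reduce to a weighted-sum calculation. In the sketch below, the success factors, weights, and ratings are all invented for illustration; only the arithmetic of steps 5 and 6 comes from the text:

```python
# Step 2: hypothetical critical success factors with importance weights (1-10 scale)
weights = {"advertising": 9, "distribution": 7, "after-sale support": 4}

# Step 4: managers rate each firm on each factor (1-10 scale); figures are invented
ratings = {
    "our firm": {"advertising": 8, "distribution": 6, "after-sale support": 9},
    "rival A":  {"advertising": 9, "distribution": 8, "after-sale support": 5},
}

# Steps 5 and 6: multiply each rating by the factor's weight, then sum per firm
scores = {
    firm: sum(weights[f] * r[f] for f in weights)
    for firm, r in ratings.items()
}
print(scores)  # {'our firm': 150, 'rival A': 157}
```

As the text goes on to note, the resulting scores are ordinal: rival A's 157 ranks above our firm's 150, but the size of the gap carries no absolute meaning.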
A competitive strength assessment is superior to a BCG matrix because it adds more
variables to the mix. In addition, these variables are weighted in importance in contrast
to the BCG matrix's equal weighting of market share and market growth. Regardless of
these advantages, competitive strength assessments are still limited by the type of data
they provide. When the values are summed in step six, each organization has a number
assigned to it. This number is compared against other firms to determine which is
competitively the strongest. One weakness is that these data are ordinal: they can be
ranked, but the differences among them are not meaningful. A firm with a score of four
is not twice as good as one with a score of two, but it is better. The degree of
"betterness," however, is not known.

CORPORATE GRAND STRATEGIES


As the previous discussion implies, corporate-level strategists have a tremendous
amount of both latitude and responsibility. The myriad decisions required of these
managers can be overwhelming considering the potential consequences of incorrect
decisions. One way to deal with this complexity is through categorization; one
categorization scheme is to classify corporate-level strategy decisions into three different
types, or grand strategies. These grand strategies involve efforts to expand business
operations (growth strategies), decrease the scope of business operations (retrenchment
strategies), or maintain the status quo (stability strategies).

GROWTH STRATEGIES
Growth strategies are designed to improve an organization's performance, usually as
measured by sales, profits, product mix, market coverage, market share, or other
accounting and market-based variables. Typical growth strategies involve one or more
of the following:

1. With a concentration strategy the firm attempts to achieve greater market penetration by
becoming highly efficient at servicing its market with a limited product line (e.g.,
McDonald's in fast food).

2. By using a vertical integration strategy, the firm attempts to expand the scope of its
current operations by undertaking business activities formerly performed by one of its
suppliers (backward integration) or by undertaking business activities performed by a
business in its channel of distribution (forward integration).
3. A diversification strategy entails moving into different markets or adding different
products to its mix. If the products or markets are related to existing product or service
offerings, the strategy is called concentric diversification. If expansion is into products or
services unrelated to the firm's existing business, the diversification is called
conglomerate diversification.

STABILITY STRATEGIES
When firms are satisfied with their current rate of growth and profits, they may decide
to use a stability strategy. This strategy is essentially a continuation of existing
strategies. Such strategies are typically found in industries having relatively stable
environments. The firm is often making a comfortable income operating a business that
they know, and see no need to make the psychological and financial investment that
would be required to undertake a growth strategy.

RETRENCHMENT STRATEGIES
Retrenchment strategies involve a reduction in the scope of a corporation's activities,
which also generally necessitates a reduction in the number of employees, sale of assets
associated with discontinued product or service lines, possible restructuring of debt
through bankruptcy proceedings, and in the most extreme cases, liquidation of the firm.

• Firms pursue a turnaround strategy by undertaking a temporary reduction in operations
in an effort to make the business stronger and more viable in the future. These moves are
popularly called downsizing or rightsizing. The hope is that going through a temporary
belt-tightening will allow the firm to pursue a growth strategy at some future point.

• A divestment decision occurs when a firm elects to sell one or more of the businesses in
its corporate portfolio. Typically, a poorly performing unit is sold to another company
and the money is reinvested in another business within the portfolio that has greater
potential.

• Bankruptcy involves legal protection against creditors or others allowing the firm to
restructure its debt obligations or other payments, typically in a way that temporarily
increases cash flow. Such restructuring allows the firm time to attempt a turnaround
strategy. For example, since the airline hijackings and the subsequent tragic events of
September 11, 2001, many of the airlines based in the U.S. have filed for bankruptcy to
avoid liquidation as a result of stymied demand for air travel and rising fuel prices. At
least one airline has asked the courts to allow it to permanently suspend payments to its
employee pension plan to free up positive cash flow.

• Liquidation is the most extreme form of retrenchment. Liquidation involves the selling
or closing of the entire operation. There is no future for the firm; employees are released,
buildings and equipment are sold, and customers no longer have access to the product or
service. This is a strategy of last resort and one that most managers work hard to avoid.

BUSINESS-LEVEL STRATEGIES
Business-level strategies are similar to corporate-level strategies in that they focus on overall
performance. In contrast to corporate-level strategy, however, they focus on only one
rather than a portfolio of businesses. Business units represent individual entities
oriented toward a particular industry, product, or market. In large multi-product or
multi-industry organizations, individual business units may be combined to form
strategic business units (SBUs). An SBU represents a group of related business
divisions, each responsible to corporate head-quarters for its own profits and losses.
Each strategic business unit will likely have its own competitors and its own unique
strategy. Business-level strategies often focus on a particular product or service line,
and they commonly involve decisions regarding individual products within that line.
There are also
strategies regarding relationships between products. One product may contribute to
corporate-level strategy by generating a large positive cash flow for new product
development, while another product uses the cash to increase sales and expand market
share of existing businesses. Given this potential for business-level strategies to impact
other business-level strategies, business-level managers must provide ongoing, intensive
information to corporate-level managers. Without such crucial information, corporate-
level managers are prevented from best managing overall organizational direction.
Business-level strategies are thus primarily concerned with:

1. Coordinating and integrating unit activities so they conform to organizational strategies
(achieving synergy).

2. Developing distinctive competencies and competitive advantage in each unit.


3. Identifying product or service-market niches and developing strategies for competing in
each.

4. Monitoring product or service markets so that strategies conform to the needs of the
markets at the current stage of evolution.

In a single-product company, corporate-level and business-level strategies are the same.
For example, a furniture manufacturer producing only one line of furniture has its
corporate strategy chosen by its market definition, wholesale furniture, while its business
strategy navigates that same market. Thus, in single-business organizations, corporate
and business-level strategies overlap to the point that they should be treated as one
united strategy. The product made by a unit of a diversified company would face many
of the same challenges and opportunities faced by a one-product company. However, for
most organizations, business-unit strategies are designed to support corporate
strategies. Business-level strategies look at the product's life cycle, competitive
environment, and competitive advantage much like corporate-level strategies, except
the focus for business-level strategies is on the product or service, not on the corporate
portfolio.

Business-level strategies thus support corporate-level strategies. Corporate-level
strategies attempt to maximize the wealth of shareholders through profitability of the
overall corporate portfolio, but business-level strategies are concerned with (1) matching
their activities with the overall goals of corporate-level strategy while simultaneously (2)
navigating the markets in which they compete in such a way that they have a financial or
market edge, a competitive advantage, relative to the other businesses in their industry.

ANALYSIS OF BUSINESS-LEVEL STRATEGIES
PORTER'S GENERIC STRATEGIES.

Harvard Business School's Michael Porter developed a framework of generic strategies
that can be applied to strategies for various products and services, or the individual
business-level strategies within a corporate portfolio. The strategies are (1) overall cost
leadership, (2) differentiation, and (3) focus on a particular market niche. The generic
strategies provide direction for business units in designing incentive systems, control
procedures, operations, and interactions with suppliers and buyers, and with making
other product decisions.

Cost-leadership strategies require firms to develop policies aimed at becoming and
remaining the lowest-cost producer and/or distributor in the industry. Note here that
the focus is on cost leadership, not price leadership. This may at first appear to be only a
semantic difference, but consider how this fine-grained definition places emphases on
controlling costs while giving firms alternatives when it comes to pricing (thus
ultimately influencing total revenues). A firm with a cost advantage may price at or near
competitors' prices, but with a lower cost of production and sales, more of the price
contributes to the firm's gross profit margin. A second alternative is to price lower than
competitors and accept slimmer gross profit margins, with the goal of gaining market
share and thus increasing sales volume to offset the decrease in gross margin. Such
strategies concentrate on construction of efficient-scale facilities, tight cost and
overhead control, avoidance of marginal customer accounts that cost more to maintain
than they offer in profits, minimization of operating expenses, reduction of input costs,
tight control of labor costs, and lower distribution costs. The low-cost leader gains
competitive advantage by getting its costs of production or distribution lower than the
costs of the other firms in its relevant market. This strategy is especially important for
firms selling unbranded products viewed as commodities, such as beef or steel.
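The two pricing alternatives described above can be made concrete with a small worked example; all of the figures below are invented for illustration:

```python
# Invented figures: our unit cost is $8 vs. a rival's $10; the rival prices at $12.
our_cost, rival_cost, rival_price = 8.0, 10.0, 12.0

# Alternative 1: match the rival's price and pocket a wider gross margin per unit.
match_margin = rival_price - our_cost   # $4 per unit, vs. the rival's $2
rival_margin = rival_price - rival_cost

# Alternative 2: undercut the rival; a thinner per-unit margin recovered via volume.
our_price = 11.0                        # priced below the competitor
undercut_margin = our_price - our_cost  # $3 per unit

# Volume needed for alternative 2 to match the gross profit of selling
# 1,000 units under alternative 1:
units_needed = (match_margin * 1000) / undercut_margin
print(match_margin, undercut_margin, units_needed)  # 4.0 3.0 1333.33...
```

The arithmetic shows why the distinction between cost leadership and price leadership matters: the cost advantage leaves the firm free to choose between a fatter margin and a volume-driven share grab.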

Cost leadership provides firms above-average returns even with strong competitive
pressures. Lower costs allow the firm to earn profits after competitors have reduced
their profit margin to zero. Low-cost production further limits pressures from customers
to lower price, as the customers are unable to purchase cheaper from a competitor. Cost
leadership may be attained via a number of techniques. Products can be designed to
simplify manufacturing. A large market share combined with concentrating selling
efforts on large customers may contribute to reduced costs. Extensive investment in
state-of-the-art facilities may also lead to long run cost reductions. Companies that
successfully use this strategy tend to be highly centralized in their structure. They place
heavy emphasis on quantitative standards and measuring performance toward goal
accomplishment.
Efficiencies that allow a firm to be the cost leader also allow it to compete effectively
with both existing competitors and potential new entrants. Finally, low costs reduce the
likely impact of substitutes. Substitutes are more likely to replace products of the more
expensive producers first, before significantly harming sales of the cost leader, unless
producers of substitutes can simultaneously develop a substitute product or service at a
lower cost than competitors. In many instances, the necessity to climb the experience
curve inhibits a new entrant's ability to pursue this tactic.

Differentiation strategies require a firm to create something about its product that is
perceived as unique within its market. Whether the features are real, or just in the mind
of the customer, customers must perceive the product as having desirable features not
commonly found in competing products. The customers also must be relatively price-
insensitive. Adding product features means that the production or distribution costs of a
differentiated product will be somewhat higher than those of a generic, non-
differentiated product. Customers must be willing to pay more than the marginal cost of
adding the differentiating feature if a differentiation strategy is to succeed.
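That success condition, the price premium must exceed the incremental cost of the differentiating feature, can be stated directly; the dollar figures are invented:

```python
# Invented figures: a lifetime warranty adds $1.50 of cost per unit,
# and customers will pay a $2.25 premium for it.
incremental_cost = 1.50
price_premium = 2.25

# Differentiation pays off only if the premium covers the added cost.
viable = price_premium > incremental_cost
margin_gain_per_unit = price_premium - incremental_cost
print(viable, margin_gain_per_unit)  # True 0.75
```

Were the premium customers will bear below $1.50, the feature would erode rather than widen the gross margin, which is the cost sensitivity the next paragraph warns about.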

Differentiation may be attained through many features that make the product or service
appear unique. Possible strategies for achieving differentiation may include warranty
(Sears tools have lifetime guarantee against breakage), brand image (Coach handbags,
Tommy Hilfiger sportswear), technology (Hewlett-Packard laser printers), features
(Jenn-Air ranges, Whirlpool appliances), service (Makita hand tools), and dealer
network (Caterpillar construction equipment), among other dimensions. Differentiation
does not allow a firm to ignore costs; it makes a firm's products less susceptible to cost
pressures from competitors because customers see the product as unique and are willing
to pay extra to have the product with the desirable features.

Differentiation often forces a firm to accept higher costs in order to make a product or
service appear unique. The uniqueness can be achieved through real product features or
advertising that causes the customer to perceive that the product is unique. Whether the
difference is achieved through adding more vegetables to the soup or effective
advertising, costs for the differentiated product will be higher than for non-
differentiated products. Thus, firms must remain sensitive to cost differences. They
must carefully monitor the incremental costs of differentiating their product and make
certain the difference is reflected in the price.

Focus, the third generic strategy, involves concentrating on a particular customer,
product line, geographical area, channel of distribution, stage in the production process,
or market niche. The underlying premise of the focus strategy is that the firm is better
able to serve its limited segment than competitors serving a broader range of customers.
Firms using a focus strategy simply apply a cost-leader or differentiation strategy to a
segment of the larger market. Firms may thus be able to differentiate themselves based
on meeting customer needs through differentiation or through low costs and
competitive pricing for specialty goods.

A focus strategy is often appropriate for small, aggressive businesses that do not have
the ability or resources to engage in a nation-wide marketing effort. Such a strategy may
also be appropriate if the target market is too small to support a large-scale operation.
Many firms start small and expand into a national organization. Wal-Mart started in
small towns in the South and Midwest. As the firm gained in market knowledge and
acceptance, it was able to expand throughout the South, then nationally, and now
internationally. The company started with a focused cost-leader strategy in its limited
market and was able to expand beyond its initial market segment.

Firms utilizing a focus strategy may also be better able to tailor advertising and
promotional efforts to a particular market niche. Many automobile dealers advertise
that they are the largest-volume dealer for a specific geographic area. Other dealers
advertise that they have the highest customer-satisfaction scores or the most awards for
their service department of any dealer within their defined market. Similarly, firms may
be able to design products specifically for a customer. Customization may range from
individually designing a product for a customer to allowing the customer input into the
finished product. Tailor-made clothing and custom-built houses include the customer in
all aspects of production from product design to final acceptance. Key decisions are
made with customer input. Providing such individualized attention to customers may
not be feasible for firms with an industry-wide orientation.
FUNCTIONAL-LEVEL STRATEGIES

Functional-level strategies are concerned with coordinating the functional areas of the
organization (marketing, finance, human resources, production, research and
development, etc.) so that each functional area upholds and contributes to individual
business-level strategies and the overall corporate-level strategy. This involves
coordinating the various functions and operations needed to design, manufacture,
deliver, and support the product or service of each business within the corporate
portfolio. Functional strategies are primarily concerned with:

• Efficiently utilizing specialists within the functional area.

• Integrating activities within the functional area (e.g., coordinating advertising,
promotion, and marketing research in marketing; or purchasing, inventory control, and
shipping in production/operations).

• Assuring that functional strategies mesh with business-level strategies and the overall
corporate-level strategy.

Functional strategies are frequently concerned with appropriate timing. For example,
advertising for a new product could be expected to begin sixty days prior to shipment of
the first product. Production could then start thirty days before shipping begins. Raw
materials, for instance, may require that orders are placed at least two weeks before
production is to start. Thus, functional strategies have a shorter time orientation than
either business-level or corporate-level strategies. Accountability is also easiest to
establish with functional strategies because results of actions occur sooner and are more
easily attributed to the function than is possible at other levels of strategy. Lower-level
managers are most directly involved with the implementation of functional strategies.
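The backward-scheduling arithmetic in the timing example above (advertising begins sixty days before first shipment, production thirty days before, and raw-material orders at least two weeks before production starts) can be sketched as a small calculation. The shipment date below is hypothetical; only the offsets come from the text.

```python
from datetime import date, timedelta

# Hypothetical first-shipment date; the 60-day, 30-day and two-week offsets
# are the ones given in the example above.
ship_date = date(2025, 6, 1)

advertising_start = ship_date - timedelta(days=60)          # ads begin 60 days before shipment
production_start = ship_date - timedelta(days=30)           # production begins 30 days before shipment
raw_material_order = production_start - timedelta(weeks=2)  # orders placed 2 weeks before production

milestones = sorted([
    ("Begin advertising", advertising_start),
    ("Order raw materials", raw_material_order),
    ("Start production", production_start),
    ("First shipment", ship_date),
], key=lambda item: item[1])

for label, day in milestones:
    print(f"{day.isoformat()}  {label}")
```

Working backward from the shipment date this way makes the shorter time orientation of functional strategies concrete: each functional milestone is pinned to the one that follows it.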

Strategies for an organization may be categorized by the level of the organization
addressed by the strategy. Corporate-level strategies involve top management and
address issues of concern to the entire organization. Business-level strategies deal with
major business units or divisions of the corporate portfolio. Business-level strategies are
generally developed by upper and middle-level managers and are intended to help the
organization achieve its corporate strategies. Functional strategies address problems
commonly faced by lower-level managers and deal with strategies for the major
organizational functions (e.g., marketing, finance, production) considered relevant for
achieving the business strategies and supporting the corporate-level strategy. Market
definition is thus the domain of corporate-level strategy, market navigation the domain
of business-level strategy, and support of business- and corporate-level strategies the
domain of individual, but integrated, functional-level strategies.

McKinsey 7S Framework

Introduction
This paper discusses McKinsey's 7S Model, which
was created by the consulting company
McKinsey and Company in the early 1980s.
Since then it has been widely used by
practitioners and academics alike in analysing
hundreds of organisations. The paper explains
each of the seven components of the model
and the links between them. It also includes
practical guidance and advice for students on
analysing organisations using this model. At
the end, some sources for further information
on the model and case studies available on this
website are mentioned.
The McKinsey 7S model was named after a
consulting company, McKinsey and Company,
which has conducted applied research in
business and industry (Pascale & Athos, 1981;
Peters & Waterman, 1982). All of the authors
worked as consultants at McKinsey and
Company; in the 1980s, they used the model to
analyse over 70 large organisations. The
McKinsey 7S Framework was created as a
recognisable and easily remembered model in
business. The seven variables, which the
authors term "levers", all begin with the letter
"S".
These seven variables include structure,
strategy, systems, skills, style, staff and shared
values. Structure is defined as the skeleton of
the organisation or the organisational chart.
The authors describe strategy as the plan or
course of action in allocating resources to
achieve identified goals over time. The systems
are the routine processes and procedures
followed within the organisation. Staff are
described in terms of personnel categories
within the organisation (e.g. engineers),
whereas the skills variable refers to the
capabilities of the staff within the organisation
as a whole. The way in which key managers
behave in achieving organisational goals is
considered to be the style variable; this variable
is thought to encompass the cultural style of the
organisation. The shared values variable,
originally termed superordinate goals, refers to
the significant meanings or guiding concepts
that organisational members share (Peters and
Waterman, 1982).
The shape of the model (as shown in figure 1)
was also designed to illustrate the
interdependency of the variables. This is
illustrated by the model also being termed
the "Managerial Molecule". While the authors
thought that other variables existed within
complex organisations, the variables
represented in the model were considered to be
of crucial importance to managers and
practitioners (Peters and Waterman, 1982).
The analysis of several organisations using the
model revealed that American companies tend
to focus on those variables which they feel they
can change (e.g. structure, strategy and
systems) while neglecting the other variables.
These other variables (e.g. skills, style, staff
and shared values) are considered to be "soft"
variables. Japanese and a few excellent
American companies are reportedly successful
at linking their structure, strategy and systems
with the soft variables. The authors have
concluded that a company cannot merely
change one or two variables to change the
whole organisation.
For long-term benefit, they feel that the
variables should be changed to become more
congruent as a system. The external
environment is not mentioned in the McKinsey
7S Framework, although the authors do
acknowledge that other variables exist and that
they depict only the most crucial variables in
the model. While alluded to in their discussion
of the model, the notion of performance or
effectiveness is not made explicit in the model.
Description of 7 Ss
Strategy: Strategy is the plan of action an
organisation prepares in response to, or
anticipation of, changes in its external
environment. Strategy is differentiated from
tactics or operational actions by its
premeditated, well-thought-through and often
practically rehearsed nature. It deals with
essentially three questions (as shown in figure
2): 1) where the organisation is at this moment
in time, 2) where the organisation wants to be
in a particular length of time and 3) how to get
there. Thus, strategy is designed to transform
the firm from the present position to the new
position described by objectives, subject to
constraints of the capabilities or the potential
(Ansoff, 1965).
Structure: Business needs to be organised in a
specific form of shape that is generally referred
to as organisational structure. Organisations
are structured in a variety of ways, dependent
on their objectives and culture. The structure of
the company often dictates the way it operates
and performs (Waterman et al., 1980).
Traditionally, businesses have been
structured in a hierarchical way with several
divisions and departments, each responsible for
a specific task such as human resources
management, production or marketing. Many
layers of management controlled the
operations, with each answerable to the upper
layer of management. Although this is still the
most widely used organisational structure, the
recent trend is increasingly towards a flat
structure where the work is done in teams of
specialists rather than fixed departments. The
idea is to make the organisation more flexible
and devolve power by empowering employees
and eliminating middle-management layers
(Boyle, 2007).

Systems: Every organisation has some systems or internal processes to support and
implement the strategy and run day-to-day
affairs. For example, a company may follow a
particular process for recruitment. These
processes are normally strictly followed and are
designed to achieve maximum effectiveness.
Traditionally, organisations have followed a
bureaucratic-style process model where most
decisions are taken at the higher management
level and even routine decisions (e.g.
procurement of everyday goods) are subject to
various, sometimes unnecessary,
requirements. Increasingly,
organisations are simplifying and modernising
their process by innovation and use of new
technology to make the decision-making
process quicker. Special emphasis is on the
customers with the intention to make the
processes that involve customers as user
friendly as possible (Lynch, 2005).
Style/Culture: All organisations have their own
distinct culture and management style. It
includes the dominant values, beliefs and
norms which develop over time and become
relatively enduring features of the
organisational life. It also entails the way
managers interact with the employees and the
way they spend their time. The businesses
have traditionally been influenced by the
military style of management and culture where
strict adherence to the upper management and
procedures was expected from the lower-rank
employees. However, there have been
extensive efforts in the past couple of decades
to change the culture to a more open, innovative
and friendly environment with fewer hierarchical
layers and a shorter chain of command. Culture
remains an important consideration in the
implementation of any strategy in the
organisation (Martins and Terblanche, 2003).
Staff: Organisations are made up of humans
and it's the people who make the real difference
to the success of the organisation in the
increasingly knowledge-based society. The
importance of human resources has thus
gained a central position in the strategy of the
organisation, displacing the traditional model of
capital and land. Leading organisations such
as IBM, Microsoft and Cisco put extraordinary
emphasis on hiring the best staff, providing
them with rigorous training and mentoring
support, and pushing their staff to the limits of
professional excellence; this forms the basis of
these organisations' strategy and competitive
advantage over their competitors. It is also
important for the
organisation to instil confidence among the
employees about their future in the organisation
and future career growth as an incentive for
hard work (Boxall and Purcell, 2003).
Shared Values/Superordinate Goals: All
members of the organisation share some
common fundamental ideas or guiding
concepts around which the business is built.
This may be to make money or to achieve
excellence in a particular field. These values
and common goals keep the employees
working towards a common destination as a
coherent team and are important to keep the
team spirit alive. The organisations with weak
values and common goals often find their
employees following their own personal goals
that may be different or even in conflict with
those of the organisation or their fellow
colleagues (Martins and Terblanche, 2003).
Using the 7S Model to Analyse an
Organisation
A detailed case study or comprehensive
material on the organisation under study is
required to analyse it using the 7S model. This
is because the model covers almost all aspects
of the business and all major parts of the
organisation. It is therefore highly important to
gather as much information about the
organisation as possible from all available
sources, such as organisational reports, news
and press releases, although primary research
(e.g. interviews) combined with a literature
review is better suited. The researcher also needs to
consider a variety of facts about the 7S model.
Some of these are detailed in the paragraphs to
follow.
The seven components described above are
normally categorised as soft and hard
components. The hard components are the
strategy, structure and systems which are
normally visible and easy to identify in an
organisation as they are normally well
documented and seen in the form of tangible
objects or reports such as strategy statements,
corporate plans, organisational charts and other
documents. The remaining four Ss, however,
are more difficult to comprehend. The
capabilities, values and elements of corporate
culture, for example, are continuously
developing and are altered by the people at
work in the organisation. It is therefore only
possible to understand these aspects by
studying the organisation very closely, normally
through observations and/or through
conducting interviews. Some linkages,
however, can be made between the hard and
soft components. For example, it is seen that a
rigid, hierarchical organisational structure
normally leads to a bureaucratic organisational
culture where the power is centralised at the
higher management level.
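The hard/soft grouping described above can be captured in a trivial data structure; the categorisation follows the text, and the snippet is only an illustrative sketch.

```python
# A trivial grouping of the seven "levers"; the hard/soft split follows the text:
# hard components are documented and tangible, soft ones must be observed.
SEVEN_S = {
    "hard": ["strategy", "structure", "systems"],
    "soft": ["skills", "style", "staff", "shared values"],
}

levers = SEVEN_S["hard"] + SEVEN_S["soft"]
print(levers)  # all seven variables, each beginning with the letter "s"
```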
It is also noted that the softer components of
the model are difficult to change and are the
most challenging elements of any change-
management strategy. Changing the culture
and overcoming the staff resistance to
changes, especially the one that alters the
power structure in the organisation and the
inherent values of the organisation, is generally
difficult to manage. However, if these factors
are altered, they can have a great impact on
the structure, strategies and the systems of the
organisation. Over the last few years, there has
been a trend to have a more open, flexible and
dynamic culture in the organisation where the
employees are valued and innovation
encouraged. This is, however, not easy to
achieve where the traditional culture has been
dominant for decades, and therefore many
organisations are in a state of flux in managing
this change. What compounds their problems is
their focus on only the hard components and
neglecting the softer issues identified in the
model which is without doubt a recipe for
failure. Similarly, when analysing an
organisation using the 7S model, it is important
for the researcher to give more time and effort
to understanding the real dynamics of the
organisation's soft aspects as these underlying
values in reality drive the organisations by
affecting the decision-making at all levels. It is
too easy to fall into the trap of only
concentrating on the hard factors as they are
readily available from organisations' reports etc.
However, to achieve higher marks, students
must analyse in depth the cultural dimension of
the structure, processes and decisions made in
an organisation.
For more advanced analysis, the student should
not just write about these components
individually but also highlight how they interact
and affect each other; in other words, how
one component is affected by changes in the
others. In particular, "cause and effect"
analyses of soft and hard components often
yield a very interesting analysis and provide
readers with an in-depth understanding of what
caused the change.
Sources for Data on McKinsey's 7S Model
The main sources of academic work on the 7S
model are the writings of Waterman et al.
(1980; 1982) and Pascale and Athos (1981),
who came up with the idea and applied it to
analyse over 70 large organisations. Since
then, it has been used by hundreds of
organisations and academics for analytical
purposes. Many such case studies can be
obtained from the academic journals and the
books written on the topic. A few case studies,
for example the analyses of Coca-Cola and
energy giant Centrica (Owner of British Gas),
are also available at this website.

Downsizing and Rightsizing


Downsizing refers to the permanent reduction of a company's workforce and is generally
associated with corporate reorganization, or creating a "leaner, meaner" company. For example,
the database developer Oracle Corporation reduced its number of employees by 5,000 after
acquiring rival PeopleSoft. Downsizing is certainly not limited to the U.S.; Air Jamaica cut 15
percent of its workforce in an effort to trim expenses amid anticipated revenue shortfalls.
Downsizings such as these are also commonly called reorganizing, reengineering, restructuring,
or rightsizing. Regardless of the label applied, however, downsizing essentially refers to layoffs
that may or may not be accompanied by systematic restructuring programs, such as staff
reductions, departmental consolidations, plant or office closings, or other forms of reducing
payroll expenses. Corporate downsizing results from both poor economic conditions and
company decisions to eliminate jobs in order to cut costs and maintain or achieve specific levels
of profitability. Companies may lay off a percentage of their employees in response to these
changes: a slowed economy, merging with or acquiring other companies, the cutting of product
or service lines, competitors grabbing a higher proportion of market share, distributors forcing
price concessions from suppliers, or a multitude of other events that have a negative impact on
specific organizations or entire industries. In addition, downsizing may stem from restructuring
efforts to maximize efficiency, to cut corporate bureaucracy and hierarchy and thereby reduce
costs, to focus on core business functions and outsource non-core functions, and to use part-time
and temporary workers to complete tasks previously performed by full-time workers in order to
trim payroll costs.
The following sections discuss trends in downsizing, the growth of downsizing, downsizing and
restructuring, criticisms of downsizing, support for downsizing, and downsizing and
management.
TRENDS IN DOWNSIZING
As a major trend among U.S. businesses, downsizing began in the 1980s and continued through
the 1990s largely unabated and even growing. During this time, many of the country's largest
corporations participated in the trend, including General Motors, AT&T, Delta Airlines, Eastman
Kodak, IBM, and Sears, Roebuck and Company. In the twenty-first century, downsizing
continued after a sharp decline in the stock market early in the century, followed by
continued pressure on corporate earnings in the aftermath of the September 11, 2001, terrorist
attacks. Downsizing affects most sectors of the labor market, including retail, industrial,
managerial, and office jobs, impacting workers in a wide range of income levels. Table 1
compares the number of temporarily downsized workers with the number of permanently
downsized workers.
While layoffs are a customary measure for companies to help compensate for the effects of
recessions, downsizing also occurs during periods of economic prosperity, even when companies
themselves are doing well. Consequently, downsizing is a controversial corporate practice that
receives support and even praise from executives, shareholders, and some economists, and
criticism from employees, unions, and community activists. Reports of executive salaries
growing in the face of downsizing and stagnant wages for retained employees only fan the
flames of this criticism. In contrast, announcements of downsizing are well received in the stock
markets. It is not uncommon for a company's stock value to rise following a downsizing
announcement.
Table 1

Type of Downsizing   | October 2004 | November 2004 | December 2004 | January 2005 | February 2005
Temporary downsizing | 947,000      | 941,000       | 965,000       | 966,000      | 965,000
Permanent downsizing | 3,127,000    | 3,124,000     | 3,144,000     | 3,082,000    | 3,015,000
However, economists remain optimistic about downsizing and its effects on the
economy when the rate of overall job growth outpaces the rate of job elimination. A trend toward
outsourcing jobs overseas to countries with lower labor costs is a form of downsizing that affects
some U.S. employees. These jobs are not actually eliminated, but instead moved out of reach of
the employees who lose their jobs to outsourcing. Some economists, however, suggest that the
overall net effect of such outsourced jobs will actually be an increase in U.S. jobs as resulting
corporate operating efficiencies allow for more employment of higher-tier (and thus higher-
wage) positions. Regardless of whether downsizing is good or bad for the national economy,
companies continue to downsize and the trend shows few signs of slowing down. For some
sectors, this trend is projected to be particularly prevalent through 2012, as shown in Table 2.
Table 2

Occupation                          | Projected Decline
Chemical plant and system operators | -12%
Travel agents                       | -14%
Brokerage clerks                    | -15%
Fisheries workers                   | -27%
Textile workers                     | -34%
Word processors and typists         | -39%
Telephone operators                 | -56%
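Applying a projected percentage decline like those in Table 2 to a current headcount is simple arithmetic; the headcount figure below is hypothetical, while the -56% projection is the one given for telephone operators.

```python
def projected_headcount(current, decline_pct):
    """Headcount remaining after applying a projected percentage decline."""
    return round(current * (1 + decline_pct / 100))

# A hypothetical pool of 10,000 telephone operators facing the -56% projection:
print(projected_headcount(10_000, -56))  # → 4400
```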

THE GROWTH OF DOWNSIZING


The corporate downsizing trend grew out of the economic conditions of the late 1970s, when
direct international competition began to increase. The major industries affected by this stiffer
competition included the automotive, electronics, machine tool, and steel industries. In contrast
to their major competitors—Japanese manufacturers—U.S. companies had significantly higher
costs. For example, U.S. automobile manufacturers had approximately a $1,000 cost
disadvantage for their cars compared to similar classes of Japanese cars. Only a small percentage
of this cost difference could be attributed to labor costs; nevertheless, labor costs were among the
first to be cut despite other costs associated with the general structure of the auto companies and
their oversupply of middle managers and engineers. Auto workers were among the first to be laid
off during the initial wave of downsizing. Other U.S. manufacturing industries faced similar
competitive problems during this period, as did some U.S. technology industries. Companies in
these industries, like those in the auto industry, suffered from higher per-unit costs and greater
overhead than their Japanese counterparts due to lower labor productivity and a glut of white-
collar workers in many U.S. companies.
To remedy these problems, U.S. companies implemented two key changes: they formed
partnerships with Japanese companies to learn the methods behind their cost efficiencies and
they strove to reduce costs and expedite decision-making by getting rid of unnecessary layers of
bureaucracy and management. Nevertheless, some companies began simply to cut their
workforce without determining whether or not it was necessary and without any kind of
accompanying strategy. In essence, they downsized because they lacked new products that would
have stimulated growth and because their existing product markets were decreasing.
DOWNSIZING AND RESTRUCTURING
Downsizing generally accompanies some kind of restructuring and reorganizing, either as part of
the downsizing plan or as a consequence of downsizing. Since companies frequently lose a
significant amount of employees when downsizing, they usually must reallocate tasks and
responsibilities. In essence, restructuring efforts attempt to increase the amount of work output
relative to the amount of work input. Consequently, downsizing often accompanies corporate
calls for concentration on "core capabilities" or "core businesses," which refers to the interest in
focusing on the primary revenue-generating aspects of a business. The jobs and responsibilities
that are not considered part of the primary revenue-generating functions are the ones that are
frequently downsized. These jobs might then be outsourced or handled by outside consultants
and workers on a contract basis.
Eliminating non-core aspects of a business may also include the reduction of bureaucracy and the
number of corporate layers. Since dense bureaucracy frequently causes delays in communication
and decision-making, the reduction of bureaucracy may help bring about a more efficient and
responsive corporate structure that can implement new ideas more quickly.
Besides laying off workers, restructuring efforts may involve closing plants, selling non-core
operations, acquiring or merging with related companies, and over-hauling the internal structure
of a company. The seminal work on restructuring or reengineering, Reengineering the Corporation,
by Michael Hammer and James Champy, characterizes the process as the "fundamental
rethinking and radical redesign of business processes to achieve dramatic improvements in
critical, contemporary measures of performance such as cost, quality, service, and speed." While
discussion of reengineering is common and reengineering is often associated with downsizing,
Hammer and Champy argue that reengineering efforts are not always this profound. Hence, these
efforts frequently have mixed results.
Downsizing and reengineering programs may result from the implementation of new, labor-
saving technology. For example, the introduction of the personal computer into the office has
facilitated instantaneous communication and has thus reduced the need for office support
positions, such as secretaries.
CRITICISM OF DOWNSIZING
While companies frequently implement downsizing plans to increase profitability and
productivity, downsizing does not always yield these results. Although critics of downsizing do
not rule out the benefits in all cases, they contend that downsizing is over-applied and often used
as a quick fix without sufficient planning to bring about long-term benefits. Moreover,
downsizing can lead to additional problems, such as poor customer service, low employee
morale, and bad employee attitudes. Laying workers off to improve competitiveness often fails
to produce the intended results because downsizing can lead to the following unforeseen
problems and difficulties:
• The loss of highly-skilled and reliable workers and the added expense of
finding new workers.
• An increase in overtime wages.
• A decline in customer service because workers feel they lack job security
after layoffs.
• Employee attitudes that may change for the worse, possibly leading to
tardiness, absenteeism, and reduced productivity.
• An increase in the number of lawsuits and disability claims, which tends to
occur after downsizing episodes.
• Restructuring programs sometimes take years to bear fruit because of
ensuing employee confusion and the amount of time it takes for employees
to adjust to their new roles and responsibilities.
Some studies have indicated that the economic advantages of downsizing have failed to come
about in many cases, and that downsizing may have had a negative impact on company
competitiveness and profitability in some cases.
Downsizing has repercussions that extend beyond the companies and their employees. For
example, governments must sometimes enact programs to help displaced workers obtain training
and receive job placement assistance. Labor groups have reacted to the frequency and magnitude
of downsizing, and unions have taken tougher stances in negotiations because of it.
Instead of laying employees off, critics recommend that companies eliminate jobs only as a last
resort; not as a quick fix when profits fail to meet quarterly projections. Suggested alternatives to
downsizing include early retirement packages and voluntary severance programs. Furthermore,
some analysts suggest that companies can improve their efficiency, productivity, and
competitiveness through quality initiatives such as Six Sigma, empowering employees through
progressive human resource strategies that encourage employee loyalty and stability, and other
such techniques.
SUPPORT FOR DOWNSIZING
Advocates of downsizing counter critics' claims by arguing that, through downsizing, the United
States has maintained its position as one of the world's leading economies. Economists point out
that despite the downsizing that has become commonplace since the 1970s, overall U.S.
standards of living, productivity, and corporate investment have grown at a healthy pace. They
reason that without downsizing, companies would not remain profitable and hence would go
bankrupt when there is fierce competition and slow growth. Therefore, some executives and
economists see downsizing as a necessary albeit painful task, and one that ultimately saves the
larger number of jobs that would be lost if a company went out of business.
Advocates of downsizing also argue that job creation from technological advances offsets job
declines from downsizing. Hence, displaced workers are able to find new jobs relatively easily,
especially if those workers have skills that enhance the technological competence of prospective
employers. In other words, despite the admitted discomfort and difficulties that downsizing has
on displaced workers, some workers are able to locate new jobs and companies are able to
achieve greater efficiency, competitiveness, and profitability. Moreover, even though downsizing
may not solve all of a company's competitive problems or bolster a company's profits
indefinitely, downsizing can help reduce costs, which can lead to greater short-term profitability.
In addition, advocates of downsizing contend that staff-reduction efforts help move workers
from mature, moribund, and obsolete industries to emerging and growing industries, where they
are needed. Economists argue that this process strengthens the economy and helps it grow. This
process also enables companies with growing competitive advantages to maintain their positions
in the market in the face of greater domestic and global competition, and it is the difficult but
necessary result of the transition toward a global economy.
DOWNSIZING AND MANAGEMENT
Downsizing poses the immediate managerial problem of dismissing a large number of employees
in a dignified manner in order to help minimize the trauma associated with downsizing.
Employees who are laid off tend to suffer from depression, anxiety, insomnia, high blood
pressure, marital discord, and a host of other problems. Thus, when companies decide that
downsizing is the best course of action, managers should do so in a way that does the least harm
to employees and their families. This includes taking the time to allow dismissed employees to
air their thoughts, instead of laying them off quickly and impersonally, and providing assistance
in finding new jobs.
Because of the possible negative effects that occur after downsizing, managers may have to
implement measures to counteract employee apathy, improve customer service, and restore
employee trust. Analysts of downsized companies argue that managers should take steps
immediately after workforce reductions to provide the remaining workers with the support and
guidance they need. This involves providing employees with clear indications of what is
expected of them and how they can meet increased productivity goals. Managers should confer
with employees regularly to discuss performance and strategies for meeting the goals. In
addition, managers should encourage employee initiative and communication and provide
employees with rewards for excellent work. By promoting employee initiative and even
employee involvement in decision-making, managers can help restore employee trust and
commitment and help increase employee motivation.
The aftermath of downsizing also places greater demands on managers to make do with less. In
other words, managers must strive to maintain or increase productivity and quality levels despite
having a smaller workforce. Since downsizing often brings about a flatter corporate structure, the
flow of information and communication no longer requires the effort needed prior to
restructuring. Therefore, reports used for communication between layers of the old corporate
hierarchy, for example, can be eliminated. If redundant or nonessential work cannot be
completely eliminated, it perhaps can be reduced. By studying particular tasks and determining
their essential components, managers can get rid of unnecessary tasks and eliminate unnecessary
jobs altogether.
Downsizing appears to be an ongoing practice for the foreseeable future. Top managers with
responsibility for making downsizing decisions are in a difficult predicament. Failure to
downsize may result in inefficiencies, while downsizing clearly has a number of potentially
negative effects on individuals and communities. Finding the balance between these outcomes is
the primary challenge facing these managers.

par value

Definition
The nominal dollar amount assigned to a security by the issuer. For an equity security, par value is usually a very
small amount that bears no relationship to its market price, except for preferred stock, in which case par value is
used to calculate dividend payments. For a debt security, par value is the amount repaid to the investor when the
bond matures (usually, corporate bonds have a par value of $1000, municipal bonds $5000, and federal bonds
$10,000). In the secondary market, a bond's price fluctuates with interest rates. If interest rates are higher than
the coupon rate on a bond, the bond will be sold below par value (at a "discount"). If interest rates have fallen, the
price will be above par value (at a "premium").

Par value
From Wikipedia, the free encyclopedia

Par value, in finance and accounting, means stated value or face value. From this comes the
expressions at par (at the par value), over par (over par value) and under par (under par value).
The term "par value" has several meanings depending on context and geography.
Bonds
In the U.S. bond markets, the Par Value (as stated on the face of the bond) is the amount that the
issuing firm is to pay to the bond holder at the maturity date. The present value of the par value
plus the present value of the annuity of interest payments equals the bond price.
A bond is worth its par value when the price is equal to the face value. When a bond is worth less
than its par value, it is priced at a discount; conversely when a bond is valued above its par value,
the bond is priced at a premium.
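The pricing rule above (price equals the present value of par plus the present value of the coupon annuity) can be sketched in a few lines of Python. This is a minimal illustration assuming semiannual coupons and a flat market yield; `bond_price` is our own name, not a standard API:

```python
def bond_price(par, coupon_rate, market_yield, years, freq=2):
    """Price = PV of the coupon annuity + PV of the par repayment."""
    c = par * coupon_rate / freq        # coupon paid each period
    r = market_yield / freq             # market yield per period
    n = years * freq                    # total number of periods
    pv_coupons = c * (1 - (1 + r) ** -n) / r
    pv_par = par * (1 + r) ** -n
    return pv_coupons + pv_par

# Coupon equal to the market yield: the bond prices at par.
print(bond_price(1000, 0.05, 0.05, 10))   # ≈ 1000 (at par)
# Coupon below the market yield: the bond prices at a discount.
print(bond_price(1000, 0.05, 0.06, 10))   # < 1000 (discount)
```

When the required yield rises above the coupon rate, both present-value terms shrink, which is exactly the "priced at a discount" case described above; a yield below the coupon rate gives a premium.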
Stock
Par value stock has no relation to market value and, as a concept, is somewhat archaic. The par
value of a stock was the share price upon initial offering; the issuing company promised not to
issue further shares below par value, so investors could be confident that no one else was
receiving a more favorable issue price. Thus, par value is a nominal value of a security which
is determined by an issuing company as a minimum price. This was far more important in
unregulated equity markets than in the regulated markets that exist today.
Par value also has bookkeeping purposes. It allows the company to put a de minimis value for the
stock on the company's financial statement.
Many common stocks issued today do not have par values; those that do (usually only in
jurisdictions where par values are required by law) have extremely low par values (often the
smallest unit of currency in circulation), for example a penny par value on a stock issued at
USD$25/share. Most states do not allow a company to issue stock below par value.
No-par stocks have "no par value" printed on their certificates. Instead of par value, some U.S.
states allow no-par stocks to have a stated value, set by the board of directors of the corporation,
which serves the same purpose as par value in setting the minimum legal capital that the
corporation must have after paying any dividends or buying back its stock.
For preferred stock, par value remains relevant and tends to reflect the issue price. Dividends on
preferred stocks are calculated as a percentage of par value.
Also, par value still matters for a callable common stock: the call price is usually either par value
or a small fixed percentage over par value.
In the United States, it is legal for a corporation to issue "watered" shares below par value.
However, the purchasers of "watered" shares incur an accounting liability to the corporation for
the difference between the par value and the price they paid. Today, in many jurisdictions, par
values are no longer required for common stocks.
Currency
The term "at par" is also used when two currencies are exchanged at equal value (for instance, in
1964, Trinidad and Tobago switched from the British West Indies dollar to the new Trinidad and Tobago
dollar, and that switch was "at par", meaning that the Central Bank of Trinidad and Tobago replaced
each old dollar with a new one).

Coupon (bond)
From Wikipedia, the free encyclopedia

Uncut bond coupons on 1922 Mecca Temple (NY, NY, U.S.A.) construction bond

The coupon or coupon rate of a bond is the amount of interest paid per year expressed as a
percentage of the face value of the bond. It is the interest rate that a bond issuer will pay to a
bondholder.[1]
Overview
For example, if you hold $10,000 nominal of a bond described as a 4.5% loan stock, you will
receive $450 in interest each year (probably in two installments of $225 each, as semi-annual
payments).
Not all bonds have coupons. Zero-coupon bonds are those which do not pay interest, but are sold
at the initial offering to investors at a price less than the par value. When held to maturity, the
bond is redeemed for par value.
The origin of the expression "coupon" is that bonds were historically issued as bearer certificates,
so that possession of the certificate was conclusive proof of ownership. Several coupons, one for
each scheduled interest payment covering a number of years, were printed on the certificate. At
the due date the owner would physically detach the coupon and present it for payment of the
interest (known as "clipping the coupon").[2]
Between the issue date and the redemption date, the price of a bond will be determined by the
market, taking into account among other things:
• The amount and date of the redemption payment at maturity;
• The amounts and dates of the coupons;
• The ability of the issuer to pay interest and repay the principal at maturity;
• The yield offered by other similar bonds in the market.

Principal value
The amount that the issuer of a bond agrees to repay the bondholder at the
maturity date. The principal is also referred to as redemption value, maturity value,
par value or face value.

Similar financial terms


Principal Orders
Principal orders refers to the activity by a broker or dealer who buys or sells for his or her own account and risk.

Principal
The par or face value of a debt instrument

Value additivity principle


Prevails when the value of a whole group of assets exactly equals the sum of the values of the individual assets that make
up the group of assets. Stated differently, the principle that the net present value of a set of independent projects is just
the sum of the net present values of the individual projects.

Remaining principal balance


The amount of principal dollars remaining to be paid under the mortgage as of a given point in time.

Principal only (PO)


A mortgage-backed security (MBS) in which the holder receives only principal cash flows on the underlying mortgage pool.
The principal-only portion of a stripped MBS. For PO securities, all of the principal distribution due from the underlying
collateral pool is paid to the registered holder of the stripped MBS based on the current face value of the underlying
collateral pool.

Principal-agent relationship
A situation that can be modeled as one person, an agent, who acts on behalf of another person, the principal.

Principle of diversification
Highly diversified portfolios will have negligible unsystematic risk. In other words, unsystematic risks disappear in
portfolios, and only systematic risks survive.

Notional principal amount


In an interest rate swap, the predetermined dollar principal on which the exchanged interest payments are based.
Original principal balance
The total amount of principal owed on a mortgage before any payments are made.

Back-end value
The amount paid to remaining shareholders in the second stage of a two-tier or partial tender offer.

Going-concern value
The value of a company as a whole over and above the sum of the values of each of its parts; the value of organization
learning and reputation.

Terminal value
The value at maturity.

Book value per share


The intrinsic value of a company's stock. BVPS is calculated by dividing tangible capital dollar value by the number of
outstanding shares of common stock.

Face value
Alternative name for par value.

Adjusted present value (APV)


The net present value analysis of an asset if financed solely by equity (present value of un-levered cash flows), plus the
present value of any financing decisions (levered cash flows). In other words, the various tax shields provided by the
deductibility of interest and the benefits of other investment tax credits are calculated separately. This analysis is often
used for highly leveraged transactions such as a leverage buy-out.

Value manager
A manager who seeks to buy stocks that are at a discount to their "fair value" and sell them at or in excess of that value.
Often a value stock is one with a low price to book value ratio.

Value dating
Refers to when value or credit is given for funds transferred between banks.

Value date
In the market for eurodollar deposits and foreign exchange, value date refers to the delivery date of funds traded. Normally
it is on spot transactions two days after a transaction is agreed upon and the future date in the case of a forward foreign
exchange trade.

Value-at-Risk
A value-at-risk (VAR) model is a procedure for estimating the probability of portfolio losses exceeding some specified
proportion based on a statistical analysis of historical market price trends, correlations, and volatilities.

Value-added tax
Value-added tax (VAT) is a method of indirect taxation whereby a tax is levied at each stage of production on the value
added at that specific stage.

Utility value
The welfare a given investor assigns to an investment with a particular return and risk.

Time value of money


The idea that a dollar today is worth more than a dollar in the future, because the dollar received today can earn interest
up until the time the future dollar is received.
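The idea can be sketched with the standard compound-interest formulas. This is a minimal illustration; the function names are ours:

```python
def future_value(pv, rate, years):
    """What a dollar today grows to at a compound annual rate."""
    return pv * (1 + rate) ** years

def present_value(fv, rate, years):
    """What a future dollar is worth today (discounting, the inverse operation)."""
    return fv / (1 + rate) ** years

fv = future_value(100, 0.05, 10)      # $100 today at 5% grows to ≈ $162.89 in 10 years
pv = present_value(fv, 0.05, 10)      # discounting it back recovers ≈ $100
```

Because discounting is the exact inverse of compounding, a dollar receivable in the future is always worth less than a dollar in hand today whenever the interest rate is positive.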

Time value of an option


The portion of an option's premium that is based on the amount of time remaining until the expiration date of the option
contract, and that the underlying components that determine the value of the option may change during that time. Time
value is generally equal to the difference between the premium and the intrinsic value.

Straight value
Also called investment value, the value of a convertible security without the conversion option.

Standardized value
Also called the normal deviate, the distance of one data point from the mean, divided by the standard deviation of the
distribution.
Salvage value
Scrap value of plant and equipment.

Residual value
Usually refers to the value of a lessor's property at the time the lease expires.

Replacement value
Current cost of replacing the firm's assets.

Relative value
The attractiveness measured in terms of risk, liquidity, and return of one instrument relative to another, or for a given
instrument, of one maturity relative to another.

Price value of a basis point (PVBP)


Also called the dollar value of a basis point, a measure of the change in the price of the bond if the required yield changes
by one basis point.
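PVBP can be illustrated by repricing a bond after a one-basis-point move in the required yield. This is a rough sketch using a standard present-value bond price; the 10-year, 5% semiannual-coupon bond below is a hypothetical example, not taken from the text:

```python
def bond_price(par, coupon_rate, y, years, freq=2):
    c = par * coupon_rate / freq                  # periodic coupon
    r = y / freq                                  # periodic required yield
    n = years * freq                              # number of periods
    return c * (1 - (1 + r) ** -n) / r + par * (1 + r) ** -n

base = bond_price(1000, 0.05, 0.0500, 10)
bumped = bond_price(1000, 0.05, 0.0501, 10)       # required yield up one basis point
pvbp = abs(bumped - base)                         # dollar price change, ≈ $0.78 here
```

The result matches the duration approximation: price change ≈ modified duration × 0.0001 × price.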

Present value of growth opportunities (PVGO)


The net present value (NPV) of investments the firm is expected to make in the future.

Present value factor


Factor used to calculate an estimate of the present value of an amount to be received in a future period.

Present value
The amount of cash today that is equivalent in value to a payment, or to a stream of payments, to be received in the
future.

Par value
Also called the maturity value or face value, the amount that the issuer agrees to pay at the maturity date.

Original face value


The principal amount of the mortgage as of its issue date.

Net salvage value


The after-tax net cash flow for terminating the project.

Net present value rule


An investment is worth making if it has a positive NPV. Projects with negative NPVs should be rejected.

Net present value of future investments


The present value of the total sum of NPVs expected to result from all of the firm's future investments.

Net present value of growth opportunities


A model valuing a firm in which net present value of new investment opportunities is explicitly examined.

Net present value (NPV)


The present value of the expected future cash flows minus the cost.
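The definition translates directly into code. This is a minimal sketch; `npv` here is our own helper, with cash flows indexed by year and the cost entered as a negative flow at time zero:

```python
def npv(rate, cashflows):
    """Discount each cash flow back to time 0; cashflows[0] is the initial outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# A $100 outlay returning $60 at the end of each of the next two years, at 10%:
value = npv(0.10, [-100, 60, 60])     # positive, so the net present value rule says accept
```

This is the same calculation the net present value rule applies: accept the project only if the discounted inflows exceed the cost.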

Net book value


The current book value of an asset or liability; that is, its original book value net of any accounting adjustments such as
depreciation.

Net asset value (NAV)


The value of a fund's investments. For a mutual fund, the net asset value per share usually represents the fund's market
price, subject to a possible sales or redemption charge. For a closed end fund, the market price may vary significantly from
the net asset value.

Net adjusted present value


The adjusted present value minus the initial cost of an investment.

Market value-weighted index


An index of a group of securities computed by calculating a weighted average of the returns on each security in the index,
with the weights proportional to outstanding market value.
Market value ratios
Ratios that relate the market price of the firm's common stock to selected financial statement items.

Market value
(a) The price at which a security is trading and could presumably be purchased or sold. (b) The value investors believe a
firm is worth; calculated by multiplying the number of shares outstanding by the current market price of a firm's shares.

Loan value
The amount a policyholder may borrow against a whole life insurance policy at the interest rate specified in the policy.

Liquidation value
Net amount that could be realized by selling the assets of a firm after paying the debt.

Bond value
With respect to convertible bonds, the value the security would have if it were not convertible apart from the conversion
option.

Book value
A company's book value is its total assets minus intangible assets and liabilities, such as debt. A company's book value
might be more or less than its market value.

Cash-surrender value
An amount the insurance company will pay if the policyholder ends a whole life insurance policy.

Conversion value
Also called parity value, the value of a convertible security if it is converted immediately.

Embedded value
A methodology that reflects future shareholder profits in the life insurance business. Embedded value equals the free
surplus plus the value of inforce business. Embedded value is hard to compare with different companies since each
company determines its own input parameters, for example the level of target surplus.

Salvage Value
Is the amount remaining after a depreciated useful life. It refers to the residual or recoverable value of a depreciated asset.
It should be noted that the gross salvage value may be adjusted by a removal or disposal cost. This adjustment would
lower the gross salvage value.

Extrinsic Value
The time value component of an option premium.

Trade credit
From Wikipedia, the free encyclopedia

Trade credit is an arrangement in which a supplier allows a customer to buy goods or services
now and pay later. For example, Wal-Mart, the largest retailer in the world, has used trade credit as a
larger source of capital than bank borrowings; trade credit for Wal-Mart is 8 times the amount of
capital invested by shareholders.[1]
There are many forms of trade credit in common use. Various industries use various specialized
forms. They all have, in common, the collaboration of businesses to make efficient use of capital
to accomplish various business objectives.
Example
The operator of an ice cream stand may sign a franchising agreement, under which the distributor
agrees to provide ice cream stock under the terms "Net 60" with a ten percent discount on
payment within 30 days, and a 20% discount on payment within 10 days. This means that the
operator has 60 days to pay the invoice in full. If sales are good within the first week, the
operator may be able to send a cheque for all or part of the invoice, and make an extra 20% on
the ice cream sold. However, if sales are slow, leading to a month of low cash flow, then the
operator may decide to pay within 30 days, obtaining a 10% discount, or use the money another
30 days and pay the full invoice amount within 60 days.
The ice cream distributor can do the same thing. Receiving trade credit from milk and sugar
suppliers on terms of Net 30, 2% discount if paid within ten days, means they are apparently
taking a loss or disadvantageous position in this web of trade credit balances. Why would they
do this? First, they have a substantial markup on the ingredients and other costs of production of
the ice cream they sell to the operator. There are many reasons and ways to manage trade credit
terms for the benefit of a business. The ice cream distributor may be well-capitalized either from
the owners' investment or from accumulated profits, and may be looking to expand its markets.
They may be aggressive in attempting to locate new customers or to help them get established. It
is not in their interests for customers to go out of business from cash flow instabilities, so their
financial terms aim to accomplish two things:
1. Allow startup ice cream parlors the ability to mismanage their investment in
inventory for a while, while learning their markets, without having a dramatic
negative balance in their bank account which could put them out of business.
This is, in effect, a short-term business loan made to help expand the
distributor's market and customer base.
2. By tracking who pays, and when, the distributor can see potential problems
developing and take steps to reduce or increase the allowed amount of trade
credit he extends to prospering or faltering businesses. This limits the
exposure to losses from customers going bankrupt who would never pay for the
ice cream delivered.

Trade Credit
Definition: An arrangement to buy goods or services on account, that is, without making immediate cash
payment
For many businesses, trade credit is an essential tool for financing growth. Trade credit is the credit
extended to you by suppliers who let you buy now and pay later. Any time you take delivery of materials,
equipment or other valuables without paying cash on the spot, you're using trade credit.
When you're first starting your business, however, suppliers most likely aren't going to offer you trade credit. They're going to want to
make every order c.o.d. (cash or check on delivery) or paid by credit card in advance until you've established that you can pay your bills
on time. While this is a fairly normal practice, you can still try to negotiate trade credit with suppliers. One of the things that will help
you in these negotiations is a properly prepared financial plan.

When you visit your supplier to set up your order during your startup period, ask to speak directly to the owner of the business if it's a
small company. If it's a larger business, ask to speak to the CFO or any other person who approves credit. Introduce yourself. Show the
officer the financial plan you've prepared. Tell the owner or financial officer about your business, and explain that you need to get your
first orders on credit in order to launch your venture.

Depending on the terms available from your suppliers, the cost of trade credit can be quite high. For example, assume you make a
purchase from a supplier who decides to extend credit to you, on terms of a two-percent cash discount within 10 days and a net
date of 30 days. Essentially, the supplier is saying that if you pay within 10 days, the purchase price will be discounted by two
percent. On the other hand, by forfeiting the two-percent discount, you're able to use your money for 20 more days. On an
annualized basis, this is actually costing you 36 percent of the total cost of the items you are purchasing from this supplier
(360 ÷ 20 days = 18 periods per year; 18 × 2 percent = 36 percent annualized).
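The annualization above generalizes to any discount terms. This is a simple, non-compounded sketch using a 360-day year, matching the worked example; the function name is ours:

```python
def cost_of_forgone_discount(discount, discount_days, net_days, year_days=360):
    """Annualized cost of skipping the cash discount and paying on the net date."""
    extra_days = net_days - discount_days       # extra days of credit you "buy"
    periods_per_year = year_days / extra_days   # how many such periods fit in a year
    return discount * periods_per_year

# Terms of 2% / 10 days, net 30: 360 / 20 = 18 periods; 18 x 2% = 36% per year
rate = cost_of_forgone_discount(0.02, 10, 30)   # ≈ 0.36
```

Seen this way, forgoing the discount is equivalent to borrowing from the supplier at a steep annual rate, which is why well-run firms usually pay within the discount window.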

Cash discounts aren't the only factor you have to consider in the equation. There are also late-payment or delinquency penalties should
you extend payment beyond the agreed-upon terms. These can usually run between one and two percent on a monthly basis. If you miss
your net payment date for an entire year, that can cost you as much as 12 to 24 percent in penalty interest.

Effective use of trade credit requires intelligent planning to avoid unnecessary costs through forfeiture of cash discounts or the incurring
of delinquency penalties. But every business should take full advantage of trade credit that is available without additional cost in order to
reduce its need for capital from other sources.
Classical conditioning
From Wikipedia, the free encyclopedia


Classical conditioning (also Pavlovian or respondent conditioning, or Pavlovian reinforcement) is a
form of associative learning that was first demonstrated by Ivan Pavlov.[1] The
typical procedure for inducing classical conditioning involves presentations of a neutral stimulus
along with a stimulus of some significance. The neutral stimulus could be any event that does not
result in an overt behavioral response from the organism under investigation. Pavlov referred to
this as a conditioned stimulus (CS). Conversely, presentation of the significant stimulus
necessarily evokes an innate, often reflexive, response. Pavlov called these the unconditioned
stimulus (US) and unconditioned response (UR), respectively. If the CS and the US are
repeatedly paired, eventually the two stimuli become associated and the organism begins to
produce a behavioral response to the CS. Pavlov called this the conditioned response (CR).
Popular forms of classical conditioning that are used to study neural structures and functions that
underlie learning and memory include fear conditioning, eyeblink conditioning, and the foot
contraction conditioning of Hermissenda crassicornis.

One of Pavlov’s dogs with a surgically implanted cannula to measure salivation, Pavlov
Museum, 2005

The original and most famous example of classical conditioning involved the salivary
conditioning of Pavlov's dogs. During his research on the physiology of digestion in dogs,
Pavlov noticed that, rather than simply salivating in the presence of meat powder (an innate
response to food that he called the unconditioned response), the dogs began to salivate in the
presence of the lab technician who normally fed them. Pavlov called these psychic secretions.
From this observation he predicted that, if a particular stimulus in the dog’s surroundings were
present when the dog was presented with meat powder, then this stimulus would become
associated with food and cause salivation on its own. In his initial experiment, Pavlov used a
metronome to call the dogs to their food and, after a few repetitions, the dogs started to salivate
in response to the metronome.
Types

Diagram representing forward conditioning. The time interval increases from left to
right.

Forward conditioning: During forward conditioning the onset of the CS precedes the onset of
the US. Two common forms of forward conditioning are delay and trace conditioning.
Delay conditioning: In delay conditioning the CS is presented and is overlapped by the
presentation of the US.
Trace conditioning: During trace conditioning the CS and US do not overlap. Instead, the CS is
presented, a period of time is allowed to elapse during which no stimuli are presented, and then
the US is presented. The stimulus-free period is called the trace interval. It may also be called the
"conditioning interval".
Simultaneous conditioning: During simultaneous conditioning, the CS and US are presented
and terminated at the same time.
Backward conditioning: Backward conditioning occurs when a conditioned stimulus
immediately follows an unconditioned stimulus. Unlike traditional conditioning models, in
which the conditioned stimulus precedes the unconditioned stimulus, the conditioned response
tends to be inhibitory. This is because the conditioned stimulus serves as a signal that the
unconditioned stimulus has ended, rather than a reliable method of predicting the future
occurrence of the unconditioned stimulus.
Temporal conditioning: The US is presented at regularly timed intervals, and CR acquisition is
dependent upon correct timing of the interval between US presentations. The background, or
context, can serve as the CS in this example.
Unpaired conditioning: The CS and US are not presented together. Usually they are presented
as independent trials that are separated by a variable, or pseudo-random, interval. This procedure
is used to study non-associative behavioral responses, such as sensitization.
CS-alone extinction: The CS is presented in the absence of the US. This procedure is usually
done after the CR has been acquired through forward conditioning training. Eventually, the CR
frequency is reduced to pre-training levels.
Procedure variations
In addition to the simple procedures described above, some classical conditioning studies are
designed to tap into more complex learning processes. Some common variations are discussed
below.
Classical discrimination/reversal conditioning
In this procedure, two CSs and one US are typically used. The CSs may be the same modality
(such as lights of different intensity), or they may be different modalities (such as auditory CS
and visual CS). In this procedure, one of the CSs is designated CS+ and its presentation is always
followed by the US. The other CS is designated CS- and its presentation is never followed by the
US. After a number of trials, the organism learns to discriminate CS+ trials and CS- trials such
that CRs are only observed on CS+ trials.
During Reversal Training, the CS+ and CS- are reversed and subjects learn to suppress
responding to the previous CS+ and show CRs to the previous CS-.
Classical ISI discrimination conditioning
This is a discrimination procedure in which two different CSs are used to signal two different
interstimulus intervals. For example, a dim light may be presented 30 seconds before a US, while a
very bright light is presented 2 minutes before the US. Using this technique, organisms can learn
to perform CRs that are appropriately timed for the two distinct CSs.
Latent inhibition conditioning
In this procedure, a CS is presented several times before paired CS-US training commences. The
pre-exposure of the subject to the CS before paired training slows the rate of CR acquisition
relative to organisms that are not CS pre-exposed. Also see Latent inhibition for applications.
Conditioned inhibition conditioning
Three phases of conditioning are typically used:
Phase 1:

A CS (CS+) is paired with a US until asymptotic CR levels are reached.

Phase 2:

CS+/US trials are continued, but interspersed with trials on which the CS+ is presented in
compound with a second CS but not followed by the US (i.e., CS+/CS- trials).
Typically, organisms show CRs on CS+/US trials, but suppress responding on
CS+/CS- trials.

Phase 3:

In this retention test, the previous CS- is paired with the US. If conditioned
inhibition has occurred, the rate of acquisition to the previous CS- should be
impaired relative to organisms that did not experience Phase 2.

Blocking
Main article: Blocking effect
This form of classical conditioning involves two phases.
Phase 1:

A CS (CS1) is paired with a US.

Phase 2:

A compound CS (CS1+CS2) is paired with a US.

Test:

A separate test for each CS (CS1 and CS2) is performed. The blocking effect
is observed in a lack of conditioned response to CS2, suggesting that the first
phase of training blocked the acquisition of the second CS.

Applications
Little Albert
Main article: Little Albert experiment

John B. Watson, founder of behaviourism, demonstrated classical conditioning empirically
through experimentation using the Little Albert experiment, in which a child ("Albert") was
presented with a white rat (CS). After a control period in which the child reacted normally to the
presence of the rat, the experimenters paired the presence of the rat with a loud, jarring noise
caused by clanging two pipes together behind the child's head (US). As the trials progressed, the
child began showing signs of distress at the sight of the rat, even when unaccompanied by the
frightening noise. Furthermore, the child demonstrated generalization of stimulus associations,
and showed distress when presented with any white, furry object, even such things as a rabbit, a
dog, a fur coat, and a Santa Claus mask with hair.
Behavioral therapies
Main article: Behaviour therapy

In human psychology, implications for therapies and treatments using classical conditioning
differ from operant conditioning. Therapies associated with classical conditioning are aversion
therapy, flooding and systematic desensitization.
Classical conditioning is short-term, usually requiring less time with therapists and less effort
from patients, unlike humanistic therapies. The therapies mentioned are designed to
cause either aversive feelings toward something, or to reduce unwanted fear and aversion.
Theories of classical conditioning
There are two competing theories of how classical conditioning works. The first, stimulus-
response theory, suggests that an association to the unconditioned stimulus is made with the
conditioned stimulus within the brain, but without involving conscious thought. The second
theory stimulus-stimulus theory involves cognitive activity, in which the conditioned stimulus is
associated to the concept of the unconditioned stimulus, a subtle but important distinction.
Stimulus-response theory, referred to as S-R theory, is a theoretical model of behavioral
psychology that suggests humans and other animals can learn to associate a new stimulus — the
conditioned stimulus (CS) — with a pre-existing stimulus — the unconditioned stimulus (US),
and can think, feel or respond to the CS as if it were actually the US.
The opposing theory, put forward by cognitive behaviorists, is stimulus-stimulus theory (S-S
theory), a model of classical conditioning which holds that a cognitive component is required to
understand conditioning and that stimulus-response theory is an inadequate account. Whereas S-R
theory says the animal learns to associate the conditioned stimulus (CS), such as a bell, with the
impending arrival of food (the unconditioned stimulus), producing an observable behavior such
as salivation, S-S theory says the animal salivates to the bell because the bell is associated with
the concept of food, a very fine but important distinction.
To test this theory, the psychologist Robert Rescorla undertook the following experiment.[2] Rats
learned to associate a light (the conditioned stimulus) with a loud noise (the unconditioned
stimulus); their response was to freeze and cease movement. What would happen, then, if the
rats were habituated to the US? S-R theory would predict that the rats would continue to
respond to the CS, but if S-S theory is correct, they would be habituated to the concept of a loud
sound (danger) and so would not freeze to the CS. The experimental results supported S-S
theory: the rats no longer froze when exposed to the signal light.[3] The theory remains
influential and is applied in everyday life.[1]

Plant layout study


From Wikipedia, the free encyclopedia
Jump to:navigation, search

A plant layout study is an engineering study used to analyze different physical configurations for
an industrial plant.[1]
[edit] General
Modern industrial manufacturing plants involve a complex mix of functions and operations.
Various techniques exist, but general areas of concern include the following:[2]
• Space (adequate area to house each function)
• Affinity (functions located in close proximity to other related functions)
• Material handling
• Communications (telephone, data, telemetry, and other signal items)
• Utilities (electrical, gas, steam, water, sewer, and other utility services)
• Buildings (structural and architectural forms; sitework)

[edit] Product Considerations


The intended products to be manufactured have an impact on the choice of layout.
• A fixed-position layout is chosen where large or unique items are
worked on individually, such as shipbuilding or the construction of a
bridge.
• A functional layout is a multi-purpose layout designed to accommodate
a variety of products; a typical example is a hospital.
• A product layout focuses on maximising plant efficiency through
techniques such as mass production.
• A cellular layout seeks to combine the flexibility of a functional layout
with the efficiency of a product layout by grouping machines into
autonomous work groups. It is particularly used alongside Just In Time
systems.

Acceptable quality limit


From Wikipedia, the free encyclopedia
Jump to:navigation, search

The acceptable quality limit (AQL) is the worst tolerable process average, expressed as a
percentage or ratio, that is still considered acceptable: that is, it is at an acceptable quality level.[1]
Closely related terms are the rejectable quality limit and rejectable quality level (RQL).[1][2] In a
quality control procedure, a process is said to be at an acceptable quality level if the appropriate
statistic used to construct a control chart does not fall outside the bounds of the acceptable quality
limits. Otherwise, the process is said to be at a rejectable quality level.
The usage of the abbreviation AQL for the term Acceptable Quality Level has recently been
changed in the standards issued by at least one national standards organization (ANSI/ASQ) to
relate to the term Acceptance Quality Level.[3][4] It is unclear whether this interpretation will be
brought into general usage, but the underlying meaning remains the same.
An acceptable quality level is an inspection standard describing the maximum number of defects
that could be considered acceptable during the random sampling of an inspection. The defects
found during inspection are sometimes classified into three levels: critical, major and minor.
Critical defects are those that render the product unsafe or hazardous for the end user or that
contravene mandatory regulations. Major defects can result in the product's failure, reducing its
marketability, usability or saleability. Lastly, minor defects do not affect the product's
marketability or usability, but represent workmanship defects that make the product fall short of
defined quality standards. Different companies maintain different interpretations of each defect
type. In order to avoid argument, buyers and sellers agree on an AQL standard, chosen according
to the level of risk each party assumes, which they use as a reference during pre-shipment
inspection.
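As an illustration of how an AQL-style standard plays out in random sampling, the sketch below computes the probability that a lot is accepted under a hypothetical single-sampling plan (inspect n units, accept if at most c defectives are found), using a binomial model. The plan parameters (n = 80, c = 2) and defect rates are assumptions for illustration, not values from any published sampling table.

```python
from math import comb

def acceptance_probability(defect_rate, n, c):
    """Probability a lot passes a single-sampling plan:
    inspect n random units, accept if defectives <= c (binomial model)."""
    return sum(comb(n, k) * defect_rate**k * (1 - defect_rate)**(n - k)
               for k in range(c + 1))

# Hypothetical plan: sample 80 units, accept on 2 or fewer defectives.
# A lot produced at a 1% defect rate is accepted far more often than
# one produced at a 10% defect rate.
good = acceptance_probability(0.01, 80, 2)   # high acceptance probability
bad = acceptance_probability(0.10, 80, 2)    # very low acceptance probability
```

This is the sense in which buyers and sellers "agree on an AQL": the chosen plan implicitly fixes how often lots at various defect rates will pass inspection, and hence how much risk each party assumes.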
Profiteering (business)
From Wikipedia, the free encyclopedia
Jump to:navigation, search

Profiteering is a pejorative term for the act of making a profit by methods considered unethical.
Business owners may be accused of profiteering when they raise prices during an emergency
(especially a war). The term is also applied to businesses that play on political corruption to obtain
government contracts.
Some types of profiteering are illegal, such as price fixing syndicates and other anti-competitive
behaviour, for example on fuel subsidies (see British Airways price-fixing allegations), or restricted by
industry codes of conduct such as aggressive marketing of products in the third world such as baby
milk (see Nestlé boycott).


[edit] Types of profiteering


• price fixing
• price gouging

Michael Porter
From Wikipedia, the free encyclopedia
Jump to:navigation, search

For the English footballer, see Mick Porter.

Michael Porter
Born 1947

Occupation Author, Management Consultant

Michael Eugene Porter (born 1947) is the Bishop William Lawrence University Professor at
Harvard Business School. He is a leading authority on company strategy and the competitiveness
of nations and regions. Michael Porter’s work is recognized in many governments, corporations
and academic circles globally. He chairs Harvard Business School's program dedicated to newly
appointed CEOs of very large corporations.


[edit] Early life


Michael Eugene Porter received a B.S.E. with high honors in aerospace and mechanical
engineering from Princeton University in 1969, where he was elected to Phi Beta Kappa and Tau
Beta Pi. He received an M.B.A. with high distinction in 1971 from the Harvard Business School,
where he was a George F. Baker Scholar, and a Ph.D. in Business Economics from Harvard
University in 1973.
Porter was an outstanding intercollegiate golfer while at Princeton.
[edit] Career
Michael Porter is the author of 18 books and numerous articles including Competitive Strategy,
Competitive Advantage, Competitive Advantage of Nations, and On Competition. A six-time
winner of the McKinsey Award for the best Harvard Business Review article of the year, Professor
Porter is the most cited author in business and economics.[citation needed]
Michael Porter’s core field is competition and company strategy. He is generally recognized as
the father of the modern strategy field, and his ideas are taught in virtually every business school
in the world. His work has also re-defined thinking about competitiveness, economic
development, economically distressed urban communities, environmental policy, and the role of
corporations in society.
Recently, Porter has devoted considerable attention to understanding and addressing the pressing
problems in health care delivery in the United States and other countries. His book, Redefining
Health Care (written with Elizabeth Teisberg), develops a new strategic framework for
transforming the value delivered by the health care system, with implications for providers,
health plans, employers, and government, among other actors. The book received the James A.
Hamilton award of the American College of Healthcare Executives in 2007 for book of the year.
His New England Journal of Medicine research article, “A Strategy for Health Care Reform—
Toward a Value-Based System” (June 2009), lays out a health reform strategy for the U.S. His
work on health care is being extended to address the problems of health care delivery in
developing countries, in collaboration with Dr. Jim Yong Kim and the Harvard Medical School and
Harvard School of Public Health.
In addition to his research, writing, and teaching, Porter serves as an advisor to business,
government, and the social sector. He has served as strategy advisor to numerous leading U.S.
and international companies, including Caterpillar, Procter & Gamble, Scotts Miracle-Gro, Royal
Dutch Shell, and Taiwan Semiconductor. Professor Porter serves on two public boards of
directors, Thermo Fisher Scientific and Parametric Technology Corporation. Professor Porter
also plays an active role in U.S. economic policy with the Executive Branch and Congress, and
has led national economic strategy programs in numerous countries. He is currently working
with the Presidents of Rwanda and South Korea.
Michael Porter has founded three major non-profit organizations: Initiative for a Competitive Inner
City - ICIC in 1994, which addresses economic development in distressed urban communities; the
Center for Effective Philanthropy, which creates rigorous tools for measuring foundation
effectiveness; and FSG-Social Impact Advisors, a leading non-profit strategy firm serving
NGOs, corporations, and foundations in the area of creating social value. He also currently
serves on the Board of Trustees of Princeton University.
In 2000, Michael Porter was appointed a Harvard University Professor, the highest professional
recognition that can be awarded to a Harvard faculty member.
Michael Porter is one of the co-founders of The Monitor Group consultancy.
His main academic objectives focus on how a firm or a region can build a competitive advantage
and develop competitive strategy. He is also a Fellow Member of the Strategic Management
Society. One of his most significant contributions is the five forces. Porter's strategic system
consists primarily of:
• Competitive advantage
• Porter's Five Forces Analysis
• strategic groups (also called strategic sets)
• the value chain
• the generic strategies of cost leadership, product differentiation, and focus
• the market positioning strategies of variety based, needs based, and access
based market positions
• global strategy
• Porter's clusters of competence for regional economic development
• Diamond model

[edit] Works
Competitive Strategy
• Porter, M.E. (1979) "How competitive forces shape strategy", Harvard business
Review, March/April 1979.
• Porter, M.E. (1980) Competitive Strategy, Free Press, New York, 1980.
• Porter, M.E. (1985) Competitive Advantage, Free Press, New York, 1985.
• Porter, M.E. (ed.) (1986) Competition in Global Industries, Harvard Business
School Press, Boston, 1986.
• Porter, M.E. (1987) "From Competitive Advantage to Corporate Strategy", Harvard
Business Review, May/June 1987, pp 43-59.
• Porter, M.E. (1996) "What Is Strategy?", Harvard Business Review, Nov/Dec 1996.
• Porter, M.E. (1998) On Competition, Boston: Harvard Business School, 1998.
• Porter, M.E. (1990, 1998) "The Competitive Advantage of Nations", Free Press, New
York, 1990.
• Porter, M.E. (1991) "Towards a Dynamic Theory of Strategy", Strategic Management
Journal, 12 (Winter Special Issue), pp. 95-117.
• McGahan, A.M. & Porter, M.E. (1997) "How Much Does Industry Matter,
Really?" Strategic Management Journal, 18 (Summer Special Issue), pp. 15-30.
• Porter, M.E. (2001) "Strategy and the Internet", Harvard Business Review, March 2001,
pp. 62-78.
• Porter, M.E. & Kramer, M.R. (2006) "Strategy and Society: The Link Between
Competitive Advantage and Corporate Social Responsibility", Harvard Business Review,
December 2006, pp. 78-92.
Domestic Health Care
• Porter, M.E. & Teisberg, E.O. (2006) "Redefining Health Care: Creating Value-Based
Competition On Results", Harvard Business School Press, 2006.
Global Health Care
• Jain SH, Weintraub R, Rhatigan J, Porter ME, Kim JY. Delivering Global Health.
Student British Medical Journal 2008; 16:27.[1]
• Kim JY, Rhatigan J, Jain SH, Weintraub R, Porter ME. From a declaration of
values to the creation of value in global health: a report from Harvard
University's Global Health Delivery Project. Glob Public Health. 2010
Mar;5(2):181-8.
• Rhatigan, Joseph, Sachin H Jain, Joia S. Mukherjee, and Michael E. Porter.
"Applying the Care Delivery Value Chain: HIV/AIDS Care in Resource Poor
Settings." Harvard Business School Working Paper, No. 09-093, February
2009.

[edit] Criticisms
Porter has been criticized by some academics for inconsistent logical argument in his assertions.
[1]
Critics have also labeled Porter's conclusions as lacking in empirical support and as justified
with selective case studies.[2][3][4][5]

Marketing
Introduction – "Marketing is the process of identifying and translating consumers' needs and
wants into products and services and then satisfying the same".
Marketing is indeed an ancient art; it has been practiced in one form or another since the days
of Adam & Eve. Its emergence as a management discipline, however, is of relatively recent
origin, and within this relatively short period it has gained so much importance & stature that
today most management thinkers & practitioners throughout the world view it as the most
important of all management functions in any business.
The famous saying "supply creates its own demand" has lost its relevance now. Earlier the
market used to be seller-centred, but now it is consumer- or buyer-centred. We now witness
dynamism among many similar products in the market: each product is branded, and there is
cut-throat competition to grab market share through advertising and various new & innovative
sales promotion activities. At present all producers concentrate their efforts on evolving various
marketing strategies so as to capture the maximum market share. As a result, marketing has
become a necessity as well as a challenge.
Meaning & Definitions of Marketing
The word "market" evolved from the Latin word "mercatus". In Latin, "mercatus" means a
special place where business-related activities take place.
For a layman, Marketing means, “Sale and Purchase of goods and services”, but this is very
narrow concept of marketing which has lost its relevance in today’s time.
So we can see that word marketing has got different meaning in traditional and modern concept.
Marketing has evolved into a vast concept from “Sale-purchase of goods and services”. Now its
scope is so wide that it encompasses all the activities from production to consumption.
Marketing functions start much before production process and carries over after sales also.
The definitions of marketing fall into two types:
1. Traditional / Narrow or old Concept of Marketing
2. Modern / Broad or New Concept of Marketing
a) Traditional / Narrow or old Concept of Marketing –
1. According to Edward and David-
“Marketing is the economic process by means of which goods and services are exchanged and
their values determined in terms of money.”
2. According to J.F Pyle—
"Marketing comprises both buying and selling activities".
3. According to American Marketing Association (AMA)—
“Marketing is the performance of business activities that direct the flow of goods and service
from producers to consumers or users.”
4. According to Connerse and Michell—
“Marketing includes activities involved in the flow of goods and services from production to
consumption.”
5. According to Duddy and Reuzon –
"Marketing is the economic process by means of which goods and services are exchanged and
their values determined in terms of money."
6. According to Clark and Clark—
“Marketing consists of these efforts which effect transfers in the ownership of goods and services
and which provide for their physical distribution.”
b) Modern / Broad or New Concept of Marketing—
1. According to William J. Stanton—
“Marketing is the total system of interacting business activities designed to plan, price, promote
and distribute want satisfying products to present and potential customers.”
2. According to Philip Kotler—
“Marketing is human activity directed at satisfying needs and wants through exchange
processes.”
3. According to Paul Mazur—
“Marketing is the delivery of standard of living.”
4. According to Buskirk—
“Marketing is the integrated system of action that creates values in goods by creating form,
place, time and ownership utility.”
5. According to Malcom McNair—
“Marketing is the creation and delivery of a standard of living.”
6. According to the AMA's improved definition—
"Marketing is the process and plan of action of a business connected with distribution and
exchange of goods through marketing research, advertising, distribution, pricing, sales displays
and other marketing systems to effectuate consumer satisfaction."
7. According to E. Jerome McCarthy—
"Marketing is the response of the business to the need to adjust production capabilities to the
requirements of consumers' demands."
8. According to Cundiff, Still and Govoni—
"Marketing is the managerial process by which products are matched with markets and through
which transfers of ownership are effected."
So we can say that marketing is the process of discovering a bundle of utilities for the consumer,
exchanging it for a price in monetary terms through proper channels, and thereafter developing
a lifelong relationship. After-sale service and customer satisfaction are the focal points of the
modern view of marketing.
Marketing is a focal system of business, an ongoing process of—
– Discovering and translating consumer needs and desires into products and services (through
planning and producing the planned products).
– Creating demand for these products and services (through promotion and pricing).
– Serving the consumer demand (through planned physical distribution) with the help of
marketing channels.
– Expanding the market even in the face of keen competition.
The modern marketer is called upon to set the marketing objective, develop the marketing plan
or programme (marketing mix) and control the marketing programme to assure the
accomplishment of the set marketing objectives.

Psychographic
From Wikipedia, the free encyclopedia
Jump to:navigation, search

In the field of marketing, demographics, opinion research, and social research in general,
psychographic variables are any attributes relating to personality, values, attitudes, interests, or
lifestyles. They are also called IAO variables (for Interests, Activities, and Opinions). They can
be contrasted with demographic variables (such as age and gender), behavioral variables (such as
usage rate or loyalty), and firmographic variables (such as industry, seniority and functional area).
Psychographics are often confused with demographics. This confusion can create fundamentally
flawed definitions. For example, historical generations are defined by psychographic variables
like attitudes, personality formation, and cultural touchstones. The traditional definition of the
"Baby Boom Generation" has been the subject of much criticism because it is based on
demographic variables where it should be based on psychographic variables. While all other
generations are defined by psychographic variables, the Boomer definition is based on a
demographic variable: the fertility rates of its members' parents.
When a relatively complete profile of a person or group's psychographic make-up is constructed,
this is called a psychographic profile. Psychographic profiles are used in market segmentation as
well as in advertising.
Some categories of psychographic factors used in market segmentation include:
• Activity, Interest, Opinion (AIO)
• Attitudes
• Values

3 Strategies of Market Leaders
Customers for Life
By: Brian Tracy
The purpose of a business is to create and keep a customer.
The two most important words to keep in mind in developing a successful customer base are
Positioning and Differentiation.
Differentiation refers to your ability to separate yourself and your product or service from that
of your competitors. It is the key to building and maintaining a competitive advantage: the
advantage that you and your company have over your competitors in the same marketplace – the
unique and special benefits that no one else can give your customers.

Objectives of inventory control


(1) Maximize customer service
(2) Minimize costs

1. Cost objective: minimize the sum of relevant costs.

2. Service objective: desired customer service levels significantly impact inventory
levels.
Service level may be defined in a number of ways, such as the probability of not
stocking out during a replenishment cycle, or the fraction of demand filled directly
from stock (the fill rate).

Inventory Control Models


Remember: two important issues in inventory control:
order quantity and order timing.
Two general classes of models: continuous review (fixed order quantity) and
periodic review (fixed order period).

1. Continuous Review or Fixed Order Quantity Systems (Q-systems)


a. Multi period models

(1) Fixed order quantity, variable time between orders (EOQ, EPQ, and Quantity
Discount)
(2) On-hand inventory balance serves as order trigger (R)
(3) Perpetual inventory count
(4) 2-bin system

b. Single period Model

2. Periodic Review or Fixed Order Period Systems (P-systems)


a. Variable order quantity, fixed time between orders
b. Time serves as order trigger
c. Periodic count
d. Process: When a predetermined amount of time has elapsed, a physical inventory
count is taken. Based upon the number of units in stock at that time, OH, and a
target inventory of TI units, an order is placed for Q = (TI-OH) units.
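The periodic-review order rule described above (order Q = TI − OH at each review) can be sketched as follows; the quantities used in the example are illustrative values, not data from the text:

```python
def periodic_review_order(target_inventory, on_hand):
    """P-system order rule: at each periodic review, order enough to
    bring the stock position up to the target level TI.
    Order quantity Q = TI - OH, and never negative."""
    return max(target_inventory - on_hand, 0)

# Illustrative values: with a target of 500 units and 180 on hand at
# review time, the order placed is 320 units.
q = periodic_review_order(500, 180)
```

Note that if the physical count ever exceeds the target, no order is placed, which is why the quantity is clamped at zero.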

Fixed Order Quantity Systems (Q-systems):


How Much to Order (Q) and When to Order (R)
1. Multi Period Inventory Models: Order decisions for infinite length inventory
planning address how much to order
a. How Much To Order: Basic Model - the Economic Order Quantity (EOQ or Q-
System)
Inventory Control: Improving the Bottom Line
Inventory control requires the tracking of all parts and materials purchased, products processed,
and products stored and ready for shipment. Having a sophisticated tracking system alone does
not improve your bottom line; what matters is how you use the information the system provides.
If your job responsibilities involve inventory control, you know how
critical the function is to business success and the complexities
involved in planning, executing and controlling your supply chain
network.
From a financial perspective, inventory control is no small matter.
Oftentimes, inventory is the largest asset item on a manufacturer’s or
distributor’s balance sheet. As a result, there is a lot of management emphasis on keeping
inventories down so they do not consume too much cash. The objectives of inventory reduction
and minimization are more easily accomplished with modern inventory management processes
that are working effectively.
Inventory Control Problems
In actual practice the vast majority of manufacturing and distribution companies suffer from
lower customer service, higher costs and more inventory than necessary. Inventory control
problems are usually the result of poor processes and practices and antiquated support systems.
The inventory management process is much more complex than the uninitiated realize. In fact,
in many companies the inventory control department is perceived as little more than a clerical
function, and where this is the case the function is probably not very effective.
The likely result of this approach to inventory control is material shortages, excessive
inventories, high costs and poor customer service. For example, if a customer orders a product
that requires a manufacturer to acquire 20 part numbers for assembly, and only 19 of the 20 are
available, those nineteen part numbers sit as excess inventory. Worse, the product can't be
shipped to create revenue and the customer is not
serviced. Think for a moment about the complexities of making products that require hundreds
and maybe thousands of part numbers to be available in the right quantity, at the right place and
at the right time to make products to satisfy customer orders. It is a complex network to control
and a set of inventory management tasks that must be performed with precision.
What Should Be Done?
Carrying too much inventory while delivering too little customer service is very common, but unnecessary.
There are proven methods that can help you accurately project customer demand and to calculate
the inventory you will need to meet your defined level of customer service. Using the right
techniques for sales forecasting and inventory management will allow you to monitor changes
and respond to alerts when action needs to be taken. The right approach to inventory control can
produce dramatic benefits in customer service with lower inventory, no matter how complex
your network is.
Modern inventory management processes utilize new and more refined techniques that provide
for dynamic optimization of inventories to maximize customer service with decreased inventory
and lower costs. These improved approaches to inventory management are of major consequence
to overall competitiveness where the highest level of customer service and delivered value can
favorably impact market share and profits.
Understanding the Process
Overall inventory control crosses a number of functions. The inventory control process can be
divided into the following general categories:
1. Demand management, which covers the processes for sales and operations
planning, sales forecasting and finished goods inventory deployment
planning.
2. Inventory planning and ordering, which is often accomplished with material
requirements planning (MRP) or, in a lean manufacturing environment,
with kanban ordering to effect deliveries of material.
3. Inventory optimization systems, advocated by some as the supply chain
management mechanism for mathematically calculating where inventory
should be deployed to satisfy predetermined supply chain management
objectives.
4. Physical inventory control, a phrase that describes the receiving,
movement, stocking and overall physical control of inventories.
Effective inventory control is a vital function in ensuring the success of manufacturing and
distribution companies. Its effectiveness is directly measurable by how successful a company is
in providing high levels of customer service, low inventory investment, maximum throughput
and low costs. It is certainly an area where management should apply a philosophy of
aggressive improvement.

Corporate governance
From Wikipedia, the free encyclopedia
Jump to:navigation, search

Not to be confused with a corporate state, a corporative government rather than the government of a corporation.

Corporate governance is the set of processes, customs, policies, laws, and institutions affecting the
way a corporation (or company) is directed, administered or controlled. Corporate governance also
includes the relationships among the many stakeholders involved and the goals for which the
corporation is governed. The principal stakeholders are the shareholders, management, and the
board of directors. Other stakeholders include employees, customers, creditors, suppliers,
regulators, and the community at large.
Corporate governance is a multi-faceted subject.[1] An important theme of corporate governance
is to ensure the accountability of certain individuals in an organization through mechanisms that
try to reduce or eliminate the principal-agent problem. A related but separate thread of discussions
focuses on the impact of a corporate governance system in economic efficiency, with a strong
emphasis on shareholders' welfare. There are yet other aspects to the corporate governance
subject, such as the stakeholder view and the corporate governance models around the world (see
section 9 below).
There has been renewed interest in the corporate governance practices of modern corporations
since 2001, particularly due to the high-profile collapses of a number of large U.S. firms such as
Enron Corporation and MCI Inc. (formerly WorldCom). In 2002, the U.S. federal government passed
the Sarbanes-Oxley Act, intending to restore public confidence in corporate governance.

[edit] Definition
In A Board Culture of Corporate Governance, business author Gabrielle O'Donovan defines
corporate governance as 'an internal system encompassing policies, processes and people, which
serves the needs of shareholders and other stakeholders, by directing and controlling
management activities with good business savvy, objectivity, accountability and integrity. Sound
corporate governance is reliant on external marketplace commitment and legislation, plus a
healthy board culture which safeguards policies and processes.'
O'Donovan goes on to say that 'the perceived quality of a company's corporate governance can
influence its share price as well as the cost of raising capital. Quality is determined by the
financial markets, legislation and other external market forces plus how policies and processes
are implemented and how people are led. External forces are, to a large extent, outside the circle
of control of any board. The internal environment is quite a different matter, and offers
companies the opportunity to differentiate from competitors through their board culture. To date,
too much of corporate governance debate has centred on legislative policy, to deter fraudulent
activities and transparency policy which misleads executives to treat the symptoms and not the
cause.'[2]
It is a system of structuring, operating and controlling a company with a view to achieving long-
term strategic goals that satisfy shareholders, creditors, employees, customers and suppliers,
while complying with legal and regulatory requirements and meeting environmental and local
community needs.
The report of the SEBI committee (India) on corporate governance defines corporate governance
as "the acceptance by management of the inalienable rights of shareholders as the true owners of
the corporation and of their own role as trustees on behalf of the shareholders. It is about
commitment to values, about ethical business conduct and about making a distinction between
personal & corporate funds in the management of a company." The definition is drawn from the
Gandhian principle of trusteeship and the Directive Principles of the Indian Constitution.
Corporate Governance is viewed as business ethics and a moral duty. See also Corporate Social
Entrepreneurship regarding employees who are driven by their sense of integrity (moral
conscience) and duty to society. This notion stems from traditional philosophical ideas of virtue
(or self governance) [3]and represents a "bottom-up" approach to corporate governance (agency)
which supports the more obvious "top-down" (systems and processes, i.e. structural) perspective.
[edit] History - United States
In the 19th century, state corporation laws enhanced the rights of corporate boards to govern
without unanimous consent of shareholders in exchange for statutory benefits like appraisal
rights, to make corporate governance more efficient. Since that time, and because most large
publicly traded corporations in the US are incorporated under corporate administration friendly
Delaware law, and because the US's wealth has been increasingly securitized into various
corporate entities and institutions, the rights of individual owners and shareholders have become
increasingly derivative and dissipated. Shareholders' concerns over administration pay and stock
losses have periodically led to more frequent calls for corporate governance reforms.
In the 20th century in the immediate aftermath of the Wall Street Crash of 1929 legal scholars such
as Adolf Augustus Berle, Edwin Dodd, and Gardiner C. Means pondered on the changing role of
the modern corporation in society. Berle and Means' monograph "The Modern Corporation and
Private Property" (1932, Macmillan) continues to have a profound influence on the conception of
corporate governance in scholarly debates today.
From the Chicago school of economics, Ronald Coase's "The Nature of the Firm" (1937) introduced
the notion of transaction costs into the understanding of why firms are founded and how they
continue to behave. Fifty years later, Eugene Fama and Michael Jensen's "The Separation of
Ownership and Control" (1983, Journal of Law and Economics) firmly established agency theory
as a way of understanding corporate governance: the firm is seen as a series of contracts. Agency
theory's dominance was highlighted in a 1989 article by Kathleen Eisenhardt ("Agency theory: an
assessment and review", Academy of Management Review).
US expansion after World War II through the emergence of multinational corporations saw the
establishment of the managerial class. Accordingly, the following Harvard Business School
management professors published influential monographs studying their prominence: Myles Mace
(entrepreneurship), Alfred D. Chandler, Jr. (business history), Jay Lorsch (organizational behavior)
and Elizabeth MacIver (organizational behavior). According to Lorsch and MacIver "many large
corporations have dominant control over business affairs without sufficient accountability or
monitoring by their board of directors."
Since the late 1970s, corporate governance has been the subject of significant debate in the U.S.
and around the globe. Bold, broad efforts to reform corporate governance have been driven, in
part, by the needs and desires of shareowners to exercise their rights of corporate ownership and
to increase the value of their shares and, therefore, wealth. Over the past three decades, corporate
directors’ duties have expanded greatly beyond their traditional legal responsibility of duty of
loyalty to the corporation and its shareowners.[4]
In the first half of the 1990s, the issue of corporate governance in the U.S. received considerable
press attention due to the wave of CEO dismissals (e.g.: IBM, Kodak, Honeywell) by their boards.
The California Public Employees' Retirement System (CalPERS) led a wave of institutional
shareholder activism (something only very rarely seen before), as a way of ensuring that
corporate value would not be destroyed by the now traditionally cozy relationships between the
CEO and the board of directors (e.g., by the unrestrained issuance of stock options, not
infrequently backdated).
In 1997, the East Asian Financial Crisis saw the economies of Thailand, Indonesia, South Korea,
Malaysia and The Philippines severely affected by the exit of foreign capital after property assets
collapsed. The lack of corporate governance mechanisms in these countries highlighted the
weaknesses of the institutions in their economies.
In the early 2000s, the massive bankruptcies (and criminal malfeasance) of Enron and Worldcom,
as well as lesser corporate debacles, such as Adelphia Communications, AOL, Arthur Andersen,
Global Crossing and Tyco, led to increased shareholder and governmental interest in corporate
governance. This is reflected in the passage of the Sarbanes-Oxley Act of 2002.[3]
[edit] Impact of Corporate Governance
The positive effect of corporate governance on different stakeholders ultimately is a strengthened
economy, and hence good corporate governance is a tool for socio-economic development.[5]
[edit] Role of Institutional Investors
Many years ago, worldwide, buyers and sellers of corporation stocks were individual investors,
such as wealthy businessmen or families, who often had a vested, personal and emotional interest
in the corporations whose shares they owned. Over time, markets have become largely
institutionalized: buyers and sellers are largely institutions (e.g., pension funds, mutual funds,
hedge funds, exchange-traded funds, other investor groups; insurance companies, banks, brokers, and
other financial institutions).
The rise of the institutional investor has brought with it some increase of professional diligence
which has tended to improve regulation of the stock market (but not necessarily in the interest of
the small investor or even of the naïve institutions, of which there are many). Note that this
process occurred simultaneously with the direct growth of individuals investing indirectly in the
market (for example individuals have twice as much money in mutual funds as they do in bank
accounts). However this growth occurred primarily by way of individuals turning over their
funds to 'professionals' to manage, such as in mutual funds. In this way, the majority of
investment now is described as "institutional investment" even though the vast majority of the
funds are for the benefit of individual investors.
Program trading, the hallmark of institutional trading, averaged over 80% of NYSE trades in some
months of 2007. [4] (Moreover, these statistics do not reveal the full extent of the practice,
because of so-called 'iceberg' orders. See Quantity and display instructions under last reference.)
Unfortunately, there has been a concurrent lapse in the oversight of large corporations, which are
now almost all owned by large institutions. The Board of Directors of large corporations used to be
chosen by the principal shareholders, who usually had an emotional as well as monetary
investment in the company (think Ford), and the Board diligently kept an eye on the company
and its principal executives (they usually hired and fired the President, or Chief Executive Officer—
CEO).
A recent study by Credit Suisse found that companies in which "founding families retain a stake
of more than 10% of the company's capital enjoyed a superior performance over their respective
sectorial peers." Since 1996, this superior performance amounts to 8% per year.[5] Forget the
celebrity CEO. "Look beyond Six Sigma and the latest technology fad. One of the biggest
strategic advantages a company can have, [BusinessWeek has found], is blood lines." [6] In that
last study, "BW identified five key ingredients that contribute to superior performance. Not all
are qualities unique to enterprises with retained family interests. But they do go far to explain
why it helps to have someone at the helm— or active behind the scenes— who has more than a
mere paycheck and the prospect of a cozy retirement at stake." See also, "Revolt in the
Boardroom," by Alan Murray.
Nowadays, if the owning institutions don't like what the President/CEO is doing and they feel
that firing them will likely be costly (think "golden handshake") and/or time consuming, they will
simply sell out their interest. The Board is now mostly chosen by the President/CEO, and may be
made up primarily of their friends and associates, such as officers of the corporation or business
colleagues. Since the (institutional) shareholders rarely object, the President/CEO generally takes
the Chair of the Board position for himself or herself (which makes it much more difficult for the
institutional owners to "fire" him or her). Occasionally, but rarely, institutional investors support
shareholder resolutions on such matters as executive pay and anti-takeover, aka, "poison pill"
measures.
Finally, the largest pools of invested money (such as the mutual fund 'Vanguard 500', or the
largest investment management firm for corporations, State Street Corp.) are designed simply to
invest in a very large number of different companies with sufficient liquidity, based on the idea
that this strategy will largely eliminate individual company financial or other risk and, therefore,
these investors have even less interest in a particular company's governance.
Since the marked rise in the use of Internet transactions from the 1990s, both individual and
professional stock investors around the world have emerged as a potential new kind of major
(short term) force in the direct or indirect ownership of corporations and in the markets: the
casual participant. Even as the purchase of individual shares in any one corporation by individual
investors diminishes, the sale of derivatives (e.g., exchange-traded funds (ETFs), Stock market index
options [7], etc.) has soared. So, the interests of most investors are now increasingly rarely tied to
the fortunes of individual corporations.
But, the ownership of stocks in markets around the world varies; for example, the majority of the
shares in the Japanese market are held by financial companies and industrial corporations (there
is a large and deliberate amount of cross-holding among Japanese keiretsu corporations and within
S. Korean chaebol 'groups') [8], whereas stocks in the USA, the UK and Europe are much more
broadly owned, often still by large individual investors.
[edit] Parties to corporate governance
Parties involved in corporate governance include the regulatory body, the chief executive
officer, the board of directors, management, shareholders and auditors. Other stakeholders who
take part include suppliers, employees, creditors, customers and the community at large.
In corporations, the shareholder delegates decision rights to the manager to act in the principal's
best interests. This separation of ownership from control implies a loss of effective control by
shareholders over managerial decisions. Partly as a result of this separation between the two
parties, a system of corporate governance controls is implemented to assist in aligning the
incentives of managers with those of shareholders. With the significant increase in equity
holdings of investors, there has been an opportunity for a reversal of the separation of ownership
and control problems because ownership is not so diffuse.
A board of directors often plays a key role in corporate governance. Its responsibility is to
endorse the organisation's strategy, develop directional policy, appoint, supervise and remunerate
senior executives, and ensure accountability of the organisation to its owners and authorities.
The Company Secretary, known as a Corporate Secretary in the US and often referred to as a
Chartered Secretary if qualified by the Institute of Chartered Secretaries and Administrators (ICSA),
is a high ranking professional who is trained to uphold the highest standards of corporate
governance, effective operations, compliance and administration.
All parties to corporate governance have an interest, whether direct or indirect, in the effective
performance of the organization. Directors, workers and management receive salaries, benefits
and reputation, while shareholders receive capital return. Customers receive goods and services;
suppliers receive compensation for their goods or services. In return these individuals provide
value in the form of natural, human, social and other forms of capital.
A key factor is an individual's decision to participate in an organisation, e.g. through providing
financial capital, and the trust that they will receive a fair share of the organisational returns. If some
parties are receiving more than their fair return, then participants may choose not to continue
participating, leading to organizational collapse.
[edit] Principles
Key elements of good corporate governance principles include honesty, trust and integrity,
openness, performance orientation, responsibility and accountability, mutual respect, and
commitment to the organization.
Of importance is how directors and management develop a model of governance that aligns the
values of the corporate participants and then evaluate this model periodically for its
effectiveness. In particular, senior executives should conduct themselves honestly and ethically,
especially concerning actual or apparent conflicts of interest, and disclosure in financial reports.
Commonly accepted principles of corporate governance include:
• Rights and equitable treatment of shareholders: Organizations should
respect the rights of shareholders and help shareholders to exercise those
rights. They can help shareholders exercise their rights by effectively
communicating information that is understandable and accessible and
encouraging shareholders to participate in general meetings.
• Interests of other stakeholders: Organizations should recognize that they
have legal and other obligations to all legitimate stakeholders.
• Role and responsibilities of the board: The board needs a range of skills
and understanding to be able to deal with various business issues and have
the ability to review and challenge management performance. It needs to be
of sufficient size and have an appropriate level of commitment to fulfill its
responsibilities and duties. There are issues about the appropriate mix of
executive and non-executive directors.
• Integrity and ethical behaviour: Ethical and responsible decision making
is not only important for public relations, but it is also a necessary element in
risk management and avoiding lawsuits. Organizations should develop a code
of conduct for their directors and executives that promotes ethical and
responsible decision making. It is important to understand, though, that
reliance by a company on the integrity and ethics of individuals is bound to
eventual failure. Because of this, many organizations establish Compliance
and Ethics Programs to minimize the risk that the firm steps outside of ethical
and legal boundaries.
• Disclosure and transparency: Organizations should clarify and make
publicly known the roles and responsibilities of board and management to
provide shareholders with a level of accountability. They should also
implement procedures to independently verify and safeguard the integrity of
the company's financial reporting. Disclosure of material matters concerning
the organization should be timely and balanced to ensure that all investors
have access to clear, factual information.
Issues involving corporate governance principles include:
• internal controls and internal auditors
• the independence of the entity's external auditors and the quality of their
audits
• oversight and management of risk
• oversight of the preparation of the entity's financial statements
• review of the compensation arrangements for the chief executive officer and
other senior executives
• the resources made available to directors in carrying out their duties
• the way in which individuals are nominated for positions on the board
• dividend policy
Nevertheless, "corporate governance", despite some feeble attempts from various quarters,
remains an ambiguous and often misunderstood phrase. For quite some time it was confined only
to corporate management; that view is too narrow. It is something much broader, for it must
include a fair, efficient and transparent administration and strive to meet certain well-defined,
written objectives. Corporate governance must go well beyond law. The quantity, quality and
frequency of financial and managerial disclosure, the degree and extent to which the board of
directors (BOD) exercises its trustee responsibilities (largely an ethical commitment), and the
commitment to run a transparent organization: these should be constantly evolving due to the
interplay of many factors and the roles played by the more progressive and responsible elements
within the corporate sector. John G. Smale, a former member of the General Motors board of
directors, wrote: "The Board is responsible for the successful perpetuation of the corporation.
That responsibility cannot be relegated to management."[6] However it should be noted that a
corporation should cease to exist if that is in the best interests of its stakeholders. Perpetuation
for its own sake may be counterproductive.
[edit] Mechanisms and controls
Corporate governance mechanisms and controls are designed to reduce the inefficiencies that
arise from moral hazard and adverse selection. For example, to monitor managers' behaviour, an
independent third party (the external auditor) attests to the accuracy of information provided by
management to investors. An ideal control system should regulate both motivation and ability.
[edit] Internal corporate governance controls
Internal corporate governance controls monitor activities and then take corrective action to
accomplish organisational goals. Examples include:
• Monitoring by the board of directors: The board of directors, with its
legal authority to hire, fire and compensate top management, safeguards
invested capital. Regular board meetings allow potential problems to be
identified, discussed and avoided. Whilst non-executive directors are thought
to be more independent, they may not always result in more effective
corporate governance and may not increase performance.[7] Different board
structures are optimal for different firms. Moreover, the ability of the board to
monitor the firm's executives is a function of its access to information.
Executive directors possess superior knowledge of the decision-making
process and therefore evaluate top management on the basis of the quality
of its decisions that lead to financial performance outcomes, ex ante. It could
be argued, therefore, that executive directors look beyond the financial
criteria.
• Internal control procedures and internal auditors: Internal control
procedures are policies implemented by an entity's board of directors, audit
committee, management, and other personnel to provide reasonable
assurance of the entity achieving its objectives related to reliable financial
reporting, operating efficiency, and compliance with laws and regulations.
Internal auditors are personnel within an organization who test the design
and implementation of the entity's internal control procedures and the
reliability of its financial reporting
• Balance of power: The simplest balance of power is very common: require
that the President be a different person from the Treasurer. This application
of separation of power is further developed in companies where separate
divisions check and balance each other's actions. One group may propose
company-wide administrative changes, another group reviews and can veto
the changes, and a third group checks that the interests of people (customers,
shareholders, employees) outside the three groups are being met.
• Remuneration: Performance-based remuneration is designed to relate some
proportion of salary to individual performance. It may be in the form of cash
or non-cash payments such as shares and share options, superannuation or
other benefits. Such incentive schemes, however, are reactive in the sense
that they provide no mechanism for preventing mistakes or opportunistic
behaviour, and can elicit myopic behaviour.
[edit] External corporate governance controls
External corporate governance controls encompass the controls external stakeholders exercise
over the organisation. Examples include:
• competition
• debt covenants
• demand for and assessment of performance information (especially financial
statements)
• government regulations
• managerial labour market
• media pressure
• takeovers
[edit] Systemic problems of corporate governance
• Demand for information: In order to influence the directors, the shareholders
must combine with others to form a significant voting group which can pose a
real threat of carrying resolutions or appointing directors at a general
meeting.
• Monitoring costs: A barrier to shareholders using good information is the cost
of processing it, especially to a small shareholder. The traditional answer to
this problem is the efficient market hypothesis (EMH), which asserts that
financial markets are informationally efficient and suggests that the small
shareholder will free ride on the judgements of larger professional investors.
• Supply of accounting information: Financial accounts form a crucial link in
enabling providers of finance to monitor directors. Imperfections in the
financial reporting process will cause imperfections in the effectiveness of
corporate governance. This should, ideally, be corrected by the working of
the external auditing process.
[edit] Role of the accountant
Financial reporting is a crucial element necessary for the corporate governance system to
function effectively.[8] Accountants and auditors are the primary providers of information to capital
market participants. The directors of the company should be entitled to expect that management
prepare the financial information in compliance with statutory and ethical obligations, and rely
on auditors' competence.
Current accounting practice allows a degree of choice in determining the method of
measurement, criteria for recognition, and even the definition of the accounting entity. The
exercise of this choice to improve apparent performance (popularly known as creative accounting)
imposes extra information costs on users. In the extreme, it can involve non-disclosure of
information.
One area of concern is whether the auditing firm acts as both the independent auditor and
management consultant to the firm they are auditing. This may result in a conflict of interest
which places the integrity of financial reports in doubt due to client pressure to appease
management. The power of the corporate client to initiate and terminate management consulting
services and, more fundamentally, to select and dismiss accounting firms contradicts the concept
of an independent auditor. Changes enacted in the United States in the form of the Sarbanes-
Oxley Act (in response to the Enron situation as noted below) prohibit accounting firms from
providing both auditing and management consulting services. Similar provisions are in place
under Clause 49 of the SEBI Listing Agreement in India.
The Enron collapse is an example of misleading financial reporting. Enron concealed huge losses
by creating illusions that a third party was contractually obliged to pay the amount of any losses.
However, the third party was an entity in which Enron had a substantial economic stake. In
discussions of accounting practices with Arthur Andersen, the partner in charge of auditing,
disagreements inevitably led to the client prevailing.
However, good financial reporting is not a sufficient condition for the effectiveness of corporate
governance if users don't process it, or if the informed user is unable to exercise a monitoring
role due to high costs (see Systemic problems of corporate governance above).[citation needed]
[edit] Regulation
[edit] Rules versus principles
Rules are typically thought to be simpler to follow than principles, demarcating a clear line
between acceptable and unacceptable behaviour. Rules also reduce discretion on the part of
individual managers or auditors.
In practice rules can be more complex than principles. They may be ill-equipped to deal with
new types of transactions not covered by the code. Moreover, even if clear rules are followed,
one can still find a way to circumvent their underlying purpose; this is harder to achieve if one is
bound by a broader principle.
Principles, on the other hand, are a form of self-regulation. They allow the sector to determine what
standards are acceptable or unacceptable. They also pre-empt over-zealous legislation that might
not be practical.
[edit] Enforcement
Enforcement can affect the overall credibility of a regulatory system. It both deters bad actors
and levels the competitive playing field. Nevertheless, greater enforcement is not always better,
for taken too far it can dampen valuable risk-taking. In practice, however, this is largely a
theoretical, as opposed to a real, risk. There are various integrated governance, risk and
compliance solutions available to capture information in order to evaluate risk and to identify
gaps in the organization’s principles and processes. This type of software is based on project
management style methodologies such as the ABACUS methodology which attempts to unify
the management of these areas, rather than treat them as separate entities.
[edit] Action Beyond Obligation
Enlightened boards regard their mission as helping management lead the company. They are
more likely to be supportive of the senior management team. Because enlightened directors
strongly believe that it is their duty to involve themselves in an intellectual analysis of how the
company should move forward into the future, most of the time, the enlightened board is aligned
on the critically important issues facing the company.
Unlike traditional boards, enlightened boards do not feel hampered by the rules and regulations
of the Sarbanes-Oxley Act. Unlike standard boards that aim to comply with regulations,
enlightened boards regard compliance with regulations as merely a baseline for board
performance. Enlightened directors go far beyond merely meeting the requirements on a
checklist. They do not need Sarbanes-Oxley to mandate that they protect values and ethics or
monitor CEO performance.
At the same time, enlightened directors recognize that it is not their role to be involved in the
day-to-day operations of the corporation. They lead by example. Overall, what most
distinguishes enlightened directors from traditional and standard directors is the passionate
obligation they feel to engage in the day-to-day challenges and strategizing of the company.
Enlightened boards can be found in very large, complex companies, as well as smaller
companies.[9]
[edit] Proposals
The book Money for Nothing suggests importing from England the concept of term limits to
prevent independent directors from becoming too close to management and demanding that
directors invest a meaningful amount of their own money (not grants of stock or options that they
receive free) to ensure that the directors' interests align with those of average investors.[10]
Another proposal is for the government to allow poorly-managed businesses to go bankrupt,
since after a filing, directors have to cover more of their own legal bills and are frequently sued
by bankruptcy trustees as well as investors.[11]
[edit] Corporate governance models around the world
Although the US model of corporate governance is the most widely known, there is considerable
variation in corporate governance models around the world. The intricate shareholding
structures of keiretsus in Japan, the heavy presence of banks in the equity of German firms [9], the
chaebols in South Korea and many others are examples of arrangements which try to respond to
the same corporate governance challenges as in the US.
In the United States, the main problem is the conflict of interest between widely-dispersed
shareholders and powerful managers. In Europe, the main problem is that the voting ownership is
tightly-held by families through pyramidal ownership and dual shares (voting and nonvoting).
This can lead to "self-dealing", where the controlling families favor subsidiaries for which they
have higher cash flow rights.[12]
[edit] Anglo-American Model
There are many different models of corporate governance around the world. These differ
according to the variety of capitalism in which they are embedded. The liberal model that is
common in Anglo-American countries tends to give priority to the interests of shareholders. The
coordinated model that one finds in Continental Europe and Japan also recognizes the interests of
workers, managers, suppliers, customers, and the community. Each model has its own distinct
competitive advantage. The liberal model of corporate governance encourages radical innovation
and cost competition, whereas the coordinated model of corporate governance facilitates
incremental innovation and quality competition. However, there are important differences
between the U.S. recent approach to governance issues and what has happened in the UK. In the
United States, a corporation is governed by a board of directors, which has the power to choose an
executive officer, usually known as the chief executive officer. The CEO has broad power to
manage the corporation on a daily basis, but needs to get board approval for certain major
actions, such as hiring his/her immediate subordinates, raising money, acquiring another
company, major capital expansions, or other expensive projects. Other duties of the board may
include policy setting, decision making, monitoring management's performance, or corporate
control.
The board of directors is nominally selected by and responsible to the shareholders, but the bylaws
of many companies make it difficult for all but the largest shareholders to have any influence
over the makeup of the board; normally, individual shareholders are not offered a choice of
board nominees among which to choose, but are merely asked to rubberstamp the nominees of
the sitting board. Perverse incentives have pervaded many corporate boards in the developed
world, with board members beholden to the chief executive whose actions they are intended to
oversee. Frequently, members of the boards of directors are CEOs of other corporations, which
some[13] see as a conflict of interest.
[edit] Codes and guidelines
Corporate governance principles and codes have been developed in different countries and issued
from stock exchanges, corporations, institutional investors, or associations (institutes) of
directors and managers with the support of governments and international organizations. As a
rule, compliance with these governance recommendations is not mandated by law, although the
codes linked to stock exchange listing requirements may have a coercive effect.
For example, companies quoted on the London and Toronto Stock Exchanges formally need not
follow the recommendations of their respective national codes. However, they must disclose
whether they follow the recommendations in those documents and, where not, they should
provide explanations concerning divergent practices. Such disclosure requirements exert a
significant pressure on listed companies for compliance.
In the United States, companies are primarily regulated by the state in which they incorporate
though they are also regulated by the federal government and, if they are public, by their stock
exchange. The highest number of companies are incorporated in Delaware, including more than
half of the Fortune 500. This is due to Delaware's generally business-friendly corporate legal
environment and the existence of a state court dedicated solely to business issues (Delaware Court
of Chancery).
Most states' corporate laws generally follow the American Bar Association's Model Business
Corporation Act. While Delaware does not follow the Act, it still considers its provisions and
several prominent Delaware justices, including former Delaware Supreme Court Chief Justice E.
Norman Veasey, participate on ABA committees.
One issue that has been raised since the Disney decision[14] in 2005 is the degree to which
companies manage their governance responsibilities; in other words, do they merely try to
supersede the legal threshold, or should they create governance guidelines that ascend to the
level of best practice? For example, the guidelines issued by associations of directors (see Section
3 above), corporate managers and individual companies tend to be wholly voluntary. For
instance, the GM Board Guidelines reflect the company’s efforts to improve its own governance
capacity. Such documents, however, may have a wider multiplying effect prompting other
companies to adopt similar documents and standards of best practice.
One of the most influential sets of guidelines has been the 1999 OECD Principles of Corporate
Governance, revised in 2004. The OECD remains a proponent of corporate governance
principles throughout the world.
Building on the work of the OECD, other international organisations, private sector associations
and more than 20 national corporate governance codes, the United Nations Intergovernmental
Working Group of Experts on International Standards of Accounting and Reporting (ISAR) has produced
voluntary Guidance on Good Practices in Corporate Governance Disclosure. This internationally
agreed[15] benchmark consists of more than fifty distinct disclosure items across five broad
categories:[16]
• Auditing
• Board and management structure and process
• Corporate responsibility and compliance
• Financial transparency and information disclosure
• Ownership structure and exercise of control rights
The World Business Council for Sustainable Development (WBCSD) has done work on corporate
governance, particularly on accountability and reporting, and in 2004 created an Issue Management
Tool: Strategic challenges for business in the use of corporate responsibility codes, standards, and
frameworks. This document aims to provide general information, a "snap-shot" of the landscape
and a perspective from a think-tank/professional association on a few key codes, standards and
frameworks relevant to the sustainability agenda.
[edit] Ownership structures
Ownership structures refer to the various patterns in which shareholders hold stakes with
respect to a certain group of firms. The concept is frequently employed by policy-makers and
researchers in their analyses of corporate governance within a country or business group, and
an ownership structure can change as the company's stakeholders buy and sell stakes.
Generally, ownership structures are identified by using some observable measures of ownership
concentration (i.e. concentration ratios) and then making a sketch showing its visual
representation. The idea behind the concept of ownership structures is to be able to understand
the way in which shareholders interact with firms and, whenever possible, to locate the ultimate
owner of a particular group of firms. Some examples of ownership structures include pyramids,
cross-share holdings, rings, and webs.
[edit] Corporate governance and firm performance
In its 'Global Investor Opinion Survey' of over 200 institutional investors first undertaken in
2000 and updated in 2002, McKinsey found that 80% of the respondents would pay a premium for
well-governed companies. They defined a well-governed company as one that had mostly outside
directors with no management ties, undertook formal evaluation of its directors, and
was responsive to investors' requests for information on governance issues. The size of the
premium varied by market, from 11% for Canadian companies to around 40% for companies
where the regulatory backdrop was least certain (those in Morocco, Egypt and Russia).
Other studies have linked broad perceptions of the quality of companies to superior share price
performance. In a study of five year cumulative returns of Fortune Magazine's survey of 'most
admired firms', Antunovich et al. found that those "most admired" had an average return of
125%, whilst the 'least admired' firms returned 80%. In a separate study Business Week enlisted
institutional investors and 'experts' to assist in differentiating between boards with good and bad
governance and found that companies with the highest rankings had the highest financial returns.
On the other hand, research into the relationship between specific corporate governance controls
and some definitions of firm performance has been mixed and often weak. The following
examples are illustrative.
[edit] Board composition
Some researchers have found support for the relationship between frequency of meetings and
profitability. Others have found a negative relationship between the proportion of external
directors and profitability, while others found no relationship between external board
membership and profitability. In a recent paper Bhagat and Black found that companies with
more independent boards are not more profitable than other companies. It is unlikely that board
composition has a direct impact on profitability, one measure of firm performance.
[edit] Remuneration/Compensation
The results of previous research on the relationship between firm performance and executive
compensation have failed to find consistent and significant relationships between executives'
remuneration and firm performance. Low average levels of pay-performance alignment do not
necessarily imply that this form of governance control is inefficient. Not all firms experience the
same levels of agency conflict, and external and internal monitoring devices may be more
effective for some than for others.
Some researchers have found that the largest CEO performance incentives came from ownership
of the firm's shares, while other researchers found that the relationship between share ownership
and firm performance was dependent on the level of ownership. The results suggest that
increases in ownership above 20% cause management to become more entrenched, and less
interested in the welfare of their shareholders.
Some argue that firm performance is positively associated with share option plans and that these
plans direct managers' energies and extend their decision horizons toward the long-term, rather
than the short-term, performance of the company. However, that point of view came under
substantial criticism in the wake of various security scandals, including mutual fund timing
episodes and, in particular, the backdating of option grants as documented by University of Iowa
academic Erik Lie and reported by James Bandler and Charles Forelle of the Wall Street Journal.
Even before the negative influence on public opinion caused by the 2006 backdating scandal, use
of options faced various criticisms. A particularly forceful and long running argument concerned
the interaction of executive options with corporate stock repurchase programs. Numerous
authorities (including U.S. Federal Reserve Board economist Weisbenner) determined options
may be employed in concert with stock buybacks in a manner contrary to shareholder interests.
These authors argued that, in part, corporate stock buybacks for U.S. Standard & Poor's 500
companies surged to a $500 billion annual rate in late 2006 because of the impact of options. A
compendium of academic works on the option/buyback issue is included in the study Scandal by
author M. Gumport issued in 2006.
A combination of accounting changes and governance issues led options to become a less
popular means of remuneration as 2006 progressed, and various alternative implementations of
buybacks surfaced to challenge the dominance of "open market" cash buybacks as the preferred
means of implementing a share repurchase plan.

Agenda 21
Agenda 21 is a programme run by the United Nations (UN) related to sustainable development,
adopted at the first global summit to discuss environment and development issues. It is a
comprehensive blueprint of action to be taken globally, nationally and locally by organizations
of the UN, governments, and major groups in every area in which humans directly affect the
environment.
Development of Agenda 21
The full text of Agenda 21 was revealed at the United Nations Conference on Environment and
Development (Earth Summit), held in Rio de Janeiro on June 14, 1992, where 178 governments
voted to adopt the program. The final text was the result of drafting, consultation and
negotiation, beginning in 1989 and culminating at the two-week conference. The number 21
refers to an agenda for the 21st century.
[edit] Rio+5
In 1997, the General Assembly of the UN held a special session to appraise five years of progress
on the implementation of Agenda 21 (Rio +5). The Assembly recognized progress as 'uneven'
and identified key trends including increasing globalization, widening inequalities in income and a
continued deterioration of the global environment. A new General Assembly Resolution (S-19/2)
promised further action.
[edit] The Johannesburg Summit
The Johannesburg Plan of Implementation, agreed at the World Summit on Sustainable Development
(Earth Summit 2002) affirmed UN commitment to 'full implementation' of Agenda 21, alongside
achievement of the Millennium Development Goals and other international agreements.
[edit] Implementation
The Commission on Sustainable Development acts as a high level forum on sustainable
development and has acted as preparatory committee for summits and sessions on the
implementation of Agenda 21. The United Nations Division for Sustainable Development acts as
the secretariat to the Commission and works 'within the context of' Agenda 21. Implementation
by member states remains essentially voluntary.
[edit] Structure and contents
There are 40 chapters in Agenda 21, divided into four main sections.
[edit] Section I: Social and Economic Dimensions
Includes combating poverty, changing consumption patterns, population and demographic
dynamics, promoting health, promoting sustainable settlement patterns and integrating
environment and development into decision-making.
[edit] Section II: Conservation and Management of Resources for
Development
Includes atmospheric protection, combating deforestation, protecting fragile environments,
conservation of biological diversity (biodiversity), and control of pollution.
[edit] Section III: Strengthening the Role of Major Groups
Includes the roles of children and youth, women, NGOs, local authorities, business and workers.
[edit] Section IV: Means of Implementation
Includes science, technology transfer, education, international institutions and implementation
mechanisms, and financial mechanisms.
[edit] Local Agenda 21
The implementation of Agenda 21 was intended to involve action at international, national,
regional and local levels. Some national and state governments have legislated or advised that
local authorities take steps to implement the plan locally, as recommended in Chapter 28 of the
document. Such programmes are often known as 'Local Agenda 21' or 'LA21'.[1]

Decision support system


From Wikipedia, the free encyclopedia
Example of a Decision Support System for John Day Reservoir.

Decision support systems constitute a class of computer-based information systems, including
knowledge-based systems, that support decision-making activities. DSSs serve the management
level of an organization and help make decisions that may be rapidly changing and not easily
specified in advance.


[edit] Overview
A Decision Support System (DSS) is a class of information systems (including but not limited
to computerized systems) that support business and organizational decision-making activities. A
properly designed DSS is an interactive software-based system intended to help decision makers
compile useful information from a combination of raw data, documents, personal knowledge, or
business models to identify and solve problems and make decisions.

Typical information that a decision support application might gather and present includes:
• inventories of current information assets (including legacy and
relational data sources, cubes, data warehouses, and data marts),
• comparative sales figures between one week and the next,
• projected revenue figures based on new product sales assumptions.

[edit] History
According to Keen (1978)[1], the concept of decision support has evolved from two main areas of
research: The theoretical studies of organizational decision making done at the Carnegie Institute
of Technology during the late 1950s and early 1960s, and the technical work on interactive
computer systems, mainly carried out at the Massachusetts Institute of Technology in the 1960s.[1] It
is considered that the concept of DSS became an area of research of its own in the middle of the
1970s, before gaining in intensity during the 1980s. In the middle and late 1980s, executive
information systems (EIS), group decision support systems (GDSS), and organizational decision
support systems (ODSS) evolved from the single user and model-oriented DSS.
According to Sol (1987)[2] the definition and scope of DSS have been migrating over the years. In
the 1970s a DSS was described as "a computer based system to aid decision making". In the late
1970s the DSS movement started focusing on "interactive computer-based systems which help
decision-makers utilize data bases and models to solve ill-structured problems". In the 1980s
DSS were expected to provide systems "using suitable and available technology to improve
effectiveness of managerial and professional activities", and by the end of the 1980s DSS faced a
new challenge: the design of intelligent workstations.[2]
In 1987 Texas Instruments completed development of the Gate Assignment Display System
(GADS) for United Airlines. This decision support system is credited with significantly reducing
travel delays by aiding the management of ground operations at various airports, beginning with
O'Hare International Airport in Chicago and Stapleton Airport in Denver, Colorado.[3][4]
Beginning in about 1990, data warehousing and on-line analytical processing (OLAP) began
broadening the realm of DSS. As the turn of the millennium approached, new Web-based
analytical applications were introduced.
The advent of increasingly capable reporting technologies has seen DSS begin to emerge as a
critical component of management design. Examples of this can be seen in the intense amount of
discussion of DSS in the education environment.
DSS also have a weak connection to the user interface paradigm of hypertext. Both the University of
Vermont PROMIS system (for medical decision making) and the Carnegie Mellon ZOG/KMS
system (for military and business decision making) were decision support systems which also
were major breakthroughs in user interface research. Furthermore, although hypertext researchers
have generally been concerned with information overload, certain researchers, notably Douglas
Engelbart, have been focused on decision makers in particular.

[edit] Taxonomies
As with the definition, there is no universally-accepted taxonomy of DSS either. Different authors
propose different classifications. Using the relationship with the user as the criterion,
Haettenschwiler[5] differentiates passive, active, and cooperative DSS. A passive DSS is a system
that aids the process of decision making, but that cannot bring out explicit decision suggestions
or solutions. An active DSS can bring out such decision suggestions or solutions. A cooperative
DSS allows the decision maker (or its advisor) to modify, complete, or refine the decision
suggestions provided by the system, before sending them back to the system for validation. The
system again improves, completes, and refines the suggestions of the decision maker and sends
them back to her for validation. The whole process then starts again, until a consolidated solution
is generated.
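The cooperative loop described above can be sketched as a simple iterative exchange between system and decision maker. The function and callback names below are invented for illustration and do not correspond to any real DSS product.

```python
# Hypothetical sketch of Haettenschwiler's cooperative DSS loop: the
# system proposes, the decision maker refines, the system validates,
# and the exchange repeats until a consolidated solution emerges.

def cooperative_dss(initial_data, refine, validate, max_rounds=10):
    """Iterate system proposals and user refinements toward consensus.

    refine(suggestion)   -- the decision maker's modification of a suggestion
    validate(suggestion) -- system check; returns (improved_suggestion, accepted)
    """
    suggestion = initial_data
    for _ in range(max_rounds):
        user_version = refine(suggestion)              # decision maker edits
        suggestion, accepted = validate(user_version)  # system re-checks
        if accepted:                                   # consolidated solution
            return suggestion
    return suggestion  # best effort after max_rounds

# Toy usage: the system caps a budget figure at a feasible maximum,
# the user rounds it; the loop stops once both sides agree.
result = cooperative_dss(
    initial_data=120.0,
    refine=lambda s: round(s),                       # user rounds the figure
    validate=lambda s: (min(s, 100.0), s <= 100.0),  # system caps at 100
)
print(result)  # prints 100
```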
Another taxonomy for DSS has been created by Daniel Power. Using the mode of assistance as
the criterion, Power differentiates communication-driven DSS, data-driven DSS, document-
driven DSS, knowledge-driven DSS, and model-driven DSS.[6]
• A communication-driven DSS supports more than one person working on a
shared task; examples include integrated tools like Microsoft's NetMeeting or
Groove[7]
• A data-driven DSS or data-oriented DSS emphasizes access to and
manipulation of a time series of internal company data and, sometimes,
external data.
• A document-driven DSS manages, retrieves, and manipulates unstructured
information in a variety of electronic formats.
• A knowledge-driven DSS provides specialized problem-solving expertise
stored as facts, rules, procedures, or in similar structures.[6]
• A model-driven DSS emphasizes access to and manipulation of a statistical,
financial, optimization, or simulation model. Model-driven DSS use data and
parameters provided by users to assist decision makers in analyzing a
situation; they are not necessarily data-intensive. Dicodess is an example of an
open source model-driven DSS generator[8].
Using scope as the criterion, Power[9] differentiates enterprise-wide DSS and desktop DSS. An
enterprise-wide DSS is linked to large data warehouses and serves many managers in the
company. A desktop, single-user DSS is a small system that runs on an individual manager's PC.
[edit] Architecture

Design of a Drought Mitigation Decision Support System.

Three fundamental components of a DSS architecture are:[5][6][10][11][12]
1. the database (or knowledge base),
2. the model (i.e., the decision context and user criteria), and
3. the user interface.
The users themselves are also important components of the architecture.[5][12]
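The three components can be made concrete with a minimal sketch. The inventory data, decision rule, and names below are invented for illustration; no particular DSS is implied.

```python
# Minimal illustration of the three DSS components named above:
# a database, a model, and a user interface.

inventory_db = {"widgets": 40, "gadgets": 75}    # 1. the database

def reorder_model(stock, threshold=50):          # 2. the model
    """Decision rule: flag items whose stock falls below the threshold."""
    return sorted(item for item, qty in stock.items() if qty < threshold)

def user_interface(recommendations):             # 3. the user interface
    """Present the model's output; the user makes the final decision."""
    for item in recommendations:
        print(f"Suggest reorder: {item}")

user_interface(reorder_model(inventory_db))      # prints "Suggest reorder: widgets"
```

Even in this toy form, the separation matters: the data, the decision rule, and the presentation can each be replaced independently, which is what lets a DSS grow from a desktop tool into an enterprise-wide system.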
[edit] Development Frameworks
DSS are not entirely different from other systems and require a structured development approach.
Such a framework includes people, technology, and the development approach.[10]
DSS technology levels (of hardware and software) may include:
1. The actual application that will be used by the user. This is the part of the
application that allows the decision maker to make decisions in a particular
problem area. The user can act upon that particular problem.
2. The generator: a hardware/software environment that allows people to
easily develop specific DSS applications. This level makes use of CASE tools or
systems such as Crystal, AIMMS, and iThink.
3. Tools: lower-level hardware/software on which DSS generators are built,
including special languages, function libraries and linking modules.
An iterative developmental approach allows for the DSS to be changed and redesigned at various
intervals. Once the system is designed, it will need to be tested and revised for the desired
outcome.
[edit] Classifying DSS
There are several ways to classify DSS applications. Not every DSS fits neatly into one category;
many are a mix of two or more architectures.
Holsapple and Whinston[13] classify DSS into the following six frameworks: Text-oriented DSS,
Database-oriented DSS, Spreadsheet-oriented DSS, Solver-oriented DSS, Rule-oriented DSS,
and Compound DSS.
A compound DSS is the most popular classification for a DSS. It is a hybrid system that includes
two or more of the five basic structures described by Holsapple and Whinston[13].
The support given by DSS can be separated into three distinct, interrelated categories[14]:
Personal Support, Group Support, and Organizational Support.
DSS components may be classified as:
1. Inputs: Factors, numbers, and characteristics to analyze
2. User Knowledge and Expertise: Inputs requiring manual analysis by the
user
3. Outputs: Transformed data from which DSS "decisions" are generated
4. Decisions: Results generated by the DSS based on user criteria
DSSs which perform selected cognitive decision-making functions and are based on artificial
intelligence or intelligent agents technologies are called Intelligent Decision Support Systems
(IDSS).[15]
The nascent field of Decision engineering treats the decision itself as an engineered object, and
applies engineering principles such as Design and Quality assurance to an explicit representation
of the elements that make up a decision.
[edit] Applications
As mentioned above, a DSS can in theory be built in any knowledge domain.
One example is the Clinical decision support system for medical diagnosis. Other examples include a
bank loan officer verifying the credit of a loan applicant or an engineering firm that has bids on
several projects and wants to know if they can be competitive with their costs.
DSS is extensively used in business and management. Executive dashboard and other business
performance software allow faster decision making, identification of negative trends, and better
allocation of business resources.
A growing area of DSS application, concepts, principles, and techniques is in agricultural
production and marketing for sustainable development. For example, the DSSAT4 package[16][17],
developed through financial support of USAID during the 1980s and 1990s, has allowed rapid
assessment of several agricultural production systems around the world to facilitate decision-
making at the farm and policy levels. There are, however, many constraints to the successful
adoption of DSS in agriculture[18].
DSS are also prevalent in forest management where the long planning time frame demands
specific requirements. All aspects of Forest management, from log transportation, harvest
scheduling to sustainability and ecosystem protection have been addressed by modern DSSs. A
comprehensive list and discussion of all available systems in forest management is being
compiled under the COST action Forsys.
A specific example concerns the Canadian National Railway system, which tests its equipment on a
regular basis using a decision support system. A problem faced by any railroad is worn-out or
defective rails, which can result in hundreds of derailments per year. Under a DSS, CN managed
to decrease the incidence of derailments at the same time other companies were experiencing an
increase.
Beyond the applications already mentioned, DSS can be used in any field where organization is
necessary. For example, a DSS can be designed to help make decisions on the stock market, or to
decide which area or segment to market a product toward.
CACI has begun integrating simulation and decision support systems. CACI defines three
levels of simulation model maturity. “Level 1” models are traditional desktop simulation models
that are executed within the native software package. These often require a simulation expert to
implement modifications, run scenarios, and analyze results. “Level 2” models embed the
modeling engine in a web application that allows the decision maker to make process and
parameter changes without the assistance of an analyst. “Level 3” models are also embedded in a
web-based application but are tied to real-time operational data. The execution of “level 3”
models can be triggered automatically based on this real-time data and the corresponding results
can be displayed on the manager’s desktop showing the prevailing trends and predictive
analytics given the current processes and state of the system. The advantage of this approach is
that “level 1” models developed for the FDA projects can migrate to “level 2 and 3” models in
support of decision making, production/operations management, process/work-flow
management, and predictive analytics. This approach involves developing and maintaining
reusable models that allow decision makers to easily define and extract business level
information (e.g., process metrics). “Level 1” models are decomposed into their business objects
and stored in a database. All process information is stored in the database, including activity,
resource, and costing data. The database becomes a template library that users can access to
build, change, and modify their own unique process flows and then use simulation to study their
performance in an iterative manner.
Developing a Mission Statement
1. Basically, the mission statement describes the overall purpose of the
organization.
2. If the organization elects to develop a vision statement before developing the
mission statement, ask “Why does the image, the vision exist -- what is its
purpose?” This purpose is often the same as the mission.
3. Developing a mission statement can be quite culture-specific, i.e., participants
may use methods ranging from highly analytical and rational to highly creative and
divergent, e.g., focused discussions, divergent experiences around daydreams,
sharing stories, etc. Therefore, discuss with the participants how they might like to
arrive at a description of their organizational mission.
4. When wording the mission statement, consider the organization's products,
services, markets, values, and concern for public image, and maybe priorities of
activities for survival.
5. Consider any changes that may be needed in wording of the mission statement
because of any new suggested strategies during a recent strategic planning
process.
6. Ensure that the wording of the mission is such that management and
employees can infer some order of priorities in how products and services are
delivered.
7. When refining the mission, a useful exercise is to add or delete a word from the
mission to see how the scope of the mission statement changes, and to assess how
concise its wording is.
8. Does the mission statement include sufficient description to clearly
distinguish the mission of the organization from that of other organizations?

Developing a Vision Statement


1. The vision statement includes a vivid description of the organization as it
effectively carries out its operations.
2. Developing a vision statement can be quite culture-specific, i.e., participants
may use methods ranging from highly analytical and rational to highly creative and
divergent, e.g., focused discussions, divergent experiences around daydreams,
sharing stories, etc. Therefore, discuss with the participants how they might like to
arrive at a description of their organizational vision.
3. Developing the vision can be the most enjoyable part of planning, but the part
where time easily gets away from you.
4. Note that originally, the vision was a compelling description of the state and
function of the organization once it had implemented the strategic plan, i.e., a very
attractive image toward which the organization was drawn and guided by the
strategic plan. Recently, the vision has become more of a motivational tool, too
often including highly idealistic phrasing and activities to which the organization
cannot realistically aspire.

Developing a Values Statement


1. Values represent the core priorities in the organization’s culture, including what
drives members’ priorities and how they truly act in the organization, etc. Values
are increasingly important in strategic planning. They often drive the intent and
direction for “organic” planners.
2. Developing a values statement can be quite culture-specific, i.e., participants
may use methods ranging from highly analytical and rational to highly creative and
divergent, e.g., focused discussions, divergent experiences around daydreams,
sharing stories, etc. Therefore, discuss with the participants how they might like to
arrive at a description of their organizational values.
3. Establish four to six core values from which the organization would like to
operate. Consider values of customers, shareholders, employees and the
community.
4. Notice any differences between the organization’s preferred values and its true
values (the values actually reflected by members’ behaviors in the organization).
Record each preferred value on a flash card, then have each member “rank” the
values with 1, 2, or 3 in terms of the priority needed by the organization, with 3
indicating the value is very important to the organization and 1 the least important.
Then go through the cards again to rank how people think the values are actually
being enacted in the organization with 3 indicating the values are fully enacted and
1 indicating the value is hardly reflected at all. Then address discrepancies where a
value is highly preferred (ranked with a 3), but hardly enacted (ranked with a 1).
5. Incorporate into the strategic plan actions to align actual behavior with
preferred behaviors.
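The scoring step above is simple enough to sketch as a small calculation. The example values and scores below are invented for illustration; a real exercise would use the organization's own cards.

```python
# Sketch of the card-ranking exercise: each value is scored 1-3 for
# preference and again for actual enactment, and large gaps (preferred 3,
# enacted 1) are flagged for attention in the strategic plan.

preferred = {"integrity": 3, "innovation": 3, "frugality": 1}
enacted   = {"integrity": 1, "innovation": 3, "frugality": 2}

def discrepancies(preferred, enacted, gap=2):
    """Return values whose preferred rank exceeds the enacted rank by >= gap."""
    return sorted(v for v in preferred
                  if preferred[v] - enacted[v] >= gap)

print(discrepancies(preferred, enacted))  # prints ['integrity']
```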

[edit] Mission, vision and values


Mission: Defines the fundamental purpose of an organization or an enterprise, succinctly
describing why it exists and what it does to achieve its Vision.
Vision: Defines the desired or intended future state of an organization or enterprise in terms of
its fundamental objective and/or strategic direction. Vision is a long term view, sometimes
describing how the organization would like the world in which it operates to be. For example a
charity working with the poor might have a vision statement which reads "A world without
poverty".
It is sometimes used to set out a 'picture' of the organization in the future. A vision statement
provides inspiration, the basis for all the organization's planning. It could answer the question:
"Where do we want to go?"
Values: Beliefs that are shared among the stakeholders of an organization. Values drive an
organization's culture and priorities.
Strategy: Strategy, narrowly defined, means "the art of the general" (from Greek strategos): a
combination of the ends (goals) for which the firm is striving and the means (policies) by which it
is seeking to get there.
[edit] Mission statements and vision statements
Organizations sometimes summarize goals and objectives into a mission statement and/or a
vision statement. Others begin with a vision and mission and use them to formulate goals and
objectives.
While the existence of a shared mission is extremely useful, many strategy specialists question
the requirement for a written mission statement. However, there are many models of strategic
planning that start with mission statements, so it is useful to examine them here.
• A Mission statement tells you the fundamental purpose of the organization.
It defines the customer and the critical processes. It informs you of the
desired level of performance.
• A Vision statement outlines what the organization wants to be, or how it
wants the world in which it operates to be. It concentrates on the future. It is
a source of inspiration. It provides clear decision-making criteria.
An advantage of having a statement is that it creates value for those who are exposed to it:
managers, employees and sometimes even customers.
Statements create a sense of direction and opportunity. They both are an essential part of the
strategy-making process.
Many people mistake the vision statement for the mission statement, and sometimes one is simply
used as a longer-term version of the other. The Vision should describe why it is important to achieve
the Mission. A Vision statement defines the purpose or broader goal for being in existence or in
the business and can remain the same for decades if crafted well. A Mission statement is more
specific to what the enterprise can achieve itself. Vision should describe what will be achieved in
the wider sphere if the organization and others are successful in achieving their individual
missions.
A mission statement can resemble a vision statement in a few companies, but that can be a grave
mistake. It can confuse people. The mission statement can galvanize the people to achieve
defined objectives, even if they are stretch objectives, provided it can be elucidated in SMART
(Specific, Measurable, Achievable, Relevant and Time-bound) terms. A mission statement
provides a path to realize the vision in line with its values. These statements have a direct bearing
on the bottom line and success of the organization.
Which comes first? The mission statement or the vision statement? That depends. If you have a
new start-up business, new program or plan to re-engineer your current services, then the vision
will guide the mission statement and the rest of the strategic plan. If you have an established
business where the mission is established, then many times, the mission guides the vision
statement and the rest of the strategic plan. Either way, you need to know your fundamental
purpose - the mission, your current situation in terms of internal resources and capabilities
(strengths and/or weaknesses) and external conditions (opportunities and/or threats), and where
you want to go - the vision for the future. It's important that you keep the end or desired result in
sight from the start.[citation needed]
Features of an effective vision statement include:
• Clarity and lack of ambiguity
• Vivid and clear picture
• Description of a bright future
• Memorable and engaging wording
• Realistic aspirations
• Alignment with organizational values and culture
To become really effective, an organizational vision statement must (the theory states) become
assimilated into the organization's culture. Leaders have the responsibility of communicating the
vision regularly, creating narratives that illustrate the vision, acting as role-models by embodying
the vision, creating short-term objectives compatible with the vision, and encouraging others to
craft their own personal vision compatible with the organization's overall vision. In addition,
mission statements need to be subjected to an internal assessment and an external assessment.
The internal assessment should focus on how members inside the organization interpret their
mission statement. The external assessment, which includes all of the business's stakeholders, is
valuable since it offers a different perspective. Discrepancies between these two assessments can
give insight into the effectiveness of the organization's mission statement.
Another approach to defining Vision and Mission is to pose two questions. Firstly, "What
aspirations does the organization have for the world in which it operates and has some influence
over?", and following on from this, "What can (and /or does) the organization do or contribute to
fulfill those aspirations?". The succinct answer to the first question provides the basis of the
Vision Statement. The answer to the second question determines the Mission Statement.
[edit] Methodologies
There are many approaches to strategic planning but typically a three-step process may be used:
• Situation - evaluate the current situation and how it came about.
• Target - define goals and/or objectives (sometimes called ideal state)
• Path - map a possible route to the goals/objectives
One alternative approach is called Draw-See-Think
• Draw - what is the ideal image or the desired end state?
• See - what is today's situation? What is the gap from ideal and why?
• Think - what specific actions must be taken to close the gap between today's
situation and the ideal state?
• Plan - what resources are required to execute the activities?
An alternative to the Draw-See-Think approach is called See-Think-Draw
• See - what is today's situation?
• Think - define goals/objectives
• Draw - map a route to achieving the goals/objectives
In other terms, strategic planning can proceed as follows:
• Vision - Define the vision and set a mission statement with hierarchy of goals
and objectives
• SWOT - Analysis conducted according to the desired goals
• Formulate - Formulate actions and processes to be taken to attain these
goals
• Implement - Implementation of the agreed upon processes
• Control - Monitor and get feedback from implemented processes to fully
control the operation

[edit] Situational analysis


When developing strategies, it is important to analyze the organization and its environment both
as they are at the moment and as they may develop in the future. The analysis has to be executed
at an internal level as well as an external level, to identify all opportunities and threats in the
external environment as well as the strengths and weaknesses of the organization.
There are several factors to assess in the external situation analysis:
1. Markets (customers)
2. Competition
3. Technology
4. Supplier markets
5. Labor markets
6. The economy
7. The regulatory environment
It is rare to find all seven of these factors having critical importance. It is also uncommon to find
that the first two - markets and competition - are not of critical importance. (Bradford "External
Situation - What to Consider")
Analysis of the external environment normally focuses on the customer. Management should be
visionary in formulating customer strategy, and should do so by thinking about market
environment shifts, how these could impact customer sets, and whether those customer sets are
the ones the company wishes to serve.
Analysis of the competitive environment is also performed, many times based on the framework
suggested by Michael Porter.
[edit] Goals, objectives and targets
Strategic planning is a very important business activity. It is also important in public-sector
areas such as education, and it is practiced widely, both informally and formally. Strategic
planning and decision processes should end with objectives and a roadmap of ways to achieve those
objectives.
One of the core goals when drafting a strategic plan is to develop it in a way that is easily
translatable into action plans. Most strategic plans address high-level initiatives and over-arching
goals but are not articulated (translated) into the day-to-day projects and tasks that will be
required to achieve the plan. Terminology or word choice, as well as the level at which a plan is
written, are both easy ways to fail at translating a strategic plan so that it makes sense and is
executable for others. Often, plans are filled with conceptual terms which don't tie into day-to-day
realities for the staff expected to carry out the plan.
The following terms have been used in strategic planning: desired end states, plans, policies,
goals, objectives, strategies, tactics and actions. Definitions vary, overlap and fail to achieve
clarity. The most common of these concepts are specific, time bound statements of intended
future results and general and continuing statements of intended future results, which most
models refer to as either goals or objectives (sometimes interchangeably).
One model of organizing objectives uses hierarchies. The items listed above may be organized in
a hierarchy of means and ends and numbered as follows: Top Rank Objective (TRO), Second
Rank Objective, Third Rank Objective, and so on. From any rank, the objective in a lower rank
answers the question "How?" and the objective in a higher rank answers the question "Why?" The
exception is the Top Rank Objective (TRO): there is no answer to the "Why?" question; that is
how the TRO is defined.
People typically have several goals at the same time. "Goal congruency" refers to how well the
goals combine with each other. Does goal A appear compatible with goal B? Do they fit together
to form a unified strategy? "Goal hierarchy" consists of the nesting of one or more goals within
other goal(s).
One approach recommends having short-term goals, medium-term goals, and long-term goals. In
this model, one can expect to attain short-term goals fairly easily: they stand just slightly above
one's reach. At the other extreme, long-term goals appear very difficult, almost impossible to
attain. Strategic management jargon sometimes refers to "Big Hairy Audacious Goals" (BHAGs)
in this context. Using one goal as a stepping-stone to the next involves goal sequencing. A
person or group starts by attaining the easy short-term goals, then steps up to the medium-term,
then to the long-term goals. Goal sequencing can create a "goal stairway". In an organizational
setting, the organization may co-ordinate goals so that they do not conflict with each other. The
goals of one part of the organization should mesh compatibly with those of other parts of the
organization.

Difference Between Goals and Objectives

Goals vs Objectives
When you have something you want to accomplish, it is important to set both goals and
objectives. Once you learn the difference between goals and objectives, you will realize how
important it is to have both. Goals without objectives can never be accomplished, while
objectives without goals will never get you where you want to be. The two concepts are separate
but related and will help you become who you want to be.
Definition of Goals and Objectives
Goals are long-term aims that you want to accomplish.
Objectives are concrete attainments that can be achieved by following a certain number of steps.
Goals and objectives are often used interchangeably, but the main difference comes in their level
of concreteness. Objectives are very concrete, whereas goals are less structured.

Remembering the Differences between Goals and Objectives


When you are giving a presentation to a potential or current employer, knowing the difference
between goals and objectives can be crucial to the acceptance of your proposal. Here is an easy
way to remember how they differ:
Goals – has the word “go” in it. Your goals should go forward in a specific direction. However,
goals are more about everything you accomplish on your journey, rather than getting to that
distant point. Goals will often go into undiscovered territory and you therefore can’t even know
where the end will be.
Objectives – has the word “object” in it. Objects are concrete. They are something that you can
hold in your hand. Because of this, your objectives can be clearly outlined with timelines,
budgets, and personnel needs. Every area of each objective should be firm.
Measuring Goals and Objectives
Goals – unfortunately, there is no set way in which to measure the accomplishment of your
goals. You may feel that you are closer, but since goals are inherently nebulous, you can never say
for sure that you have definitively achieved them.
Objectives – can be measured. Simply phrase your objective in the form of a question. For
example, “I want to accomplish x in y amount of time” becomes “Did I accomplish x in y
amount of time?” This can easily be answered in a yes or no form.
Examples of Goals and Objectives
Goals – I want to be a better ball player. I want to learn more about Chinese history. I want to
maximize my professional performance.
Objectives – I want to memorize the periodic table before my next quiz. I want to increase my
sales by 10% this month. I want to learn to play “Freebird” on the guitar.
Summary:
1. Goals and objectives are both tools for accomplishing what you want to achieve.
2. Goals are long term and objectives are usually accomplished in the short or medium term.
3. Goals are nebulous and you can’t definitively say you have accomplished one whereas the
success of an objective can easily be measured.
4. Goals are hard to quantify or put in a timeline, but objectives should be given a timeline to be
more effective.

Economic value added


In corporate finance, Economic Value Added or EVA is an estimate of economic profit, which
can be determined, among other ways, by making corrective adjustments to GAAP accounting,
including deducting the opportunity cost of equity capital. The concept of EVA is in a sense
nothing more than the traditional, commonsense idea of "profit". However, the utility of having a
separate and more precisely defined term such as EVA or Residual Cash Flow is that it makes a
clear separation from dubious accounting adjustments that have enabled businesses such as
Enron to report profits while in fact being on the final approach to insolvency. EVA can be
measured as Net Operating Profit After Taxes (NOPAT) less the money cost of capital.
EVA is similar to Residual Income (RI), although under some definitions there may be minor
technical differences between the two (for example, adjustments that might be made to NOPAT
before it is suitable for the formula below). Another, much older term for economic value added
is Residual Cash Flow. In all three cases, the money cost of capital refers to an amount of money
rather than a proportional cost (% cost of capital). The amortization of goodwill, the
capitalization of brand advertising, and other similar adjustments are the translations that can be
made to economic profit to turn it into EVA. EVA is a registered trademark of its developer,
Stern Stewart & Co.

Calculating EVA
In the field of corporate finance, Economic Value Added is a way to determine the value created,
above the required return, for the shareholders of a company.
The basic formula is:

EVA = (r − c) × K = NOPAT − c × K

where

r = NOPAT / K, called the Return on Invested Capital (ROIC).
r is the firm's return on capital, NOPAT is the Net Operating Profit After Tax, c is the Weighted
Average Cost of Capital (WACC) and K is capital employed. To put it simply, EVA is the profit
earned by the firm less the cost of financing the firm's capital.
Shareholders of the company will receive a positive value added when the return from the capital
employed in the business operations is greater than the cost of that capital; see Working capital
management. Any value obtained by employees of the company or by product users is not
included in the calculations.
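To make the arithmetic concrete, here is a minimal sketch in Python. All figures are hypothetical, and the helper names (`eva`, `roic`) are illustrative choices, not a standard API:

```python
# Minimal EVA sketch. All figures are hypothetical illustrations;
# the helper names (roic, eva) are our own, not a standard library API.

def roic(nopat: float, capital: float) -> float:
    """Return on Invested Capital, r = NOPAT / K."""
    return nopat / capital

def eva(nopat: float, capital: float, wacc: float) -> float:
    """EVA = NOPAT - c*K, equivalently (r - c) * K."""
    return nopat - wacc * capital

# Example: NOPAT of 150, capital employed of 1,000, WACC of 10%.
nopat, capital, wacc = 150.0, 1000.0, 0.10
r = roic(nopat, capital)                 # 0.15, i.e. a 15% return on capital
value_added = eva(nopat, capital, wacc)  # 150 - 0.10 * 1000 = 50
print(r, value_added)                    # 0.15 50.0
```

A positive result (here 50) means the return on capital exceeds its cost, so value is created for shareholders; a negative result would mean value destruction.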
[edit] Relationship to Market Value Added
The firm's market value added, or MVA, is the discounted sum of all future expected economic
value added:

MVA = Σ_t EVA_t / (1 + c)^t, summed over all future periods t.

Note that MVA = NPV of the company.
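This relationship can be sketched numerically: discounting a stream of expected EVA figures at the cost of capital gives MVA. The figures and the function name below are hypothetical illustrations:

```python
# Hypothetical sketch: MVA as the present value of expected future EVA,
# discounted at the cost of capital c. The function name is our own.

def mva(expected_eva, wacc):
    """Sum of EVA_t / (1 + c)^t over future periods t = 1, 2, ..."""
    return sum(e / (1 + wacc) ** t for t, e in enumerate(expected_eva, start=1))

# Three years of expected EVA of 50 each, discounted at a 10% WACC.
print(round(mva([50.0, 50.0, 50.0], 0.10), 2))  # 124.34
```

In the limit of a constant EVA stream continuing forever, the sum approaches EVA / c, the familiar perpetuity formula.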

Culture of India
From Wikipedia, the free encyclopedia
Jump to:navigation, search
A Kathakali performer as Krishna. One of the eight major Indian classical dances, Kathakali is
more than 1,500 years old, and its themes are heavily influenced by the Puranas.[1]

The culture of India has been shaped not only by its long history, unique geography and diverse
demography, but also by its ancient heritages, which were formed during the Indus Valley
Civilization and evolved further during the Vedic age, rise and decline of Buddhism, the Golden age,
invasions from Central Asia, European colonization and the rise of Indian nationalism.
The languages, religions, dance, music, architecture and its customs differ from place to place
within the country, but nevertheless possess a commonality. The culture of India is an
amalgamation of diverse sub-cultures spread all over the country and traditions that are several
millennia old.
[edit] Religion
Close-up of a statue depicting Maitreya at the Thikse Monastery in Ladakh, India.
Dharmic religions such as Hinduism and Buddhism are indigenous to India.[2]

Main articles: Religion in India and Indian religions

India is the birthplace of Dharmic religions such as Hinduism, Buddhism, Jainism and Sikhism.[3]
Dharmic religions, also known as Indian religions, are a major grouping of world religions
alongside the Abrahamic ones. Today, Hinduism and Buddhism are the world's third- and
fourth-largest religions respectively, with around 1.4 billion followers altogether.
India is one of the most religiously diverse nations in the world, with some of the most deeply
religious societies and cultures. Religion still plays a central and definitive role in the life of most
of its people.
The religion of 80% of the people is Hinduism. Islam is practiced by around 13% of all Indians.[4]
Sikhism, Jainism and especially Buddhism are influential not only in India but across the world.
Christianity, Zoroastrianism, Judaism and the Bahá'í Faith are also influential but their numbers are
smaller. Despite the strong role of religion in Indian life, atheism and agnosticism also have a
visible influence, along with a self-ascribed tolerance of other people.
[edit] Society
[edit] Overview
According to Eugene M. Makar, traditional Indian culture is defined by relatively strict social
hierarchy. He also mentions that from an early age, children are reminded of their roles and
places in society.[5] This is reinforced by the fact that many believe gods and spirits have an
integral and functional role in determining their life.[5] Several differences such as religion divide
the culture.[5] However, a far more powerful division is the traditional Hindu bifurcation into non-
polluting and polluting occupations.[5] Strict social taboos have governed these groups for thousands
of years.[5] In recent years, particularly in cities, some of these lines have blurred and sometimes
even disappeared.[5] The nuclear family is becoming central to Indian culture. Important family
relations extend as far as gotra, the mainly patrilineal lineage or clan assigned to a Hindu at
birth.[5] In rural areas, and sometimes in urban areas as well, it is common for three or four
generations of the family to live under the same roof.[5] The patriarch often resolves family
issues.[5]
Among developing countries, India has low levels of occupational and geographic mobility.
People tend to choose the same occupations as their parents and rarely move geographically
within the country.[6]
During the nationalist movement, pretentious behaviour was something to be avoided.
Egalitarian behaviour and social service were promoted while nonessential spending was disliked
and spending money for ‘showing off’ was deemed a vice. This image continues in politics, with
many politicians wearing simple-looking, traditionally rural clothes such as the traditional
‘kurta-pyjama’ and the ‘Gandhi topi’.
[edit] Family
Main articles: Hindu joint family, Arranged marriage in India, and Women in India

A bride during a traditional Punjabi Hindu wedding ceremony.

Family plays a significant role in the Indian culture. For generations, India has had a prevailing
tradition of the joint family system. It is a system under which extended members of a family -
parents, children, the children’s spouses and their offspring, etc. - live together. Usually, the
eldest male member is the head in the joint Indian family system. He makes all important
decisions and rules, and other family members abide by them.
[edit] Marriage
For centuries, arranged marriages have been the tradition in Indian society. Even today, the vast
majority of Indians have their marriages planned by their parents and other respected family-
members, with the consent of the bride and groom.[7] Arranged matches are made after taking
into account factors such as age, height, personal values and tastes, the backgrounds of their
families (wealth, social standing) and their castes and the astrological compatibility of the
couples' horoscopes.
In India, marriage is thought to be for life,[8] and the divorce rate is extremely low: 1.1%,
compared with about 50% in the United States.[9] Arranged marriages generally have a much
lower divorce rate, though divorce rates have risen significantly in recent years:
"Opinion is divided over what the phenomenon means: for traditionalists the
rising numbers portend the breakdown of society while, for some modernists,
they speak of a healthy new empowerment for women."[10]

Although child marriage was outlawed in 1860, its practice continues in some rural parts of
India.[11] According to UNICEF’s “State of the World’s Children-2009” report, 47% of India's
women aged 20–24 were married before the legal age of 18, with 56% in rural areas.[12] The
report also showed that 40% of the world's child marriages occur in India.[13]
[edit] Names and language
Indian names are based on a variety of systems and naming conventions, which vary from region to
region. Names are also influenced by religion and caste and may come from the Indian epics.
India's population speaks a wide variety of languages.
[edit] Gender equality
Although women and men are equal before the law and the trend toward gender equality has
been noticeable, women and men still occupy distinct functions in Indian society. Women's role
in society is often to perform household work and pro bono community work.[5] This low rate of
participation has ideological and historical reasons. Women and women's issues appear only
7–14% of the time in news programs.[5] In most Indian families, women do not own any property
in their own names and do not get a share of parental property.[14] Due to weak enforcement of
the laws protecting them, women continue to have little access to land and property.[15]
In many families, especially rural ones, the girls and women face nutritional discrimination
within the family, and are anaemic and malnourished.[14] They still lag behind men in terms of
income and job status. Traditional Hindu art, such as Rangoli (or Kolam), is very popular among
Indian women. Popular and influential women's magazines include Femina, Grihshobha,
Woman's Era and Savvy.
[edit] Animals

Cows depicted in the decorated goppuram of the Kapaleeshwarar temple in Chennai

See also: Wildlife of India, Animal husbandry in India, and Cattle in religion

The varied and rich wildlife of India has had a profound impact on the region's popular culture.
The common name for wilderness in India is jungle, a word the British colonialists adopted into
the English language. The word was also made famous in The Jungle Book by Rudyard
Kipling. India's wildlife has been the subject of numerous other tales and fables such as the
Panchatantra and the Jataka tales.[16]
In Hinduism, the cow is regarded as a symbol of ahimsa (non-violence), mother goddess and
bringer of good fortune and wealth.[17] For this reason, cows are revered in Hindu culture and
feeding a cow is seen as an act of worship.[18]
[edit] Namaste
Namaste, Namaskar or Namaskaram or Vannakam is a common spoken greeting or salutation in
the Indian subcontinent. Namaskar is considered a slightly more formal version of namaste, but
both express deep respect. It is commonly used in India and Nepal by Hindus, Jains and Buddhists,
and many continue to use this outside the Indian subcontinent. In Indian and Nepali culture, the
word is spoken at the beginning of written or verbal communication. However, the same hands
folded gesture is made usually wordlessly upon departure. In yoga, namaste is said to mean "The
light in me honors the light in you", as spoken by both the yoga instructor and yoga students.
Taken literally, it means "I bow to you". The word is derived from Sanskrit (namas): to bow,
obeisance, reverential salutation, and respect, and (te): "to you".
When spoken to another person, it is commonly accompanied by a slight bow made with hands
pressed together, palms touching and fingers pointed upwards, in front of the chest. The gesture
can also be performed wordlessly, or while invoking another deity (e.g., "Jai Shri Krishna"), and
carries the same meaning.

Dipawali, a festival of lights, is celebrated by Hindus across India by lighting diyas and making
rangolis.

[edit] Festivals
Main article: Festivals in India

India, being a multi-cultural and multi-religious society, celebrates holidays and festivals of
various religions. The three national holidays in India, the Independence Day, the Republic Day and
the Gandhi Jayanti, are celebrated with zeal and enthusiasm across India. In addition, many states
and regions have local festivals depending on prevalent religious and linguistic demographics.
Popular religious festivals include the Hindu festivals of Navratri, Diwali, Ganesh Chaturthi,
Durga Puja, Holi, Raksha Bandhan and Dussehra. Several harvest festivals, such as Sankranthi,
Pongal, Onam and Nuakhai, are also fairly popular.
Certain festivals in India are celebrated by multiple religions. Notable examples include Diwali,
which is celebrated by Hindus, Sikhs and Jains, and Buddh Purnima, celebrated by Buddhists
and Hindus. Islamic festivals, such as Eid ul-Fitr, Eid al-Adha and Ramadan, are celebrated by
Muslims across India. Adding colors to the culture of India, the Dree Festival is one of the tribal
festivals of India celebrated by the Apatanis of the Ziro valley of Arunachal Pradesh, which is the
easternmost state of India.
[edit] Cuisine
Main article: Cuisine of India
A variety of Indian curries and vegetable dishes.

The multiple varieties of Indian cuisine are characterized by their sophisticated and subtle use of
many spices and herbs. Each family of this cuisine is characterized by a wide assortment of
dishes and cooking techniques. Though a significant portion of Indian food is vegetarian, many
traditional Indian dishes also include chicken, goat, lamb, fish, and other meats.
Food is an important part of Indian culture, playing a role in everyday life as well as in festivals.
Indian cuisine varies from region to region, reflecting the varied demographics of the ethnically
diverse subcontinent. Generally, Indian cuisine can be split into five categories: North, South,
East, West and North-eastern Indian.
Despite this diversity, some unifying threads emerge. Varied uses of spices are an integral part of
food preparation, and are used to enhance the flavor of a dish and create unique flavors and
aromas. Cuisine across India has also been influenced by various cultural groups that entered
India throughout history, such as the Persians, Mughals, and European colonists. Though the
tandoor originated in Central Asia, Indian tandoori dishes, such as chicken tikka made with Indian
ingredients, enjoy widespread popularity.[19]
Indian cuisine is one of the most popular cuisines across the globe.[20] Historically, Indian spices
and herbs were among the most sought-after trade commodities. The spice trade between India
and Europe led to the rise and dominance of Arab traders to such an extent that European
explorers, such as Vasco da Gama and Christopher Columbus, set out to find new trade routes with
India leading to the Age of Discovery.[21] The popularity of curry, which originated in India, across
Asia has often led to the dish being labeled as the "pan-Asian" dish.[22]
[edit] Clothing
A girl from Tripura sports a bindi while preparing to take part in a traditional dance
festival.

Traditional Indian clothing for women includes the sari and the Ghagra Choli (Lehenga). For
men, traditional clothes include the dhoti/pancha/veshti and the kurta. In some rural parts of
India, traditional clothing is still mostly worn. In southern India the men wear a long, white
sheet of cloth called a dhoti. Over the dhoti, men wear shirts, t-shirts, or anything else. Women
wear a sari, a long sheet of colourful cloth with patterns, draped over a simple or fancy blouse.
This is worn by young women and adult women alike. Little girls wear a pavada, a long skirt
worn under a blouse.
A bindi is part of women's make-up. Traditionally, the red bindi (or sindhur) was worn only by
married Hindu women, but it has now become a part of women's fashion. A bindi is also worn by
some as a third eye, which sees what the other eyes cannot and is reputed to protect the brain
from the outside and the sun.[23] Indo-western clothing is the fusion of Western and
Subcontinental fashion.
Delhi is considered to be India's fashion capital, housing the annual Fashion weeks.
[edit] Literature
[edit] History
Main article: Indian literature
Rabindranath Tagore, Asia's first Nobel laureate.[24]

The earliest works of Indian literature were orally transmitted. Sanskrit literature begins
with the Rig Veda, a collection of sacred hymns dating to the period 1500–1200 BCE. The
Sanskrit epics Ramayana and Mahabharata appeared towards the end of the first millennium
BCE. Classical Sanskrit literature flourished in the first few centuries of the first millennium CE.
Tamil literature begins with the Sangam literature, a corpus of classical poems commonly dated
to roughly 300 BCE–300 CE. Celebrated early Tamil works such as the Tolkappiyam and the
Thirukkural appeared in broadly the same era. Classical Tamil literature flourished in the first
few centuries of the first millennium CE.
In the medieval period, literature in Kannada and Telugu appeared in the 9th and 11th centuries
respectively,[25] followed by the first Malayalam works in the 12th century. During this time,
literature in Tamil, Bengali, Marathi, various dialects of Hindi, and Urdu began to appear as
well.
Some of the most important authors from India are Rabindranath Tagore, Ramdhari Singh
'Dinkar', Subramania Bharati, Kuvempu, Bankim Chandra Chattopadhyay, Michael Madhusudan
Dutt, Munshi Premchand, Muhammad Iqbal and Devaki Nandan Khatri. In contemporary India,
writers who have received critical acclaim include Girish Karnad, Agyeya, Nirmal Verma,
Kamleshwar, Vaikom Muhammad Basheer, Indira Goswami, Mahasweta Devi, Amrita Pritam,
Maasti Venkatesh Ayengar, Qurratulain Hyder and Thakazhi Sivasankara Pillai.
In contemporary Indian literature, there are two major literary awards: the Sahitya Akademi
Fellowship and the Jnanpith Award. Seven Jnanpith Awards have been won by writers in
Kannada, six in Hindi, five in Bengali, four in Malayalam, and three each in Marathi, Gujarati,
Urdu and Oriya.[26]
[edit] Poetry
Main article: Indian poetry

Illustration of the Battle of Kurukshetra. With more than 74,000 verses, long prose
passages, and about 1.8 million words in total, the Mahābhārata is one of the longest
epic poems in the world.

India has had strong traditions of poetry, as well as prose compositions, ever since the Rigveda. Poetry
is often closely related to musical traditions, and much of poetry can be attributed to religious
movements. Writers and philosophers were often also skilled poets. In modern times, poetry has
served as an important non-violent tool of nationalism during the Indian freedom movement. A
famous modern example of this tradition can be found in figures such as Rabindranath Tagore,
Kuvempu and K. S. Narasimhaswamy, in medieval poets such as Basava (vachanas), Kabir and
Purandaradasa (padas and devaranamas), and in the epics of ancient times. Two of Tagore's
poems serve as the national anthems of both India and Bangladesh.

[edit] Epics
The Ramayana and Mahabharata are the oldest preserved and well-known epics of India. Versions
have been adopted as the epics of Southeast Asian countries like Thailand, Malaysia and Indonesia.
In addition, there are five epics in the classical Tamil language: Silappadhikaram, Manimegalai,
Civaka Cintamani, Valaiyapathi and Kundalakesi.
Other regional variations of these, as well as unrelated epics include the Tamil Kamba
Ramayanam, in Kannada, the Pampa Bharata by Adikavi Pampa, Torave Ramayana by Kumara
Valmiki and Karnata Bharata Katha Manjari by Kumaravyasa, Hindi Ramacharitamanasa, and
Malayalam Adhyathmaramayanam.

[edit] Performing arts


[edit] Music

Panchavadyam temple music in Kerala.

Main article: Music of India

The music of India includes multiple varieties of religious, folk, popular and classical music.
The oldest preserved examples of Indian music are the melodies of the Samaveda that are still
sung in certain Vedic Śrauta sacrifices. India's classical music tradition is heavily influenced by
Hindu texts. It includes two distinct styles, Carnatic and Hindustani music, and is noted for the
use of ragas (melodic modes). It has a history spanning millennia and was developed over
several eras. It remains instrumental to religious inspiration, cultural expression and pure
entertainment.
Purandaradasa is considered the "father of carnatic music" (Karnataka sangeeta pitamaha).[27][28]
[29]
He concluded his songs with a salutation to Lord Purandara Vittala and is believed to have
composed as many as 475,000 songs in the Kannada language.[30] However, only about 1000 are
known today.[27][31]
[edit] Dance
Main article: Indian dance
Odissi dancer in front of the Konark Sun Temple.

Indian dance too has diverse folk and classical forms. Among the well-known folk dances are the
bhangra of the Punjab, the bihu of Assam, the chhau of Jharkhand and Orissa, the ghoomar of
Rajasthan, the dandiya and garba of Gujarat, the Yakshagana of Karnataka and lavani of
Maharashtra and Dekhnni of Goa. Eight dance forms, many with narrative forms and mythological
elements, have been accorded classical dance status by India's National Academy of Music,
Dance, and Drama. These are: bharatanatyam of the state of Tamil Nadu, kathak of Uttar Pradesh,
kathakali and mohiniattam of Kerala, kuchipudi of Andhra Pradesh, manipuri of Manipur, odissi of the
state of Orissa and the sattriya of Assam.[32][33]
Kalarippayattu, or Kalari for short, is considered one of the world's oldest martial arts. It is
preserved in texts such as the Mallapurana. Kalari and other martial arts formed later have been
assumed by some to have traveled to China, like Buddhism, and to have eventually developed
into Kung-fu. Other later martial arts are Gatka, Pehlwani and Malla-yuddha.
[edit] Drama and theater
Natyacarya Mani Madhava Chakyar as Ravana in Bhasa's Abhiṣeka Nataka (Kutiyattam),
one of the oldest surviving drama traditions of the world.

Main article: Theatre in India

Indian drama and theater has a long history alongside its music and dance. Kalidasa's plays like
Shakuntala and Meghadoota are some of the older plays, following those of Bhasa. One of the
oldest surviving theatre traditions of the world is the 2,000-year-old Kutiyattam of Kerala. It
strictly follows the Natya Shastra.[34] The natakas of Bhasa are very popular in this art form.
The late Nātyāchārya Padma Shri Māni Mādhava Chākyār, the unrivaled maestro of this art form and of
Abhinaya,[citation needed] revived the age-old drama tradition from extinction. He was known for his
mastery of Rasa Abhinaya. He performed Kalidasa plays like Abhijñānaśākuntala,
Vikramorvaśīya and Mālavikāgnimitra, as well as Bhasa's Swapnavāsavadatta and Pancharātra and Harsha's
Nagananda, in Kutiyattam form.[35][36]
The tradition of folk theater is popular in most linguistic regions of India. In addition, there is a
rich tradition of puppet theater in rural India, going back to at least the second century BCE. (It is
mentioned in Patanjali's commentary on Panini). Group Theater is also thriving in the cities,
initiated by the likes of Gubbi Veeranna,[37] Utpal Dutt, Khwaja Ahmad Abbas, and K. V. Subbanna
and still maintained by groups like Nandikar, Ninasam and Prithvi Theatre.
[edit] Visual arts
Main article: Indian art

[edit] Painting
Main article: Indian painting

The Jataka tales from Ajanta Caves.

The earliest Indian paintings were the rock paintings of prehistoric times, the petroglyphs found
in places like Bhimbetka, some of which go back to the Stone Age. Ancient texts outline theories
of darragh, and anecdotal accounts suggest that it was common for households to paint their
doorways or indoor rooms where guests resided.
Cave paintings from Ajanta, Bagh, Ellora and Sittanavasal and temple paintings testify to a love of
naturalism. Most early and medieval art in India is Hindu, Buddhist or Jain. A freshly made
coloured flour design (rangoli) is still a common sight outside the doorstep of many Indian homes,
particularly in South India. Raja Ravi Varma is one of the notable classical painters of India.
Madhubani painting, Mysore painting, Rajput painting, Tanjore painting and Mughal painting are some
notable genres of Indian art, while Nandalal Bose, M. F. Husain, S. H. Raza, Geeta Vadhera, Jamini
Roy and B. Venkatappa[38] are some modern painters. Among present-day artists, Atul Dodiya,
Bose Krishnamachari, Devajyoti Ray and Shibu Natesan represent a new era of Indian art in
which global art shows direct amalgamation with Indian classical styles. These recent artists
have acquired international recognition. The Jehangir Art Gallery in Mumbai and the Mysore Palace
have a few good Indian paintings on display.
[edit] Sculpture
Main article: Sculpture in India

Hindu sculptures at the famous Khajuraho temple in Madhya Pradesh.

The first sculptures in India date back to the Indus Valley civilization, where stone and bronze
figures have been discovered. Later, as Hinduism, Buddhism, and Jainism developed further, India
produced some extremely intricate bronzes as well as temple carvings. Some huge shrines, such
as the one at Ellora, were not constructed from blocks but carved out of solid rock.
Sculptures produced in the northwest, in stucco, schist, or clay, display a very strong blend of
Indian and Classical Hellenistic or possibly even Greco-Roman influence. The pink sandstone
sculptures of Mathura evolved almost simultaneously. During the Gupta period (4th to 6th century)
sculpture reached a very high standard in execution and delicacy in modeling. These styles and
others elsewhere in India evolved, leading to classical Indian art that contributed to Buddhist and
Hindu sculpture throughout Southeast, Central and East Asia.
[edit] Architecture
Main article: Indian architecture
The Umaid Bhawan Palace in Rajasthan, one of the largest private residences in the
world.[39]

Indian architecture encompasses a multitude of expressions over space and time, constantly
absorbing new ideas. The result is an evolving range of architectural production that nonetheless
retains a certain amount of continuity across history. Some of its earliest productions are found in
the Indus Valley Civilization (2600–1900 BCE), which is characterised by well-planned cities and
houses. Religion and kingship do not seem to have played an important role in the planning and
layout of these towns.
During the period of the Mauryan and Gupta empires and their successors, several Buddhist
architectural complexes, such as the caves of Ajanta and Ellora and the monumental Sanchi Stupa
were built. Later on, South India produced several Hindu temples like Chennakesava Temple at
Belur, the Hoysaleswara Temple at Halebidu, and the Kesava Temple at Somanathapura,
Brihadeeswara Temple, Thanjavur, the Sun Temple, Konark, Sri Ranganathaswamy Temple at
Srirangam, and the Buddha stupa (Chinna Lanja dibba and Vikramarka kota dibba) at Bhattiprolu.
Angkor Wat, Borobudur and other Buddhist and Hindu temples indicate strong Indian influence on
South East Asian architecture, as they are built in styles almost identical to traditional Indian
religious buildings.

Akshardham in Delhi, the largest Hindu temple in the world.

The traditional system of Vaastu Shastra serves as India's version of Feng Shui, influencing town
planning, architecture, and ergonomics. It is unclear which system is older, but they contain
certain similarities. Feng Shui is more commonly used throughout the world. Though Vastu is
conceptually similar to Feng Shui in that it also tries to harmonize the flow of energy, (also called
life-force or Prana in Sanskrit and Chi/Ki in Chinese/Japanese), through the house, it differs in the
details, such as the exact directions in which various objects, rooms, materials, etc. are to be
placed.
With the advent of Islamic influence from the west, Indian architecture was adapted to accommodate the
traditions of the new religion. Fatehpur Sikri, the Taj Mahal, Gol Gumbaz, Qutub Minar and the Red Fort of Delhi
are creations of this era, and are often used as the stereotypical symbols of India. The colonial
rule of the British Empire saw the development of Indo-Saracenic style, and mixing of several
other styles, such as European Gothic. The Victoria Memorial or the Chhatrapati Shivaji Terminus are
notable examples.
Indian architecture has influenced eastern and southeastern Asia, due to the spread of Buddhism.
A number of Indian architectural features such as the temple mound or stupa, temple spire or
sikhara, temple tower or pagoda and temple gate or torana, have become famous symbols of
Asian culture, used extensively in East Asia and South East Asia. The central spire is also
sometimes called a vimanam. The southern temple gate, or gopuram is noted for its intricacy and
majesty.
Contemporary Indian architecture is more cosmopolitan. Cities are extremely compact and
densely populated. Mumbai's Nariman Point is famous for its Art Deco buildings. Recent creations
such as the Lotus Temple, and the various modern urban developments of India like Chandigarh,
are notable.
[edit] Recreation and sports
Main article: Sports in India

See also: kabaddi and Indian chess

The annual snake boat race is performed during Onam Celebrations on the Pamba River at
Aranmula near Pathanamthitta.

In the area of recreation and sports, India has evolved a number of games. The modern eastern
martial arts originated as ancient games and martial arts in India, and it is believed by some that
these games were transmitted to foreign countries, where they were further adapted and
modernized. Traditional indigenous sports include kabaddi and gilli-danda, which are played in
most parts of the country.
A few games introduced during the British Raj have grown quite popular in India: field hockey,
football (soccer) and especially cricket. Although field hockey is India's official national sport,
cricket is by far the most popular sport not only in India but in the entire subcontinent, thriving
recreationally and professionally. Cricket has even been used recently as a forum for diplomatic
relations between India and Pakistan. The two nations' cricket teams face off annually and such
contests are quite impassioned on both sides. Polo is also popular.
Indoor and outdoor games like chess, snakes and ladders, playing cards, carrom and badminton are
popular. Chess was invented in India.
Games of strength and speed flourished in India. In ancient India stones were used for weights,
marbles, and dice. Ancient Indians competed in chariot racing, archery, horsemanship, military tactics,
wrestling, weight lifting, hunting, swimming and running races.

[edit] Popular media


[edit] Television
Main article: Television in India

See also: List of Indian television stations

Indian television started in 1959 in New Delhi with tests for educational telecasts.[40] Indian
small-screen programming began in the mid-1970s. At that time there was only one national
channel, the government-owned Doordarshan. 1982 saw a revolution in TV programming in
India: with the New Delhi Asian Games, India saw the introduction of colour television that year. The
Ramayana and Mahabharat were among the popular television series produced. By the late
1980s more and more people started to own television sets. Though there was a single channel,
television programming had reached saturation. Hence the government opened up another
channel, with part national and part regional programming. This channel was known as DD 2
and later as DD Metro. Both channels were broadcast terrestrially.
In 1991, the government liberalized its markets, opening them up to cable television. Since then,
there has been a spurt in the number of channels available. Today, Indian television is a huge
industry by itself, with thousands of programmes in all the states of India. The small screen
has produced numerous celebrities of its own, some even attaining national fame for
themselves. TV soaps are extremely popular with housewives as well as working women, and
even men of all kinds. Some lesser-known actors have found success in Bollywood. Indian TV
now has many of the same channels as Western TV, including stations such as Cartoon Network,
Nickelodeon, and MTV India.

[edit] Cinema
Main article: Cinema of India
Shooting of a Bollywood dance number.

Bollywood is the informal name given to the popular Mumbai-based film industry in India.
Bollywood and the other major cinematic hubs (in Bengali, Kannada, Malayalam, Marathi, Tamil,
Punjabi and Telugu) constitute the broader Indian film industry, whose output is considered to be the
largest in the world in terms of number of films produced and number of tickets sold.
India has produced many critically acclaimed filmmakers like K. Vishwanath, Bapu,
Jagdaman Grewal, Satyajit Ray, Ritwik Ghatak, Guru Dutt, Adoor Gopalakrishnan,
Girish Kasaravalli, Shekhar Kapoor, Hrishikesh Mukherjee, Shankar Nag, Girish Karnad, G. V. Iyer, etc.
(See Indian film directors). With the opening up of the economy in the recent years and consequent
exposure to world cinema, audience tastes have been changing. In addition, multiplexes have
mushroomed in most cities, changing the revenue patterns.
SMALL INDUSTRIES DEVELOPMENT ORGANISATION (SIDO)

ORGANISATIONAL STRUCTURE OF SIDO

Small Industries Development Organisation (SIDO), an apex body at the Central level for
formulating policy for the development of Small Scale Industries in the country, is
headed by the Additional Secretary & Development Commissioner (Small Scale
Industries) under the Ministry of Small Scale Industries, Govt. of India.
SIDO is playing a very constructive role in strengthening this vital sector, which has
proved to be one of the strong pillars of the economy of the country. It functions
through a network of field offices, namely 30 SISIs, 28 Br. SISIs, 4 RTCs, 7 FTSs, and
various training and production centres and specialized institutes spread over
different parts of the country. It renders services in the following areas:
• Advising the Govt. in policy matters concerning the small scale sector.
• Providing techno-economic and managerial consultancy, common facilities and
extension services.
• Providing facilities for technology up-gradation, modernization, quality
improvement & infrastructure.
• Human resources development through training and skill up-gradation.
• Providing economic information services.
• Maintaining close liaison and vital linkage with the Central Ministries, Planning
Commission, Financial Institutions, State Govts. & similar other developmental
organizations/agencies related to the promotion and development of the SSI Sector.
• Evolving and coordinating policies for development of ancillaries.
• Monitoring of the PMRY Scheme.
• Monitoring the working of different Tool Rooms & PPDCs.
Convertible bond

In finance, a convertible note (or, if it has a maturity of greater than 10 years, a
convertible debenture) is a type of bond that the holder can convert into shares of
common stock in the issuing company or cash of equal value, at an agreed-upon price. It
is a hybrid security with debt- and equity-like features. Although it typically has a low
coupon rate, the instrument carries additional value through the option to convert the
bond to stock, and thereby participate in further growth in the company's equity value.
The investor receives the potential upside of conversion into equity while protecting the
downside with cash flow from the coupon payments.
From the issuer's perspective, the key benefit of raising money by selling convertible
bonds is a reduced cash interest payment. However, in exchange for the benefit of
reduced interest payments, the value of shareholders' equity is reduced due to the stock
dilution expected when bondholders convert their bonds into new shares.
The convertible bond markets in the United States and Japan are of primary global
importance.
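The conversion trade-off described above can be sketched with a small numerical example. The figures below (a $1,000 face value and a conversion ratio of 20 shares, i.e. a $50 conversion price) are hypothetical, and the sketch deliberately ignores coupons, credit risk and early conversion:

```python
def convertible_payoff(face_value, conversion_ratio, stock_price):
    """Approximate value of a convertible bond at maturity.

    A rational holder converts only when the shares received are worth
    more than the bond's redemption value, so the payoff is the greater
    of the two (illustrative model only).
    """
    conversion_value = conversion_ratio * stock_price
    return max(face_value, conversion_value)

# Stock below the $50 conversion price: redeem the bond instead.
print(convertible_payoff(1000, 20, 40))   # 1000
# Stock above the conversion price: convert into 20 shares.
print(convertible_payoff(1000, 20, 65))   # 1300
```

This is why the instrument is called a hybrid: below the conversion price it behaves like debt (a floor at face value), above it the payoff rises with the equity.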
• ILO Constitution
• ILO Convention No. 29 : Forced Labour Convention, 1930
• ILO Convention No. 81 : Labour Inspection Convention, 1947
• ILO Convention No. 87 : Freedom of Association and Protection of the Right to Organise, 1948
• ILO Convention No. 97 : Migration for Employment Convention, 1949
• ILO Convention No. 98 : Right to Organise and Collective Bargaining Convention, 1949
• ILO Convention No. 100 : Equal Remuneration Convention, 1951
• ILO Convention No. 102 : Social Security (Minimum Standards) Convention, 1952
• ILO Convention No. 105 : Abolition of Forced Labour Convention, 1957
• ILO Convention No. 111 : Discrimination (Employment and Occupation) Convention, 1958
• ILO Convention No. 115 : Radiation Protection Convention, 1960
• ILO Convention No. 122 : Employment Policy Convention, 1964
• ILO Convention No. 129 : Labour Inspection (Agriculture) Convention, 1969
• ILO Convention No. 138 : Minimum Age Convention, 1973
• ILO Convention No. 143 : Migrant Workers (Supplementary Provisions) Convention, 1975
• ILO Convention No. 144 : Tripartite Consultation (International Labour Standards) Convention, 1976
• ILO Convention No. 155 : Occupational Safety and Health Convention, 1981
• ILO Convention No. 158 : Termination of Employment Convention, 1982
• ILO Convention No. 161 : Occupational Health Services Convention, 1985
• ILO Convention No. 182 : Worst Forms of Child Labour Convention, 1999
• ILO Convention No. 187 : Promotional Framework for Occupational Safety and Health Convention, 2006
• ILO Declaration of Philadelphia
• ILO Declaration on Fundamental Principles and Rights at Work, 1998
• ILO Declaration on Social Justice for a Fair Globalization
United Nations Global Compact

The United Nations Global Compact, also known as Compact or UNGC, is a United Nations
initiative to encourage businesses worldwide to adopt sustainable and socially responsible policies,
and to report on their implementation. The Global Compact is a principle-based framework for
businesses, stating ten principles in the areas of human rights, labour, the environment and
anti-corruption. Under the Global Compact, companies are brought together with UN agencies, labour
groups and civil society.
The Global Compact is the world's largest corporate citizenship initiative and, as a voluntary
initiative, has two objectives: "Mainstream the ten principles in business activities around the
world" and "Catalyse actions in support of broader UN goals, such as the Millennium Development
Goals (MDGs)."[1]
The Global Compact was first announced by the then UN Secretary-General Kofi Annan in an
address to the World Economic Forum on January 31, 1999, and was officially launched at UN
Headquarters in New York on July 26, 2000.
The Global Compact Office is supported by six UN agencies: the United Nations High
Commissioner for Human Rights; the United Nations Environment Programme; the International Labour
Organization; the United Nations Development Programme; the United Nations Industrial Development
Organization; and the United Nations Office on Drugs and Crime.
The Ten Principles
The Global Compact was initially launched with nine principles. On June 24, 2004, during the first
Global Compact Leaders Summit, Kofi Annan announced the addition of a tenth principle
against corruption. This step followed an extensive consultation process with all Global Compact
participants.
Human Rights
Businesses should:
• Principle 1: Support and respect the protection of internationally proclaimed
human rights; and
• Principle 2: Make sure that they are not complicit in human rights abuses.
Labour Standards
Businesses should uphold:
• Principle 3: the freedom of association and the effective recognition of the right to
collective bargaining;
• Principle 4: the elimination of all forms of forced and compulsory labour;
• Principle 5: the effective abolition of child labour; and
• Principle 6: the elimination of discrimination in employment and occupation.
Environment
Businesses should:
• Principle 7: support a precautionary approach to environmental challenges;
• Principle 8: undertake initiatives to promote environmental responsibility; and
• Principle 9: encourage the development and diffusion of environmentally
friendly technologies.
Anti-Corruption
• Principle 10: Businesses should work against corruption in all its forms,
including extortion and bribery.

[edit] Facilitation
The Global Compact is not a regulatory instrument, but rather a forum for discussion and a
network for communication including governments; companies and labour organisations, whose
actions it seeks to influence; and civil society organizations, representing its stakeholders.
The Compact itself says that once companies have declared their support for the Global Compact
principles, "this does not mean that the Global Compact recognizes or certifies that these
companies have fulfilled the Compact’s principles."
The Compact's goals are intentionally flexible and vague, but it distinguishes the following
channels through which it provides facilitation and encourages dialogue: policy dialogues,
learning, local networks and projects.
The first Global Compact Leaders Summit, chaired by the then Secretary-General Kofi Annan,
was held in UN Headquarters in New York on June 24, 2004. It aimed to bring "intensified
international focus and increased momentum" to the Global Compact. On the eve of the
conference, delegates were invited to attend the first Prix Ars Electronica Digital Communities
award ceremony, which was co-hosted by a representative from the UN.
The second Global Compact Leaders Summit, chaired by Secretary-General Ban Ki-moon, was
held on 5–6 July 2007 at the Palais des Nations in Geneva, Switzerland. It adopted the Geneva
Declaration on corporate responsibility.

[edit] The UN Global Compact - Cities Programme


The UN Global Compact - Cities Programme was launched in 2002 by the then UN Secretary-
General Kofi Annan. It was formed as an urban-focused component of the Global Compact with
its International Secretariat located in Melbourne, Australia. The aim of the Cities Programme is to
improve urban life in cities throughout the world.
The formation of the Programme goes back to early 2001 when the City of Melbourne proposed
that cities as well as corporations should be allowed and encouraged to engage the UN Global
Compact. Melbourne argued that this would engender a clear statement of a city's civic, cultural
and corporate commitment to positive change, as well as motivating participation in international
dialogue. The Global Compact office in New York accepted the proposal and Melbourne became
the first city to engage the Global Compact in June 2001. This provided an opportunity for the Ten
Principles of the Global Compact to be translated into meaningful outcomes within a city
(rather than just an organization).
In April 2003 under the directorship of David Teller, a simple framework called the Melbourne
Model was developed that entailed more than just signing onto the Ten Principles. It begins by
drawing the resources of government, business and civil society into a cross-sector partnership in
order to develop a practical project that addresses a seemingly intractable urban issue. For
example, Porto Alegre is tackling the problem of developing infrastructure and utilities for slum
dwellers.
Member cities include Al Salt, Berlin, Jinan, Melbourne, Le Havre, Plock, Porto Alegre, San
Francisco, Tshwane and Ulan Bator.
In 2007, the International Secretariat moved from the Committee for Melbourne to the Global
Cities Institute at RMIT University, itself affiliated with UN-HABITAT. There, projects associated
with city-based responses to global climate change and globalization have become increasingly
important. The Melbourne Model has been further elaborated, with a sustainability indicators
program developed as a way of assessing and monitoring progress.[2]
[edit] UN Global Compact In Syria
The Syria initiative aims at enhancing civic engagement and corporate social responsibility of
private sector by promoting the ten principles of the UN Global Compact as well as forging
partnerships between private sector organizations, public sector institutions and civil society.
This initiative is a partnership between the Syrian Government represented by the State Planning
Commission and the UNDP Country Office in Syria. It was launched under the patronage of the
Head of State Planning Commission and in the presence of the Deputy Chairperson of the UN
Global Compact, in July 2008.
The Syria Local Network has 26 businesses, 5 NGOs, and 5 federations of commerce and
industry. It was displayed among 10 selected networks from around the world at the Global Compact
Sixth Annual Local Networks Forum. The Syria story was called a “leadership case” and the
Syria Network's growth ratio was ranked first among the global top ten in 2008.[3]
The UNGC National Advisory Council has been formed and held its founders’ meeting on
October 15, 2008, with the participation of leaders from the Syrian private sector, international
corporate representatives, local and international civil society organizations, UNDP, the Syrian
Government, media and education sectors.
[edit] Criticism
Many civil society organizations believe that without any effective monitoring and enforcement
provisions, the Global Compact fails to hold corporations accountable.[4] Moreover, these
organizations argue that companies can misuse the Global Compact as a public relations
instrument for "bluewash"[5], as an excuse and argument to oppose any binding international
regulation on corporate accountability, and as an entry door to increase corporate influence on
the policy discourse and the development strategies of the United Nations.[6]
[edit] Global Compact Critics
An informal network of organizations and people with concerns about the UN Global Compact,
called Global Compact Critics, levels a variety of criticisms at the Global Compact:
• The compact contains no mechanisms to sanction member companies for
non-compliance with the Compact's principles;
• A corporation’s continued participation is not dependent on demonstrated
progress;
• The Global Compact has admitted companies with dubious humanitarian and
environmental records in contrast with the principles demanded by the
Compact.
[edit] Alliance for a Corporate-Free UN
The Alliance for a Corporate-Free UN, which no longer exists, was a campaigning organization of
several international NGOs, led by Corpwatch, which highlighted weaknesses in the principles
underlying the Global Compact.
[edit] Criticism from within the United Nations
The Global Compact has been criticized by several senior UN officials and advisers. In
December 2008, Maude Barlow, senior adviser on water issues to the President of the United
Nations General Assembly, called the Global Compact "bluewashing".[7] Other vocal critics have
been David Andrews, senior adviser on Food Policy and Sustainable Development[8], and Peter
Utting, deputy director of UNRISD[9].

Corporate social responsibility


Corporate social responsibility (CSR), also known as corporate responsibility, corporate
citizenship, responsible business, sustainable responsible business (SRB), or corporate
social performance,[1] is a form of corporate self-regulation integrated into a business model.
Ideally, CSR policy would function as a built-in, self-regulating mechanism whereby business
would monitor and ensure its support to law, ethical standards, and international norms.
Consequently, business would embrace responsibility for the impact of its activities on the
environment, consumers, employees, communities, stakeholders and all other members of the
public sphere. Furthermore, CSR-focused businesses would proactively promote the public interest
by encouraging community growth and development, and voluntarily eliminating practices that
harm the public sphere, regardless of legality. Essentially, CSR is the deliberate inclusion of
public interest into corporate decision-making, and the honoring of a triple bottom line: People,
Planet, Profit.
The practice of CSR is subject to much debate and criticism. Proponents argue that there is a
strong business case for CSR, in that corporations benefit in multiple ways by operating with a
perspective broader and longer than their own immediate, short-term profits. Critics argue that
CSR distracts from the fundamental economic role of businesses; others argue that it is nothing
more than superficial window-dressing; others yet argue that it is an attempt to pre-empt the role
of governments as a watchdog over powerful multinational corporations. Corporate Social
Responsibility has been redefined throughout the years. However, it essentially serves to aid
an organization's mission as well as to guide what the company stands for and will uphold to its
consumers.
Development
Business ethics is one of the forms of applied ethics that examines ethical principles
and moral or ethical problems that can arise in a business environment.
In the increasingly conscience-focused marketplaces of the 21st century, the demand for more
ethical business processes and actions (known as ethicism) is increasing. Simultaneously, pressure
is applied on industry to improve business ethics through new public initiatives and laws (e.g.
higher UK road tax for higher-emission vehicles).
Business ethics can be both a normative and a descriptive discipline. As a corporate practice and
a career specialization, the field is primarily normative. In academia, descriptive approaches are
also taken. The range and quantity of business ethical issues reflects the degree to which business
is perceived to be at odds with non-economic social values. Historically, interest in business
ethics accelerated dramatically during the 1980s and 1990s, both within major corporations and
within academia. For example, today most major corporate websites lay emphasis on
commitment to promoting non-economic social values under a variety of headings (e.g. ethics
codes, social responsibility charters). In some cases, corporations have re-branded their core values
in the light of business ethical considerations (e.g. BP's "beyond petroleum" environmental tilt).
The term CSR came into common use in the early 1970s, after many multinational corporations
formed, although it was seldom abbreviated. The term stakeholder, meaning those on whom an
organization's activities have an impact, was used to describe corporate owners beyond
shareholders as a result of an influential book by R. Edward Freeman in 1984.[2]
ISO 26000 is the recognized international standard for CSR (currently a Draft International
Standard). Public sector organizations (the United Nations for example) adhere to the Triple
Bottom Line (TBL). It is widely accepted that CSR adheres to similar principles but with no
formal act of legislation. The UN has developed the Principles for Responsible Investment as
guidelines for investing entities.
CSR and the nature of business
Milton Friedman and others have argued that a corporation's purpose is to maximize returns to its
shareholders, and that since (in their view), only people can have social responsibilities,
corporations are only responsible to their shareholders and not to society as a whole. Although
they accept that corporations should obey the laws of the countries within which they work, they
assert that corporations have no other obligation to society. Some people perceive CSR as
incongruent with the very nature and purpose of business, and indeed a hindrance to free trade.
Those who assert that CSR contrasts with capitalism and who favor neoliberalism argue
that improvements in health, longevity and/or infant mortality have been created by economic growth
attributed to free enterprise.[15]
Critics of this argument perceive neoliberalism as opposed to the well-being of society and a
hindrance to human freedom. They claim that the type of capitalism practiced in many
developing countries is a form of economic and cultural imperialism, noting that these countries
usually have fewer labor protections, and thus their citizens are at a higher risk of exploitation by
multinational corporations.[16]
A wide variety of individuals and organizations operate in between these poles. For example, the
REALeadership Alliance asserts that the business of leadership (be it corporate or otherwise) is
to change the world for the better.[17] Many religious and cultural traditions hold that the
economy exists to serve human beings, so all economic entities have an obligation to society
(e.g., cf. Economic Justice for All). Moreover, as discussed above, many CSR proponents point out
that CSR can significantly improve long-term corporate profitability because it reduces risks and
inefficiencies while offering a host of potential benefits such as enhanced brand reputation and
employee engagement.
[edit] CSR and questionable motives
Some critics believe that CSR programs are undertaken by companies such as British American
Tobacco (BAT),[18] the petroleum giant BP (well-known for its high-profile advertising campaigns
on environmental aspects of its operations), and McDonald's (see below) to distract the public
from ethical questions posed by their core operations. They argue that some corporations start
CSR programs for the commercial benefit they enjoy through raising their reputation with the
public or with government. They suggest that corporations which exist solely to maximize profits
are unable to advance the interests of society as a whole.[19]
Another concern is when companies claim to promote CSR and be committed to Sustainable
Development whilst simultaneously engaging in harmful business practices. For example, since
the 1970s, the McDonald's Corporation's association with Ronald McDonald House has been viewed
as CSR and relationship marketing. More recently, as CSR has become mainstream, the
company has expanded its CSR programs related to its labor, environmental and other
practices.[20] All the same, in McDonald's Restaurants v Morris & Steel, Lord Justices Pill, May and
Keane ruled that it was fair comment to say that McDonald's employees worldwide 'do badly in
terms of pay and conditions'[21] and true that 'if one eats enough McDonald's food, one's diet may
well become high in fat etc., with the very real risk of heart disease.'[22]
Shell has a much-publicized CSR policy and was a pioneer in triple bottom line reporting, but this
did not prevent the 2004 scandal concerning its misreporting of oil reserves, which seriously
damaged its reputation and led to charges of hypocrisy. Since then, the Shell Foundation has
become involved in many projects across the world, including a partnership with Marks and
Spencer (UK) in three flower and fruit growing communities across Africa.
Critics concerned with corporate hypocrisy and insincerity generally suggest that better
governmental and international regulation and enforcement, rather than voluntary measures, are
necessary to ensure that companies behave in a socially responsible manner. Others, such as
Patricia Werhane, argue that CSR should be viewed instead as a Corporate Moral
Responsibility, and would limit the reach of CSR by focusing on the direct impacts of the
organization, as viewed through a systems perspective used to identify stakeholders.
[edit] Ethical consumerism
The rise in popularity of ethical consumerism over the last two decades can be linked to the rise of
CSR. As global population increases, so does the pressure on limited natural resources required
to meet rising consumer demand (Grace and Cohen 2005, 147). Industrialization is booming in
many developing countries as a result of both technology and globalization. Consumers
are becoming more aware of the environmental and social implications of their day-to-day
consumer decisions and are therefore beginning to make purchasing decisions related to their
environmental and ethical concerns. However, this practice is far from consistent or universal.
[edit] Globalization and market forces
As corporations pursue growth through globalization, they have encountered new challenges that
impose limits to their growth and potential profits. Government regulations, tariffs,
environmental restrictions and varying standards of what constitutes "labor exploitation" are
problems that can cost organizations millions of dollars. Some view ethical issues as simply a
costly hindrance, while some companies use CSR methodologies as a strategic tactic to gain
public support for their presence in global markets, helping them sustain a competitive advantage
by using their social contributions to provide a subconscious level of advertising. (Fry, Keim,
Meiners 1986, 105) Global competition places a particular pressure on multinational
corporations to examine not only their own labor practices, but those of their entire supply chain,
from a CSR perspective.
[edit] Social awareness and education
Corporate stakeholders are increasingly working collectively to pressure corporations to
change. Shareholders and investors themselves, through socially responsible investing, are
exerting pressure on corporations to behave responsibly. Non-governmental organizations are also
taking an increasing role, leveraging the power of the media and the Internet to increase their
scrutiny and collective activism around corporate behavior. Through education and dialogue, the
development of community in holding businesses responsible for their actions is growing (Roux
2007).
[edit] Ethics training
The rise of ethics training inside corporations, some of it required by government regulation, is
another driver credited with changing the behavior and culture of corporations. The aim of such
training is to help employees make ethical decisions when the answers are unclear. Tullberg
believes that humans are built with the capacity to cheat and manipulate, a view taken from
(Trivers 1971, 1985), hence the need for learning normative values and rules in human behavior
(Tullberg 1996). The most direct benefit is reducing the likelihood of "dirty hands" (Grace and
Cohen 2005), fines and damaged reputations for breaching laws or moral norms. Organizations
also see secondary benefit in increasing employee loyalty and pride in the organization.
Caterpillar and Best Buy are examples of organizations that have taken such steps (Thilmany
2007).
Increasingly, companies are becoming interested in processes that can add visibility to their CSR
policies and activities. One method that is gaining increasing popularity is the use of well-
grounded training programs, where CSR is a major issue, and business simulations can play a part
in this.[citation needed]
One relevant documentary is The Corporation, which discusses the history of organizations and
their growth in power. Corporate social responsibility, what a company does in trying to benefit
society, versus corporate moral responsibility (CMR), what a company should morally do, are
both important topics to consider when looking at ethics in CSR. For example, Ray Anderson, in
The Corporation, takes a CMR perspective in order to do what is moral, and he begins to shift his
company's focus towards the biosphere by producing carpet in modular sections so that it lasts
longer. This is Anderson thinking in terms of Garrett Hardin's "The Tragedy of the
Commons," where if people do not pay attention to the private ways in which we use public
resources, people will eventually lose those public resources.
[edit] Laws and regulation
Another driver of CSR is the role of independent mediators, particularly the government, in
ensuring that corporations are prevented from harming the broader social good, including people
and the environment. CSR critics such as Robert Reich argue that governments should set the
agenda for social responsibility by way of laws and regulation that will allow businesses to
conduct themselves responsibly.
The issues surrounding government regulation pose several problems. Regulation in itself is
unable to cover every aspect of a corporation's operations in detail. This leads to burdensome
legal processes bogged down in interpretations of the law and debatable grey areas (Sacconi
2004). General Electric is an example of a corporation that has failed to clean up the Hudson River
after contaminating it with organic pollutants. The company continues to argue via the legal
process on assignment of liability, while the cleanup remains stagnant. (Sullivan & Schiafo
2005).
The second issue is the financial burden that regulation can place on a nation's economy. This
view is shared by Bulkeley, who cites the Australian federal government's actions to avoid
compliance with the Kyoto Protocol in 1997, citing concerns of economic loss and national
interest. The Australian government took the position that signing the Kyoto Pact would have
caused more significant economic losses for Australia than for any other OECD nation (Bulkeley
2001, pg. 436). On the change of government following the election in November 2007, Prime
Minister Kevin Rudd signed the ratification immediately after assuming office on 3 December
2007, just before the meeting of the UN Framework Convention on Climate Change. Critics of
CSR also point out that organisations pay taxes to government to ensure that society and the
environment are not adversely affected by business activities.
Denmark has a law on CSR. On 16 December 2008, the Danish parliament adopted a bill making
it mandatory for the 1,100 largest Danish companies, investors and state-owned companies to
include information on corporate social responsibility (CSR) in their annual financial reports.
The reporting requirements became effective on 1 January 2009.[23]
The information shall include:
• information on the companies’ policies for CSR or socially responsible
investments (SRI)
• information on how such policies are implemented in practice and
• information on what results have been obtained so far and management's
expectations for the future with regard to CSR/SRI.
CSR/SRI is still voluntary in Denmark, but if a company has no policy on this it must state its
position on CSR in its annual financial report. More on the Danish law is available at CSRgov.dk.
[edit] Crises and their consequences
Often it takes a crisis to precipitate attention to CSR. One of the most notable responses in
environmental management is the CERES Principles, which resulted from the Exxon Valdez incident
in Alaska in 1989 (Grace and Cohen 2006). Other examples include the lead-contaminated paint
used by toy giant Mattel, which required a recall of millions of toys globally and caused the
company to initiate new risk management and quality control processes. In another example,
Magellan Metals in the West Australian town of Esperance was responsible for lead contamination
killing thousands of birds in the area. The company had to cease business immediately and work
with independent regulatory bodies to execute a cleanup. Odwalla also experienced a crisis, with
sales dropping 90 percent and the company's stock price dropping 34 percent, due to several
cases of E. coli spread through Odwalla apple juice. The company ordered a recall of all apple or
carrot juice products and introduced a new process called "flash pasteurization", as well as
keeping lines of communication constantly open with customers.
[edit] Stakeholder priorities
Increasingly, corporations are motivated to become more socially responsible because their most
important stakeholders expect them to understand and address the social and community issues
that are relevant to them. Understanding what causes are important to employees is usually the
first priority because of the many interrelated business benefits that can be derived from
increased employee engagement (i.e. more loyalty, improved recruitment, increased retention,
higher productivity, and so on). Key external stakeholders include customers, consumers,
investors (particularly institutional investors), regulators, academics, and the media.
Adjudication
Adjudication is the legal process by which an arbiter or judge reviews evidence and argumentation,
including legal reasoning set forth by opposing parties or litigants, to come to a decision which
determines rights and obligations between the parties involved. Three types of disputes are
resolved through adjudication:
1. Disputes between private parties, such as individuals or corporations.
2. Disputes between private parties and public officials.
3. Disputes between public officials or public bodies.
[edit] Other meanings
Adjudication can also be the process (at dance competitions, in television game shows and at other
competitive forums) by which competitors are evaluated and ranked and a winner is found.
[edit] In construction
Adjudication is a legal process provided for by statute for the resolution of disputes in the
construction industry. The process consists of the presentation of a case, supported by evidence,
together with counter-argument, to an adjudicator, who performs an inquisitorial role in reaching a
binding, enforceable decision on the parties to the dispute. The decision, if not complied with,
is enforceable by the winning party in the courts.
The relevant legislation in the UK is the Housing Grants, Construction and Regeneration Act
1996, (1996 Chapter 53).[1]
[edit] In healthcare
Claims adjudication in health insurance refers to the determination of the insurer's payment or
financial responsibility, after the member's insurance benefits are applied to a medical claim. The
process of claims adjudication, in this context, is also referred to as medical bill advocacy.
[edit] Pertaining to security clearances
Adjudication is the process directly following a background investigation where the investigation
results are reviewed to determine if a candidate should be awarded a security clearance.
From the United States Department of the Navy Central Adjudication Facility: "Adjudication is the
review and consideration of all available information to ensure an individual's loyalty, reliability,
and trustworthiness are such that entrusting an individual with national security information or
assigning an individual to sensitive duties is clearly in the best interest of national security."
[edit] Referring to a minor
Referring to a minor, the term adjudicated refers to children who are under a court's jurisdiction,
usually as a result of having engaged in delinquent behavior or of not having a legal guardian who
could be entrusted with responsibility for them.
Different states have different processes for declaring a child as adjudicated.
• The Arizona State Legislature has this definition:
"'Dually adjudicated child' means a child who is found to be dependent or
temporarily subject to court jurisdiction pending an adjudication of a
dependency petition and who is alleged or found to have committed a
delinquent or incorrigible act."
[2]
• The Illinois General Assembly has this definition:
"'Adjudicated' means that the Juvenile Court has entered an order declaring
that a child is neglected, abused, dependent, a minor requiring authoritative
intervention, a delinquent minor or an addicted minor. "[3]
[edit] In Australia
Robert Gaussen is said to have pioneered the introduction of the adjudication process in
Australia through his role in drafting adjudication legislation in most states and territories in
the country.
[edit] In Victoria
Adjudication[4] is a relatively new process introduced by the Government of Victoria[5] in
Australia, to allow for the rapid determination of progress claims under building contracts or
sub-contracts and contracts for the supply of goods or services in the building industry. This
process was designed to ensure cash flow to businesses in the building industry, without parties
getting tied up in lengthy and expensive litigation or arbitration. It is regulated by the Building and
Construction Industry Security of Payment Act 2002.[6]
Builders, sub-contractors and suppliers need to carefully choose a nominating authority to which
they make an adjudication application.[7]
[edit] In Queensland
The Building and Construction Industry Payments Act 2004 (BCIPA) came into effect in
Queensland in October 2004. Through a statutory process known as adjudication, a claimant
can seek to resolve payment-on-account disputes. The act covers construction contracts, and
contracts for the related supply of goods and services, whether written or verbal. BCIPA is
regulated by the Building and Construction Industry Payments Agency, a branch of the Queensland
Building Services Authority.[8]
Conciliation
Conciliation is an alternative dispute resolution (ADR) process whereby the parties to a dispute
(including future interest disputes) agree to utilize the services of a conciliator, who then meets
with the parties separately in an attempt to resolve their differences. The conciliator does this by lowering
tensions, improving communications, interpreting issues, providing technical assistance,
exploring potential solutions and bringing about a negotiated settlement.
Conciliation differs from arbitration in that the conciliation process, in and of itself, has no legal
standing, and the conciliator usually has no authority to seek evidence or call witnesses, usually
writes no decision, and makes no award.
Conciliation differs from mediation in that the main goal is to conciliate, most of the time by
seeking concessions. In mediation, the mediator tries to guide the discussion in a way that
optimizes the parties' needs, takes feelings into account and reframes representations.
In conciliation the parties seldom, if ever, actually face each other across the table in the
presence of the conciliator.
[edit] Effectiveness
Recent studies in the processes of negotiation have indicated the effectiveness of a technique that
deserves mention here. A conciliator assists each of the parties to independently develop a list of
all of their objectives (the outcomes which they desire to obtain from the conciliation). The
conciliator then has each of the parties separately prioritize their own list from most to least
important. He/She then goes back and forth between the parties and encourages them to "give"
on the objectives one at a time, starting with the least important and working toward the most
important for each party in turn. The parties rarely place the same priorities on all objectives, and
usually have some objectives that are not listed by the other party. Thus the conciliator can
quickly build a string of successes and help the parties create an atmosphere of trust which the
conciliator can continue to develop.
Most successful conciliators are highly skilled negotiators. Some conciliators operate under the
auspices of one of several non-governmental entities, or for governmental agencies such as
the Federal Mediation and Conciliation Service in the United States.
[edit] Conciliation in Japan
Japanese law makes extensive use of conciliation (調停 chōtei) in civil disputes. The most
common forms are civil conciliation and domestic conciliation, both of which are managed under
the auspices of the court system by one judge and two non-judge "conciliators."
Civil conciliation is a form of dispute resolution for small lawsuits, and provides a simpler and
cheaper alternative to litigation. Depending on the nature of the case, non-judge experts (doctors,
appraisers, actuaries, etc.) may be called by the court as conciliators to help decide the case.
Domestic conciliation is most commonly used to handle contentious divorces, but may apply to
other domestic disputes such as the annulment of a marriage or acknowledgment of paternity.
Parties in such cases are required to undergo conciliation proceedings and may only bring their
case to court once conciliation has failed.
Average cost
In economics, average cost is equal to total cost divided by the number of goods produced (the
output quantity, Q). It is also equal to the sum of average variable costs (total variable costs
divided by Q) plus average fixed costs (total fixed costs divided by Q). Average costs may be
dependent on the time period considered (increasing production may be expensive or impossible
in the short term, for example). Average costs affect the supply curve and are a fundamental
component of supply and demand.
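These identities can be checked numerically. The following Python sketch uses a purely hypothetical cost structure (fixed costs of 100 and a constant variable cost of 5 per unit); the function name and figures are illustrative assumptions, not drawn from the article:

```python
# Hypothetical cost structure (illustrative numbers, not from the article):
# total fixed costs of 100 and a constant variable cost of 5 per unit.
def average_cost(q, fixed=100.0, unit_variable=5.0):
    """Return (AC, AFC, AVC) for an output quantity q > 0."""
    afc = fixed / q                       # average fixed cost = TFC / Q
    avc = (unit_variable * q) / q         # average variable cost = TVC / Q
    ac = (fixed + unit_variable * q) / q  # average cost = TC / Q
    return ac, afc, avc

ac, afc, avc = average_cost(20)
assert abs(ac - (afc + avc)) < 1e-9  # AC = AFC + AVC
```

At Q = 20 this gives AC = 10, AFC = 5 and AVC = 5; as Q grows, AFC shrinks toward zero and AC approaches AVC.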
[edit] Overview
Average cost is distinct from the price, and depends on the interaction with demand through
elasticity of demand and elasticity of supply. In cases of perfect competition, price may be lower than
average cost due to marginal cost pricing.
Average cost will vary in relation to the quantity produced unless fixed costs are zero and
variable costs constant. A cost curve can be plotted, with cost on the y-axis and quantity on the x-
axis. Marginal costs are often shown on these graphs, with marginal cost representing the cost of
the last unit produced at each point; marginal costs are the first derivative of total or variable costs.
A typical average cost curve will have a U-shape, because fixed costs are all incurred before any
production takes place and marginal costs are typically increasing, because of diminishing marginal
productivity. In this "typical" case, for low levels of production there are economies of scale:
marginal costs are below average costs, so average costs are decreasing as quantity increases. An
increasing marginal cost curve will intersect a U-shaped average cost curve at its minimum, after
which point the average cost curve begins to slope upward. This is indicative of diseconomies of
scale. For further increases in production beyond this minimum, marginal cost is above average
costs, so average costs are increasing as quantity increases. An example of this typical case
would be a factory designed to produce a specific quantity of widgets per period: below a certain
production level, average cost is higher due to under-utilised equipment, while above that level,
production bottlenecks increase the average cost.
[edit] Relationship to marginal cost
When average cost is declining as output increases, marginal cost is less than average cost. When
average cost is rising, marginal cost is greater than average cost. When average cost is neither
rising nor falling (at a minimum or maximum), marginal cost equals average cost.
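A minimal numeric check of these relationships, using a hypothetical total cost function TC(Q) = 100 + Q^2 chosen for illustration (so MC(Q) = 2Q and AC(Q) = 100/Q + Q, with AC minimized at Q = 10):

```python
# Hypothetical total cost function (illustrative): TC(Q) = 100 + Q^2,
# so MC(Q) = 2Q and AC(Q) = 100/Q + Q, which is minimized at Q = 10.
def tc(q):
    return 100 + q ** 2  # fixed cost 100, increasing marginal cost

def mc(q):
    return 2 * q         # dTC/dQ

def ac(q):
    return tc(q) / q     # TC / Q

assert mc(5) < ac(5)     # AC declining: MC lies below AC
assert mc(15) > ac(15)   # AC rising: MC lies above AC
assert mc(10) == ac(10)  # at the minimum of AC, MC equals AC (both 20)
```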
Other special cases for average cost and marginal cost appear frequently:
• Constant marginal cost/high fixed costs: each additional unit of production is
produced at constant additional expense per unit. The average cost curve
slopes down continuously, approaching marginal cost. An example may be
hydroelectric generation, which has no fuel expense, limited maintenance
expenses and a high up-front fixed cost (ignoring irregular maintenance costs
or useful lifespan). Industries where fixed marginal costs obtain, such as
electrical transmission networks, may meet the conditions for a natural
monopoly, because once capacity is built, the marginal cost to the incumbent
of serving an additional customer is always lower than the average cost for a
potential competitor. The high fixed capital costs are a barrier to entry.
• Minimum efficient scale / maximum efficient scale: marginal or average costs may be
non-linear, or have discontinuities. Average cost curves may therefore only
be shown over a limited scale of production for a given technology. For
example, a nuclear plant would be extremely inefficient (very high average
cost) for production in small quantities; similarly, its maximum output for any
given time period may essentially be fixed, and production above that level
may be technically impossible, dangerous or extremely costly. The long run
elasticity of supply will be higher, as new plants could be built and brought
on-line.
• Low or zero fixed costs / constant marginal cost: since there are no
economies of scale, average cost will be close to or equal to marginal
cost. Examples may include the buying and selling of commodities (trading).
[edit] Relationship between AC, AFC, AVC and MC
1. The Average Fixed Cost curve starts from a height and goes on declining continuously as
production increases.
2. The Average Variable Cost curve, Average Cost curve and the Marginal Cost curve start from
a height, reach the minimum points, then rise sharply and continuously.
3. Marginal Cost curve is the determining curve, while the rest are determined curves.
4. The movement in the Marginal Cost curve determines the movement and direction of the other
curves.
5. The Average Fixed Cost curve nears the Average Cost curve initially and then moves away
from it. The Average Variable Cost curve never intersects, and is never parallel to, the Average
Cost curve, due to the existence of the Average Fixed Cost at all levels of production.
6. The Marginal Cost curve always passes through the minimum points of the Average Variable
Cost and Average Cost curves, though the Average Variable Cost curve attains the minimum
point prior to that of the Average Cost curve.
Marginal cost
In economics and finance, marginal cost is the change in total cost that arises when the quantity
produced changes by one unit. That is, it is the cost of producing one more unit of a good.[1]
Mathematically, the marginal cost (MC) function is expressed as the first derivative of the total
cost(TC) function with respect to quantity (Q). Note that the marginal cost may change with
volume, and so at each level of production, the marginal cost is the cost of the next unit
produced.
A typical Marginal Cost Curve (figure)
In general terms, marginal cost at each level of production includes any additional costs required
to produce the next unit. If producing additional vehicles requires, for example, building a new
factory, the marginal cost of those extra vehicles includes the cost of the new factory. In practice,
the analysis is segregated into short and long-run cases, and over the longest run, all costs are
marginal. At each level of production and time period being considered, marginal costs include
all costs which vary with the level of production, and other costs are considered fixed costs.
A number of other factors can affect marginal cost and its applicability to real world problems.
Some of these may be considered market failures. These may include information asymmetries, the
presence of negative or positive externalities, transaction costs, price discrimination and others.
[edit] Cost functions and relationship to average cost
In the simplest case, the total cost function and its derivative are expressed as follows, where Q
represents the production quantity, VC represents variable costs, FC represents fixed costs and
TC represents total costs:

TC(Q) = FC + VC(Q)
MC(Q) = dTC(Q)/dQ = dVC(Q)/dQ

Since (by definition) fixed costs do not vary with production quantity, the fixed-cost term drops
out of the equation when it is differentiated. The important conclusion is that marginal cost is
not related to fixed costs. This can be compared with average total cost (ATC), which is the
total cost divided by the number of units produced and does include fixed costs.
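This conclusion can be illustrated with a small sketch: two hypothetical cost functions that differ only in their fixed costs yield the same finite-difference marginal cost (the coefficients below are arbitrary assumptions for illustration):

```python
# Two hypothetical cost functions that differ only in fixed costs
# (coefficients are arbitrary): VC(q) = 3q + 0.5q^2 in both cases.
def make_tc(fixed):
    return lambda q: fixed + 3 * q + 0.5 * q ** 2

tc_low, tc_high = make_tc(50), make_tc(500)

# Discrete marginal cost: the change in total cost from one more unit.
mc_low = [tc_low(q + 1) - tc_low(q) for q in range(5)]
mc_high = [tc_high(q + 1) - tc_high(q) for q in range(5)]
assert mc_low == mc_high  # the fixed-cost term cancels in the difference
```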
For discrete calculation without calculus, marginal cost equals the change in total (or variable)
cost that comes with each additional unit produced. For instance, suppose the total cost of
making 1 shoe is $30 and the total cost of making 2 shoes is $40. The marginal cost of producing
the second shoe is $40 - $30 = $10.
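The shoe example above, expressed as a discrete difference:

```python
# Figures taken from the text: total cost $30 for one shoe, $40 for two.
total_cost = {1: 30, 2: 40}
marginal_cost_second_shoe = total_cost[2] - total_cost[1]
assert marginal_cost_second_shoe == 10  # $40 - $30 = $10
```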
[edit] Economies of scale
Production may be subject to economies of scale (or diseconomies of scale). Increasing returns to
scale are said to exist if additional units can be produced for less than the previous unit, that is,
average cost is falling. This can only occur if average cost at any given level of production is
higher than the marginal cost. Conversely, there may be levels of production where marginal cost
is higher than average cost, and average cost will rise for each unit of production after that point.
This type of production function is generally known as diminishing marginal productivity: at low
levels of production, productivity gains are easy and marginal costs are falling, but productivity
gains become smaller as production increases; eventually, marginal costs rise because increasing
output (with existing capital, labor or organization) becomes more expensive. For this generic
case, minimum average cost occurs at the point where average cost and marginal cost are equal
(when plotted, the two curves intersect); this point will not be at the minimum for marginal cost
if fixed costs are greater than zero.
[edit] Short and long run costs and economies of scale
A textbook distinction is made between short-run and long-run marginal cost. The former takes
fixed costs, for example the capital equipment and overhead of the producer, as unchanged: any
change in production involves only changes in the inputs of labour, materials and energy. The
latter allows all inputs, including capital items (plant, equipment, buildings), to vary.
A long-run cost function describes the cost of production as a function of output assuming that
all inputs are obtained at current prices, that current technology is employed, and everything is
being built new from scratch. In view of the durability of many capital items this textbook
concept is less useful than one which allows for some scrapping of existing capital items or the
acquisition of new capital items to be used with the existing stock of capital items acquired in the
past. Long-run marginal cost then means the additional cost or the cost saving per unit of
additional or reduced production, including the expenditure on additional capital goods or any
saving from disposing of existing capital goods. Note that marginal cost upwards and marginal
cost downwards may differ, in contrast with marginal cost according to the less useful textbook
concept.
Economies of scale are said to exist when marginal cost according to the textbook concept falls
as a function of output and is less than the average cost per unit. This means that the average cost
of production from a larger new built-from-scratch installation falls below that from a smaller
new built-from-scratch installation. Under the more useful concept, with an existing capital
stock, it is necessary to distinguish those costs which vary with output from accounting costs,
which will also include the interest and depreciation on that existing capital stock; having been
acquired in past years at past prices, that stock may be of a different type from what can
currently be acquired. The concept of economies of scale then does not apply.
[edit] Externalities
Externalities are costs (or benefits) that are not borne by the parties to the economic transaction.
A producer may, for example, pollute the environment, and others may bear those costs. A
consumer may consume a good which produces benefits for society, such as education; because
the individual does not receive all of the benefits, he may consume less than efficiency would
suggest. Alternatively, an individual may be a smoker or alcoholic and impose costs on others. In
these cases, production or consumption of the good in question may differ from the optimum
level.
[edit] Negative externalities of production
Much of the time, private and social costs do not diverge from one another, but at times social
costs may be either greater or less than private costs. When the marginal social cost of production
is greater than that of the private cost function, we see the occurrence of a negative externality of
production. Productive processes that result in pollution are a textbook example of production that
creates negative externalities.
Such externalities are a result of firms externalising their costs onto a third party in order to
reduce their own total cost. As a result of externalising such costs, members of society are
negatively affected by the firm's behavior. In this case, the increased cost of production borne
by society creates a social cost curve that depicts a greater cost than the private cost curve.
In an equilibrium state we see that markets creating negative externalities of production will
overproduce that good. As a result, the socially optimal production level would be lower than
that observed.
[edit] Positive externalities of production

When the marginal social cost of production is less than the marginal private cost, a positive
externality of production occurs. The production of public goods is a textbook example. One such
good, which creates a divergence between social and private costs, is education: it benefits
society as a whole in addition to those directly involved in the market.
Such production creates a social cost curve that lies below the private cost curve. In equilibrium,
markets creating positive externalities of production will underproduce that good. As a result, the
socially optimal production level is greater than that observed.
[edit] Social costs
Main article: Social cost

Of great importance in the theory of marginal cost is the distinction between marginal private and
social costs. The marginal private cost shows the cost borne by the firm in question. It is the
marginal private cost that is used by business decision makers in their profit-maximization goals,
and by individuals in their purchasing and consumption choices. Marginal social cost is similar to
private cost in that it includes the cost functions of private enterprise, but it also includes the
costs borne by society as a whole, including parties that have no direct association with the
private costs of production. It incorporates all negative and positive externalities, of both
production and consumption.
Hence, when private and social marginal cost coincide, buyers deciding whether or how much to
buy take account of the cost to society of their actions. The equality of price with social marginal
cost, by aligning the interest of the buyer with the interest of the community as a whole, is a
necessary condition for economically efficient resource allocation.
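The divergence between private and social marginal cost can be made concrete with a small numerical sketch. The demand curve, cost curve, and per-unit external cost below are illustrative assumptions of mine, not figures from the text:

```python
# Hypothetical linear example: demand P = 50 - Q, marginal private cost
# MPC = 10 + Q, and a constant external cost of 4 per unit borne by
# third parties, so MSC = MPC + 4. All numbers are illustrative.

def demand_price(q):
    return 50 - q              # buyers' willingness to pay for unit q

def marginal_private_cost(q):
    return 10 + q              # cost the firm itself faces

EXTERNAL_COST = 4              # assumed per-unit cost imposed on society

def marginal_social_cost(q):
    return marginal_private_cost(q) + EXTERNAL_COST

def equilibrium(mc):
    """Largest integer output at which willingness to pay still covers mc."""
    q = 0
    while demand_price(q + 1) >= mc(q + 1):
        q += 1
    return q

private_q = equilibrium(marginal_private_cost)   # firms ignore the externality
social_q = equilibrium(marginal_social_cost)     # society's optimal output
```

With these numbers the private equilibrium is 20 units while the social optimum is 18, reproducing the overproduction result for negative externalities of production.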
[edit] Other cost definitions
• Fixed costs are costs which do not vary with output, for example, rent. In the
long run all costs can be considered variable.
• Variable costs, also known as operating costs, prime costs, on costs and direct
costs, are costs which vary directly with the level of output, for example,
labor, fuel, power and the cost of raw material.
• Social costs of production are costs incurred by society as a whole, resulting
from private production.
• Average total cost is the total cost divided by the quantity of output.
• Average fixed cost is the fixed cost divided by the quantity of output.
• Average variable cost is the variable cost divided by the quantity of output.

[edit] Cost Functions

Total Cost (TC) = Fixed Costs (FC) + Variable Costs (VC)

FC = 420
VC = 60Q + Q^2
TC = 420 + 60Q + Q^2

Marginal Cost (MC) = dTC/dQ
MC = 60 + 2Q

Average Total Cost (ATC) = TC/Q
ATC = (420 + 60Q + Q^2)/Q = 420/Q + 60 + Q

Average Fixed Cost (AFC) = FC/Q
AFC = 420/Q

Average Variable Cost (AVC) = VC/Q
AVC = (60Q + Q^2)/Q = 60 + Q
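These formulas can be checked numerically. The sketch below simply transcribes the worked example above (FC = 420, VC = 60Q + Q^2) into Python; the function names are labels of my own, not anything from the text:

```python
FC = 420  # fixed costs from the example

def variable_cost(q):
    """VC = 60Q + Q^2."""
    return 60 * q + q ** 2

def total_cost(q):
    """TC = FC + VC = 420 + 60Q + Q^2."""
    return FC + variable_cost(q)

def marginal_cost(q):
    """MC = dTC/dQ = 60 + 2Q."""
    return 60 + 2 * q

def average_total_cost(q):
    """ATC = TC/Q = 420/Q + 60 + Q."""
    return total_cost(q) / q

def average_fixed_cost(q):
    """AFC = FC/Q = 420/Q."""
    return FC / q

def average_variable_cost(q):
    """AVC = VC/Q = 60 + Q."""
    return variable_cost(q) / q

# At Q = 10: TC = 420 + 600 + 100 = 1120, MC = 80, ATC = 112, AFC = 42, AVC = 70.
```

Note that ATC = AFC + AVC at every output level (112 = 42 + 70 at Q = 10).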

Living wage
From Wikipedia, the free encyclopedia
Jump to:navigation, search

Living wage is a term used to describe the minimum hourly wage necessary for a person to afford
shelter (housing and incidentals such as clothing and other basic needs) and nutrition over an
extended period of time. In developed countries such as the United Kingdom or Switzerland, this
standard generally means that a person working forty hours a week, with no additional income,
should be able to afford a specified quality or quantity of housing, food, utilities, transport, health
care, and recreation.
This concept differs from the minimum wage in that the latter is set by law and may fail to meet
the requirements of a living wage. It differs somewhat from basic needs in that the basic needs
model usually measures a minimum level of consumption, without regard for the source of the
income. A related concept is that of a family wage, one sufficient not only to support oneself but
also to raise a family, though these notions may be conflated.

[edit] Catholic social teaching


The living wage is a concept central to the Catholic social teaching tradition beginning with the
foundational document, Rerum Novarum, a papal encyclical by Pope Leo XIII, issued in 1891 to
combat the excesses of both laissez-faire capitalism on the one hand and communism on the other.
In this letter, Pope Leo affirms the right to private property while insisting on the role of the state
to require a living wage. The means of production were considered by the pope to be both private
property requiring state protection and a dimension of the common good requiring state
regulation.
Pope Leo first described a living wage in such terms as could be generalized for application in
nations throughout the world. Rerum Novarum touched off legislative reform movements
throughout the world eliminating child labor, reducing the work week, and establishing
minimum wages.
• "If a worker receives a wage sufficiently large to enable him to provide
comfortably for himself, his wife and his children, he will, if prudent,
gladly strive to practice thrift; and the result will be, as nature itself
seems to counsel, that after expenditures are deducted there will
remain something over and above through which he can come into the
possession of a little wealth. We have seen, in fact, that the whole
question under consideration cannot be settled effectually unless it is
assumed and established as a principle, that the right of private
property must be regarded as sacred. Wherefore, the law ought to
favor this right and, so far as it can, see that the largest possible
number among the masses of the population prefer to own property."
(#65)
• "Wealthy owners of the means of production and employers must
never forget that both divine and human law forbid them to squeeze
the poor and wretched for the sake of gain or to profit from the
helplessness of others." (#17)
• "As regards protection of this world’s good, the first task is to save the
wretched workers from the brutality of those who make use of human
beings as mere instruments for the unrestrained acquisition of wealth."
(#43)
• "Care must be taken, therefore, not to lengthen the working day
beyond a man’s capacity. How much time there must be for rest
depends upon the type of work, the circumstances of time and place
and, particularly, the health of the workers." (#43)
[1]
Rerum Novarum, Pope Leo XIII, 1891
In Quadragesimo Anno, Pope Pius XI clarifies Rerum Novarum by warning that, in seeking to
protect the worker from exploitation, society must not exploit the employer.
• "...(T)he wealthy class violates (the common good) no less, when, as if free
from care on account of its wealth, it thinks it the right order of things for it to
get everything and the worker nothing, than does the...working class when,
angered deeply at outraged justice and too ready to assert wrongly the one
right it is conscious of, it demands for itself everything as if produced by its
own hands, and attacks and seeks to abolish, therefore, all property and
returns or incomes, of whatever kind they are or whatever the function they
perform in human society, that have not been obtained by labor, and for no
other reason save that they are of such a nature." (#57)
[2]
Quadragesimo Anno, Pope Pius XI, 1931

[edit] Implementations
The national and international living wage movements are supported by many labor unions and
community action groups such as ACORN.
[edit] Australia
In Australia, the 1907 Harvester Judgment ruled that an employer was obliged to pay his
employees a wage that guaranteed them a standard of living which was reasonable for "a human
being in a civilised community," regardless of his capacity to pay. Justice Higgins established a
wage of 7/- (7 shillings) per day or 42/- per week as a 'fair and reasonable' minimum wage for
unskilled workers. The judgment was later overturned but remains influential. In 1913, to
compensate for the rising cost of living, the basic wage was increased to 8/- per day, the first
increase since the minimum was set. The first Retail Price Index in Australia was published late in
1912. The basic wage system remained in place in Australia until 1967. It was also adopted by
some state tribunals and was in use in some states in the 1980s.
[edit] United States
In the United States, the state of Maryland and several municipalities and local governments have
enacted ordinances which set a minimum wage higher than the federal minimum for the purpose
of requiring all jobs to meet the living wage for that region. Notably, San Francisco, California
and Santa Fe, New Mexico have passed very wide-reaching living wage ordinances.[citation
needed]
U.S. cities with living wage laws include Santa Fe and Albuquerque in New Mexico; San
Francisco, California; and Washington D.C.[3] (The city of Chicago, Illinois also passed a living wage
ordinance in 2006, but it was vetoed by the mayor.) Living wage laws typically cover only
businesses that receive state assistance or have contracts with the government.[4]
This effort began in 1994 when an alliance between a labor union and religious leaders in
Baltimore launched a successful campaign requiring city service contractors to pay a living
wage[5]. Subsequent to this effort, community advocates have won similar ordinances in cities
such as Boston, Los Angeles, San Francisco, and St. Louis. In 2007, there were at least 140
living wage ordinances in cities throughout the United States and more than 100 living wage
campaigns underway in cities, counties, states, and college campuses[6].
[edit] United Kingdom
In the United Kingdom, many campaigning organisations have responded to the low level of the
National Minimum Wage by asserting the need for it to be increased to a level more comparable
to a living wage. For instance, the Mayor of London's office hosts a Living Wage Unit which
monitors the level needed for a living wage in London (which has considerably higher living
costs than the rest of the UK). Other organisations with an interest in living wage issues include
the Living Wage Campaign,[7] Church Action on Poverty,[8] and the Scottish Low Pay Unit.
The Guardian newspaper columnist Polly Toynbee is also a major supporter of the campaign for a
living wage. The charity London Citizens is campaigning for a living wage to be implemented
across London.
[edit] Alternative policies
Some critics[who?] argue that there are alternative ways to deliver income to the poor, such as the
US Earned Income Tax Credit, the UK Working Tax Credit or a negative income tax, that don't have
the unemployment and deadweight loss effects that critics claim are the result of living wage law.
A further alternative is a job guarantee, where jobs are provided to all comers at a living wage,
setting a de facto (but not de jure) living wage.

Minimum wage
From Wikipedia, the free encyclopedia
Jump to:navigation, search

A minimum wage is the lowest hourly, daily or monthly wage that employers may legally pay to
employees or workers. Equivalently, it is the lowest wage at which workers may sell their labor.
Although minimum wage laws are in effect in a great many jurisdictions, there are differences of
opinion about the benefits and drawbacks of a minimum wage. Supporters of the minimum wage
say that it increases the standard of living of workers and reduces poverty.[1] Opponents say that
if it is high enough to be effective, it increases unemployment, particularly among workers with
very low productivity due to inexperience or handicap, thereby harming lesser skilled workers to
the benefit of better skilled workers.[2]

[edit] Background
A sweatshop in Chicago, Illinois in 1903

Minimum wages were first proposed as a way to control the proliferation of sweatshops in
manufacturing industries. The sweatshops employed large numbers of women and young
workers, paying them what were considered to be substandard wages. The sweatshop owners
were thought to have unfair bargaining power over their workers, and a minimum wage was
proposed as a means to make them pay "fairly." Over time, the focus changed to helping people,
especially families, become more self sufficient. Today, minimum wage laws cover workers in
most low-paid fields of employment.[3]
The minimum wage has a strong social appeal, rooted in concern about the ability of markets to
provide income equity for the least able members of the work force. An obvious solution to this
concern is to redefine the wage structure politically to achieve a socially preferable distribution
of income. Thus, minimum wage laws have usually been judged against the criterion of reducing
poverty.[4]
Although the goals of the minimum wage are widely accepted as proper, there is great
disagreement as to whether the minimum wage is effective in attaining its goals. From the time
of their introduction, minimum wage laws have been highly controversial politically, and have
received much less support from economists than from the general public. Despite decades of
experience and economic research, debates about the costs and benefits of minimum wages
continue today.[3]
The classic exposition of the minimum wage's shortcomings in reducing poverty was provided
by George Stigler in 1946:
• Employment may fall more than in proportion to the wage increase, thereby
reducing overall earnings;
• As uncovered sectors of the economy absorb workers released from the
covered sectors, the decrease in wages in the uncovered sectors may exceed
the increase in wages in the covered ones;
• The impact of the minimum wage on family income distribution may be
negative unless the fewer but better jobs are allocated to members of needy
families rather than to, for example, teenagers from families not in poverty;
• The legal restriction that employers cannot pay less than a legislated wage is
equivalent to the legal restriction that workers cannot work at all in the
protected sector unless they can find employers willing to hire them at that
wage.[4]
Direct empirical studies indicate that anti-poverty effects in the U.S. would be quite modest,
even if there were no unemployment effects. Very few low-wage workers come from families in
poverty. Those primarily affected by minimum wage laws are teenagers and low-skilled adult
females who work part time, and any wage rate effects on their income are strictly proportional to
the hours of work they are offered. So, if market outcomes for low-skilled families are to be
supplemented in a socially satisfactory way, factors other than wage rates must also be
considered. Employment opportunities and the factors that limit labor market participation must
be considered as well.[4] Economist Thomas Sowell has also argued that regardless of custom or
law, the real minimum wage is always zero, and zero is what some people would receive if they
fail to find jobs when they try to enter the workforce, or they lose the jobs they already have.[5]
[edit] Minimum wage law
Main article: Minimum wage law

Minimum wage law was first enacted in New Zealand in 1894.[6][7] There is now legislation or
binding collective bargaining regarding minimum wage in more than 90% of all countries.[8]
Minimum wage rates vary greatly across jurisdictions, not only in the particular amount of
money set (e.g. US$7.25 per hour under U.S. Federal law, $8.55 in the U.S. state of
Washington,[9] and £5.80 (for those aged 22+) in the United Kingdom[10]), but also in the pay
period used (e.g. Russia and China set monthly minimums) and the scope of coverage. Some
jurisdictions allow employers to count tips given to their workers as credit towards the minimum
wage level. (See also: List of minimum wages by country)
[edit] Informal minimum wages
Sometimes a minimum wage exists without a law. Custom and extra-legal pressures from
governments or labor unions can produce a de facto minimum wage. So can international public
opinion, by pressuring multinational companies to pay Third World workers wages usually
found in more industrialized countries. The latter situation in Southeast Asia and Latin America
has been publicized in recent years, but it existed with companies in West Africa in the middle of
the twentieth century.[5]
[edit] Economics of the minimum wage
[edit] Simple supply and demand
Main article: Supply and demand

An analysis of supply and demand of the type shown in introductory mainstream economics
textbooks implies that by mandating a price floor above the equilibrium wage, minimum wage
laws should cause unemployment.[11][12] This is because a greater number of workers are willing
to work at the higher wage while a smaller number of jobs will be available at that wage.
Companies can be more selective in those whom they employ; thus the least skilled and least
experienced will typically be excluded.
According to the model shown in nearly all introductory textbooks on economics, increasing the
minimum wage decreases the employment of minimum-wage workers.[13] One such textbook
says:
"If a higher minimum wage increases the wage rates of unskilled workers above the level that
would be established by market forces, the quantity of unskilled workers employed will fall. The
minimum wage will price the services of the least productive (and therefore lowest-wage)
workers out of the market. ... The direct results of minimum wage legislation are clearly mixed.
Some workers, most likely those whose previous wages were closest to the minimum, will enjoy
higher wages. Others, particularly those with the lowest prelegislation wage rates, will be unable
to find work. They will be pushed into the ranks of the unemployed or out of the labor force."[14]
It illustrates the point with a standard supply and demand diagram.

It is assumed that workers are willing to labor for more hours if paid a higher wage. Economists
graph this relationship with the wage on the vertical axis and the quantity (hours) of labor
supplied on the horizontal axis. Since higher wages increase the quantity supplied, the supply of
labor curve is upward sloping, and is shown as a line moving up and to the right.[15]
A firm's cost is a function of the wage rate. It is assumed that the higher the wage, the fewer
hours an employer will demand of an employee. This is because, as the wage rate rises, it
becomes more expensive for firms to hire workers and so firms hire fewer workers (or hire them
for fewer hours). The demand of labor curve is therefore shown as a line moving down and to the
right.[15]
Combining the demand and supply curves for labor allows us to examine the effect of the
minimum wage. We will start by assuming that the supply and demand curves for labor will not
change as a result of raising the minimum wage. This assumption has been questioned. If no
minimum wage is in place, workers and employers will continue to adjust the quantity of labor
supplied according to price until the quantity of labor demanded is equal to the quantity of labor
supplied, reaching equilibrium price, where the supply and demand curves intersect. Minimum
wage behaves as a classical price floor on labor. Standard theory says that, if set above the
equilibrium price, more labor will be willing to be provided by workers than will be demanded
by employers, creating a surplus of labor i.e. unemployment.[15]
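The price-floor argument above can be sketched numerically. The linear supply and demand curves below are illustrative assumptions of mine, not data from the text:

```python
# Hypothetical linear labor market: supply S(w) = 10w, demand
# D(w) = 120 - 10w, so the equilibrium wage solves 10w = 120 - 10w.
# All numbers are illustrative.

def labor_supplied(w):
    return 10 * w            # upward sloping: higher wages draw more workers

def labor_demanded(w):
    return 120 - 10 * w      # downward sloping: higher wages reduce hiring

def market_outcome(min_wage=None):
    """Return (wage, employment, surplus) with an optional wage floor."""
    equilibrium_wage = 6.0   # solves 10w = 120 - 10w for these curves
    wage = equilibrium_wage if min_wage is None else max(min_wage, equilibrium_wage)
    # Trades only occur when both sides agree, so employment is the
    # smaller of quantity supplied and quantity demanded at this wage.
    employment = min(labor_supplied(wage), labor_demanded(wage))
    surplus = max(0, labor_supplied(wage) - labor_demanded(wage))
    return wage, employment, surplus
```

Without a floor the model clears at a wage of 6 with 60 units of labor and no surplus; a floor of 8 cuts employment to 40 and leaves a surplus (unemployment) of 40, which is the standard-theory prediction described above.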
In other words, the simplest and most basic economics says this about commodities like labor
(and wheat, for example): Artificially raising the price of the commodity tends to cause the
supply of it to increase and the demand for it to lessen. The result is a surplus of the commodity.
When there is a wheat surplus, the government buys it. Since the government doesn't hire surplus
labor, the labor surplus takes the form of unemployment, which tends to be higher with
minimum wage laws than without them.[5]
So the basic theory says that raising the minimum wage helps workers whose wages are raised,
and hurts people who are not hired (or lose their jobs) because companies cut back on
employment. But proponents of the minimum wage hold that the situation is much more
complicated than the basic theory can account for.
One complicating factor is possible monopsony in the labor market, whereby the individual
employer has some market power in determining wages paid. Thus it is at least theoretically
possible that the minimum wage may boost employment. Though single employer market power
is unlikely to exist in most labor markets in the sense of the traditional 'company town,'
asymmetric information, imperfect mobility, and the 'personal' element of the labor transaction
give some degree of wage-setting power to most firms.[16]
[edit] Criticism of the "textbook model"
The argument that minimum wages decrease employment is based on a simple supply and
demand model of the labor market. A number of economists (for example Pierangelo
Garegnani[17], Robert L. Vienneau[18], and Arrigo Opocher & Ian Steedman[19]), building on the
work of Piero Sraffa, argue that that model, even given all its assumptions, is logically incoherent.
Michael Anyadike-Danes and Wynne Godley[20] argue, based on simulation results, that little of
the empirical work done with the textbook model constitutes a potentially falsifying test and,
consequently, that empirical evidence for that model hardly exists. Graham White[21] argues, partially
on the basis of Sraffianism, that the policy of increased labor market flexibility, including the
reduction of minimum wages, does not have an "intellectually coherent" argument in economic
theory.
Gary Fields, Professor of Labor Economics and Economics at Cornell University, argues that the
standard "textbook model" for the minimum wage is "ambiguous", and that the standard
theoretical arguments incorrectly measure only a one-sector market. Fields says a two-sector
market, where "the self-employed, service workers, and farm workers are typically excluded
from minimum-wage coverage… [and with] one sector with minimum-wage coverage and the
other without it [and possible mobility between the two]," is the basis for better analysis.
Through this model, Fields shows the typical theoretical argument to be ambiguous and says "the
predictions derived from the textbook model definitely do not carry over to the two-sector case.
Therefore, since a non-covered sector exists nearly everywhere, the predictions of the textbook
model simply cannot be relied on."[22]
An alternate view of the labor market has low-wage labor markets characterized as monopsonistic
competition wherein buyers (employers) have significantly more market power than do sellers
(workers). This monopsony could be a result of intentional collusion between employers, or
naturalistic factors such as segmented markets, information costs, imperfect mobility and the
'personal' element of labor markets. In such a case the diagram above would not yield the
quantity of labor clearing and the wage rate. This is because while the upward sloping aggregate
labor supply would remain unchanged, instead of using the downward labor demand curve
shown in the diagram above, monopsonistic employers would use a steeper downward sloping
curve corresponding to marginal expenditures to yield the intersection with the supply curve
resulting in a wage rate lower than would be the case under competition. The amount of labor
sold would also be lower than the competitive optimal allocation.
Such a case is a type of market failure and results in workers being paid less than their marginal
value. Under the monopsonistic assumption, an appropriately set minimum wage could increase
both wages and employment, with the optimal level being equal to the marginal productivity of
labor.[23] This view emphasizes the role of minimum wages as a market regulation policy akin to
antitrust policies, as opposed to an illusory "free lunch" for low-wage workers.
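The monopsony result can also be sketched numerically. The linear supply curve and marginal revenue product below are hypothetical numbers of mine, and the wage floor is assumed to lie between the unregulated monopsony wage and the wage at which hiring would stop entirely:

```python
# Hypothetical linear monopsony: inverse labor supply w(L) = L, so total
# labor cost is L^2 and marginal expenditure is 2L; the marginal revenue
# product of labor is MRP(L) = 30 - L. All numbers are illustrative.

def inverse_supply(labor):
    return labor                      # wage needed to attract this much labor

def monopsony_outcome(min_wage=None):
    """Return (employment, wage) chosen by the monopsonist.

    Assumes any wage floor is set between the unregulated monopsony
    wage (10 here) and the choke point (30 here).
    """
    if min_wage is None:
        # Unregulated: hire until MRP = marginal expenditure, 30 - L = 2L.
        labor = 10
        wage = inverse_supply(labor)  # pay only the supply price: 10
    else:
        # A binding floor flattens marginal expenditure at the floor up to
        # the supply curve, so hire until MRP = floor, capped by supply.
        labor = min(30 - min_wage, min_wage)
        wage = min_wage
    return labor, wage
```

With these numbers the unregulated outcome is (L, w) = (10, 10), while a floor set at the competitive wage of 15 yields (15, 15): both employment and the wage rise, which is the sense in which an appropriately set minimum wage can increase employment under monopsony.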
Another reason the minimum wage may not affect employment in certain industries is that the
demand for the product the employees produce is highly inelastic.[24] For example, if
management is forced to increase wages, it can pass the increase on to consumers in the form of
higher prices. Since demand for the product is highly inelastic, consumers continue to buy the
product at the higher price, so the manager is not forced to lay off workers.
Three other possible reasons minimum wages do not affect employment were suggested by Alan
Blinder: higher wages may reduce turnover, and hence training costs; raising the minimum wage
may "render moot" the potential problem of recruiting workers at a higher wage than current
workers; and minimum wage workers might represent such a small proportion of a business's
cost that the increase is too small to matter. He admits that he does not know if these are correct,
but argues that "the list demonstrates that one can accept the new empirical findings and still be a
card-carrying economist."[25]
[edit] Debate over consequences
Various groups have great ideological, political, financial, and emotional investments in issues
surrounding minimum wage laws. For example, agencies that administer the laws have a vested
interest in showing that "their" laws do not create unemployment, as do labor unions, whose
members' jobs are protected by minimum wage laws. On the other side of the issue, low-wage
employers such as restaurants finance the Employment Policies Institute, which has released
numerous studies opposing the minimum wage.[26] The presence of these powerful groups and
factors means that the debate on the issue is not always based on dispassionate analysis.
Additionally, it is extraordinarily difficult to separate the effects of minimum wage from all the
other variables that affect employment.[5]
The following summarizes the arguments made by those for and against minimum wage laws:
Arguments in favor of minimum wage laws. Supporters of the minimum wage claim it has these
effects:
• Increases the standard of living for the poorest and most vulnerable class in
society and raises the average.[1]
• Motivates and encourages employees to work harder (unlike welfare
programs and other transfer payments).[27]
• Stimulates consumption, by putting more money in the hands of low-income
people who spend their entire paychecks.[1]
• Increases the work ethic of those who earn very little, as employers demand
more return from the higher cost of hiring these employees.[1]
• Decreases the cost of government social welfare programs by increasing
incomes for the lowest-paid.[1]
Arguments against minimum wage laws. Opponents of the minimum wage claim it has these
effects:
• As a labor market analogue of political-economic protectionism, it excludes
low-cost competitors from labor markets, hampers firms in reducing wage
costs during trade downturns, generates various industrial-economic
inefficiencies as well as unemployment, poverty, and price rises, and
generally dysfunctions.[28]
• Hurts small business more than large business.[29]
• Reduces the quantity demanded of workers, either through a reduction in the
number of hours worked by individuals, or through a reduction in the number
of jobs.[30][31]
• May cause inflation as businesses try to compensate by raising the prices of
the goods being sold.[32][33]
• Benefits some workers at the expense of the poorest and least productive.[34]
• Can result in the exclusion of certain groups from the labour force.[35]
• Businesses may spend less on training their employees.[36]
• Is less effective than other methods (e.g. the Earned Income Tax Credit) at
reducing poverty, and is more damaging to businesses than those other
methods.[36]
• Discourages further education among the poor by enticing people to enter the
job market.[36]

In 2006, the International Labour Organization (ILO)[8] argued that the minimum wage could not be
directly linked to unemployment in countries that have suffered job losses. In April 2010, the
Organisation for Economic Co-operation and Development (OECD)[37] released a report arguing that
countries could alleviate teen unemployment by “lowering the cost of employing low-skilled
youth” through a sub-minimum training wage. A study of U.S. states showed that businesses'
annual and average payrolls grew faster and employment grew at a faster rate in states with a
minimum wage.[38] The study showed a correlation, but did not claim to prove causation.
Although strongly opposed by both the business community and the Conservative Party when
introduced in 1999, the minimum wage introduced in the UK is no longer controversial and the
Conservatives reversed their opposition in 2000.[39] A review of its effects found no discernible
impact on employment levels.[40] However, prices in the minimum wage sector were found to
have risen significantly faster than prices in non-minimum wage sectors, most notably in the four
years following the implementation of the minimum wage.[41]
Since the introduction of a national minimum wage in the UK in 1999, its effects on employment
were subject to extensive research and observation by the Low Pay Commission. The Low Pay
Commission found that, rather than make employees redundant, employers have reduced their
rate of hiring, reduced staff hours, increased prices, and have found ways to cause current
workers to be more productive (especially service companies).[42] Neither trade unions nor
employer organizations now contest the minimum wage, although the latter in particular had
contested it heavily until 1999.
[edit] Empirical studies
Economists disagree as to the measurable impact of minimum wages in the 'real world'. This
disagreement usually takes the form of competing empirical tests of the elasticities of demand and
supply in labor markets and the degree to which markets differ from the efficiency that models of
perfect competition predict.
Economists have done empirical studies on numerous aspects of the minimum wage,
prominently including:[3]
• Employment effects, the most frequently studied aspect
• Effects on the distribution of wages and earnings among low-paid and higher-
paid workers
• Effects on the distribution of incomes among low-income and higher-income
families
• Effects on the skills of workers through job training and the deferring of work
to acquire education
• Effects on prices and profits
Until the mid-1990s, a strong consensus existed among economists, both conservative and
liberal, that the minimum wage reduced employment, especially among younger and low-skill
workers.[13] In addition to the basic supply-demand intuition, there were a number of empirical
studies that supported this view. For example, Gramlich (1976) found that many of the benefits
went to higher income families, and in particular that teenagers were made worse off by the
unemployment associated with the minimum wage.[43]
Brown et al. (1983) note that time series studies to that point had found that for a 10 percent
increase in the minimum wage, there was a decrease in teenage employment of 1-3 percent.
However, for the effect on the teenage unemployment rate, the studies exhibited wider variation
in their estimates, from zero to over 3 percent. In contrast to the simple supply/demand figure
above, it was commonly found that teenagers withdrew from the labor force in response to the
minimum wage, which produced the possibility of equal reductions in the supply as well as the
demand for labor at a higher minimum wage and hence no impact on the unemployment rate.
Using a variety of specifications of the employment and unemployment equations (using ordinary
least squares vs. generalized least squares regression procedures, and linear vs. logarithmic
specifications), they found that a 10 percent increase in the minimum wage caused a 1 percent
decrease in teenage employment, and no change in the teenage unemployment rate. The study
also found a small, but statistically significant, increase in unemployment for adults aged 20–24.[44]
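The estimates above can be restated as employment elasticities. A minimal sketch of the arithmetic, using illustrative numbers rather than figures from any of the cited studies:

```python
def employment_change(pct_wage_increase, elasticity):
    """Predicted percent change in employment for a given percent
    minimum-wage increase, under a constant-elasticity approximation."""
    return elasticity * pct_wage_increase

# Brown et al.'s 1-3 percent employment decline for a 10 percent increase
# corresponds to elasticities between roughly -0.1 and -0.3:
low_end = employment_change(10.0, -0.1)    # roughly a 1 percent decline
high_end = employment_change(10.0, -0.3)   # roughly a 3 percent decline
```

On this reading, Brown et al.'s own preferred specification (a 1 percent decline for a 10 percent increase) corresponds to an elasticity of about -0.1.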

Wellington (1991) updated Brown et al.'s research with data through 1986 to provide new
estimates covering a period when the real (i.e., inflation-adjusted) value of the minimum
wage was declining, because it had not been raised since 1981. She found that a 10%
increase in the minimum wage decreased teenage employment by 0.6 percentage points, with no
effect on either the teen or young adult unemployment rates.[45]
Some research suggests that the unemployment effects of small minimum wage increases are
dominated by other factors.[5] In Florida, where voters approved an increase in 2004, a
comprehensive follow-up study found a strong economy, with employment growth above that of
previous years in Florida and above that of the U.S. as a whole.[6]
Card and Krueger
In 1992, the minimum wage in New Jersey increased from $4.25 to $5.05 per hour (an 18.8%
increase) while the adjacent state of Pennsylvania remained at $4.25. David Card and Alan Krueger
gathered information on fast food restaurants in New Jersey and eastern Pennsylvania in an
attempt to see what effect this increase had on employment within New Jersey. Basic economic
theory would have implied that relative employment should have decreased in New Jersey. Card
and Krueger surveyed employers before the April 1992 New Jersey increase, and again in
November-December 1992, asking managers for data on the full-time equivalent staff level of
their restaurants both times.[46] Based on data from the employers' responses, the authors
concluded that the increase in the minimum wage increased employment in the New Jersey
restaurants.[47]
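Card and Krueger's design is a difference-in-differences comparison: the change in New Jersey employment is measured against the change in Pennsylvania over the same period, so that shocks common to both states net out. A sketch of the estimator, with illustrative staffing figures rather than the survey's actual numbers:

```python
def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Difference-in-differences estimate: the treated group's change
    minus the control group's change, netting out shared shocks."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical mean full-time-equivalent staff per restaurant:
nj_before, nj_after = 20.4, 21.0   # New Jersey (minimum wage raised)
pa_before, pa_after = 23.3, 21.2   # Pennsylvania (unchanged control)
effect = diff_in_diff(nj_before, nj_after, pa_before, pa_after)
print(round(effect, 1))  # 2.7 FTE per restaurant: a positive estimate
```

With these numbers, New Jersey's small gain combined with Pennsylvania's decline yields a positive estimated effect, which is the shape of the result Card and Krueger reported.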
Card and Krueger expanded on this initial article in their 1995 book Myth and Measurement:
The New Economics of the Minimum Wage (ISBN 0-691-04823-1). They argued that the negative
employment effects of minimum wage laws are minimal if not non-existent. For example, they
look at the 1992 increase in New Jersey's minimum wage, the 1988 rise in California's minimum
wage, and the 1990-91 increases in the federal minimum wage. In addition to their own findings,
they reanalyzed earlier studies with updated data, generally finding that the older results of a
negative employment effect did not hold up in the larger datasets.
Critics, however, argue that their research was flawed.[48] Subsequent attempts to verify the
claims requested payroll records from employers to verify employment, and found that the
minimum wage increases were followed by decreases in employment. On the other hand, an
assessment of the data collected and analyzed by David Neumark and William Wascher did not
initially contradict the Card/Krueger results,[49] but a later, revised version found that the
same general sample set did show increased unemployment: the 18.8% wage hike resulted in
"[statistically] insignificant—although almost always negative" employment effects.[50]
Another possible explanation for why the current minimum wage laws may not affect
unemployment in the United States is that the minimum wage is set close to the equilibrium
point for low and unskilled workers. Thus in the absence of the minimum wage law unskilled
workers would be paid approximately the same amount. However, an increase above this
equilibrium point could likely bring about increased unemployment for the low and unskilled
workers.[15]
Reaction to Card and Krueger
Some leading economists, such as Greg Mankiw, Kevin M. Murphy and Nobel laureate Gary Becker,
do not accept the Card/Krueger results,[51][52] while others, such as Nobel laureates Paul
Krugman[53] and Joseph Stiglitz, accept them as correct.[54][55]
According to economists Donald Deere (Texas A&M), Kevin Murphy (University of Chicago), and
Finis Welch (Texas A&M), Card and Krueger's conclusions are contradicted by "common sense
and past research". They conclude that:[56]
Each of the four studies examines a different piece of the minimum
wage/employment relationship. Three of them consider a single state, and two of
them look at only a handful of firms in one industry. From these isolated findings
Card and Krueger paint a big picture wherein increased minimum wages do not
decrease, and may increase, employment. Our view is that there is something
wrong with this picture. Artificial increases in the price of unskilled laborers
inevitably lead to their reduced employment; the conventional wisdom remains
intact.

Nobel laureate James M. Buchanan responded to the Card and Krueger study in the Wall Street
Journal, arguing:[57]
...no self-respecting economist would claim that increases in the minimum wage
increase employment. Such a claim, if seriously advanced, becomes equivalent to a
denial that there is even minimum scientific content in economics, and that, in
consequence, economists can do nothing but write as advocates for ideological
interests. Fortunately, only a handful of economists are willing to throw over the
teaching of two centuries; we have not yet become a bevy of camp-following
whores.

Alan Krueger responded in The Washington Post:[58]


More was at stake here than the minimum wage – the methodology of public policy
analysis was also at issue. Some economists, such as James Buchanan, have simply
rejected the notion that their view of economic theory possibly could be proved
wrong by data.

Nobel laureate Paul Krugman has argued in favour of the Card and Krueger result, stating that
Card and Krueger:[59]
... found no evidence that minimum wage increases in the range that the United
States has experienced led to job losses. Their work has been attacked because it
seems to contradict Econ 101 and because it was ideologically disturbing to many.
Yet it has stood up very well to repeated challenges, and new cases confirming its
results keep coming in.

Neumark and Wascher


In a 2008 book, David Neumark and William L. Wascher described their analysis of over 300
studies on the minimum wage.[3] The studies were from several countries covering a period of
over 50 years, primarily from the 1990s onward. According to Neumark and Wascher, a large
majority of the studies show negative effects of the minimum wage; those showing positive
effects are few, questionable, and disproportionately discussed.
Based on the published studies they considered, Neumark and Wascher conclude that the
minimum wage is not good social policy. They emphasize three especially salient conclusions:
First, while acknowledging Card and Krueger, they found that studies since the early 1990s have
strongly pointed to a "reduction in employment opportunities for low-skilled and directly
affected workers." Second, they found some evidence that the minimum wage is harmful to
poverty-stricken families, and "virtually no evidence" that it helps them. Third, they found that
the minimum wage lowers adult wages of young workers who encounter it, by reducing their
ultimate level of education.
Statistical meta-analyses
Several researchers have conducted statistical meta-analyses of the employment effects of the
minimum wage. Card and Krueger analyzed 14 earlier time-series studies and concluded that
there was clear evidence of publication bias because the later studies, which had more data and
lower standard errors, did not show the expected increase in t-statistic (almost all the studies had
a t of about two, just above the level of statistical significance at the .05 level).[60] Though
this was a serious methodological indictment, opponents of the minimum wage virtually ignored
it; as Thomas C. Leonard noted, "The silence is fairly deafening."[61] More recently, T.D. Stanley has
criticized Card and Krueger's methodology, suggesting that their results could signify either
publication bias or the absence of an effect. Using a different methodology, however, he
concludes that there is statistically significant evidence of publication bias and that correction of
this bias shows no relationship between the minimum wage and unemployment.[62] In 2008,
Hristos Doucouliagos and T.D. Stanley conducted a similar meta-analysis of 64 U.S. studies on
disemployment effects and concluded that Card and Krueger's initial claim of publication bias is
still correct. Moreover, they concluded, "Once this publication selection is corrected, little or no
evidence of a negative association between minimum wages and employment remains."[63]
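The statistical intuition behind these meta-analyses: absent publication selection, t-statistics should grow as studies add data and standard errors shrink, so a pile-up of t-values just above 2 across studies of very different precision is suspicious. A toy illustration with invented numbers, not the actual fourteen studies:

```python
def mean(xs):
    return sum(xs) / len(xs)

def corr(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Precision (1 / standard error) rising sharply across later studies,
# yet reported t-statistics stuck just above 2 -- the suspicious pattern:
precision = [5, 10, 20, 40, 80]
t_stats = [2.1, 2.0, 2.2, 1.9, 2.1]
r = corr(precision, t_stats)   # near zero, not the strong positive
                               # correlation unbiased reporting implies
```

A strongly positive correlation between precision and t would be the unbiased pattern; a near-zero one, as here, is the signature of selection that Card and Krueger and later Doucouliagos and Stanley reported.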
Surveys of economists
Until the 1990s, economists generally agreed that raising the minimum wage reduced
employment. This consensus was weakened when some well-publicized empirical studies
showed the opposite, although others confirmed the original view. Today's consensus, if one
exists, is that increasing the minimum wage has, at worst, minor negative effects.[64]
According to a 1978 article in the American Economic Review, 90 percent of the economists
surveyed agreed that the minimum wage increases unemployment among low-skilled workers.[65]
A 2000 survey by Dan Fuller and Doris Geide-Stevenson reports that of a sample of 308
American Economic Association economists, 45.6% fully agreed with the statement, "a minimum
wage increases unemployment among young and unskilled workers", 27.9% agreed with
provisos, and 26.5% disagreed. The authors of this study also reweighted data from a 1990
sample to show that at that time 62.4% of academic economists agreed with the statement above,
while 19.5% agreed with provisos and 17.5% disagreed. They state that the reduction in
consensus on this question is "likely" due to the Card and Krueger research and subsequent
debate.[66]
A similar survey in 2006 by Robert Whaples polled PhD members of the American Economic
Association. Whaples found that 37.7% of respondents supported an increase in the minimum
wage, 14.3% wanted it kept at the current level, 1.3% wanted it decreased, and 46.8% wanted it
completely eliminated.[67]
Surveys of labor economists have found a sharp split on the minimum wage. Fuchs et al. (1998)
polled labor economists at the top 40 research universities in the United States on a variety of
questions in the summer of 1996. Their 65 respondents split exactly 50-50 when asked if the
minimum wage should be increased. They argued that the different policy views were not related
to views on whether raising the minimum wage would reduce teen employment (the median
economist said there would be a reduction of 1%), but on value differences such as income
redistribution.[68] Daniel B. Klein and Stewart Dompe conclude, on the basis of previous surveys,
"the average level of support for the minimum wage is somewhat higher among labor economists
than among AEA members."[69]
In 2007, Klein and Dompe conducted a non-anonymous survey of supporters of the minimum
wage who had signed the "Raise the Minimum Wage" statement published by the Economic
Policy Institute. They found that a majority signed on the grounds that it transferred income from
employers to workers, or equalized bargaining power between them in the labor market. In
addition, a majority considered disemployment to be a moderate potential drawback to the
increase they supported.[70]
Alternatives
Economists and other political commentators have proposed alternatives to the minimum wage.
They argue that these alternatives may address poverty better than a minimum wage, as they
would benefit a broader population of low-wage earners, would not cause unemployment, and
would distribute the costs widely rather than concentrating them on employers of low-wage workers.
Basic income
A basic income (or negative income tax) is a system of social security that periodically provides
each citizen with a sum of money sufficient to live on. Except for citizenship, a basic
income is entirely unconditional. There is no means test, and the richest as well as the poorest
citizens would receive it. A basic income is often proposed in the form of a citizen's dividend (a
transfer payment from the government). Proponents argue that a basic income based on a
broad tax base would be more economically efficient than a minimum wage, as the minimum
wage effectively imposes a high marginal tax on employers, causing losses in efficiency.
In 1968 James Tobin, Paul Samuelson, John Kenneth Galbraith and another 1,200 economists signed
a document calling for the US Congress to introduce in that year a system of income guarantees
and supplements.[71] Both Tobin and Samuelson have also come out against the minimum wage.[72]
In the 1972 presidential campaign, Senator George McGovern called for a 'demogrant' that was
very similar to a basic income.[73]
Winners of the Nobel Prize in Economics who fully support a basic income include Herbert Simon,
Friedrich Hayek,[74] James Meade, Robert Solow, Milton Friedman,[75] Jan Tinbergen and James
Tobin.[citation needed]

Guaranteed minimum income


A guaranteed minimum income is another proposed system of social welfare provision. It is similar to
a basic income or negative income tax system, except that it is normally conditional and subject
to a means test. Some proposals also stipulate a willingness to participate in the labor market, or a
willingness to perform community services.[citation needed]
Refundable tax credit
A refundable tax credit is a mechanism whereby the tax system can reduce the tax owed by a
household to below zero, resulting in a net payment to the taxpayer beyond their own payments
into the tax system. Examples of refundable tax credits include the earned income tax credit and
the additional child tax credit in the U.S., and working tax credits and child tax credits in the UK.
Such a system is slightly different from a negative income tax, in that the refundable tax credit is
usually only paid to households that have earned at least some income.
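The mechanics can be sketched as follows; the dollar amounts are hypothetical, not the actual EITC schedule:

```python
def net_tax(tax_owed, refundable_credit, nonrefundable_credit=0.0):
    """Net tax after credits. A non-refundable credit can only reduce the
    bill to zero; a refundable credit can push the total below zero,
    producing a net payment to the household."""
    after_nonrefundable = max(tax_owed - nonrefundable_credit, 0.0)
    return after_nonrefundable - refundable_credit

# A household owing $300 with a $1,000 refundable credit receives a
# net $700 payment (negative result = payment to the taxpayer):
print(net_tax(300.0, 1000.0))                          # -700.0
# The same $1,000 as a non-refundable credit only zeroes out the bill:
print(net_tax(300.0, 0.0, nonrefundable_credit=1000.0))  # 0.0
```

The contrast between the two calls is exactly the refundable/non-refundable distinction the paragraph above describes.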
The ability of the earned income tax credit to deliver a larger monetary benefit to poor workers at
a lower cost to society was recently documented in a report by the Congressional Budget Office.[76]
Collective bargaining
Germany,[77] Sweden and Denmark are examples of developed nations where there is no
minimum wage that is required by legislation. Instead, minimum wage standards in different
sectors are set by collective bargaining.[citation needed]
Overdraft
From Wikipedia, the free encyclopedia

"I warn you, Sir! The discourtesy of this bank is beyond all limits. One word more
and I — I withdraw my overdraft!"

Cartoon from Punch Magazine Vol. 152, June 27, 1917

An overdraft occurs when withdrawals from a bank account exceed the available balance. In this
situation a person is said to be "overdrawn".
If there is a prior agreement with the account provider for an overdraft protection plan, and the
amount overdrawn is within this authorised overdraft limit, then interest is normally charged at
the agreed rate. If the balance exceeds the agreed terms, then fees may be charged and a higher
interest rate might apply.
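This two-tier treatment can be sketched as a daily charge calculation; the limit, rates and fee below are placeholders, not any bank's actual terms:

```python
def overdraft_charges(balance, authorized_limit, agreed_rate,
                      unauthorized_rate, unauthorized_fee):
    """One day's interest and fees on a negative balance, assuming the
    tiered treatment described above (illustrative figures only)."""
    if balance >= 0:
        return 0.0
    overdrawn = -balance
    if overdrawn <= authorized_limit:
        # Within the authorized limit: interest at the agreed rate only.
        return overdrawn * agreed_rate / 365
    # Beyond the authorized limit: a higher rate plus a one-off fee.
    return overdrawn * unauthorized_rate / 365 + unauthorized_fee

charge = overdraft_charges(-500.0, 1000.0, 0.15, 0.30, 25.0)
# 500 overdrawn within a 1000 limit at 15% APR: about 0.21 per day
```

Overdrawing beyond the authorized limit with the same placeholder terms would instead incur the 25.0 fee plus interest at the higher 30% rate.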

History of the overdraft


The first known overdraft was awarded in 1728 when merchant William Hog was allowed to
take out £1,000 (almost £65,000 today, US$93,000) more than he had in his account.[1] The
overdraft was awarded by The Royal Bank of Scotland which had opened in Edinburgh the
previous year.
Reasons for overdrafts
Overdrafts occur for a variety of reasons. These may include:
• Intentional short-term loan - The account holder finds themselves short of
money and knowingly makes an insufficient-funds debit. They accept the
associated fees and cover the overdraft with their next deposit.
• Failure to maintain an accurate account register - The account holder
doesn't accurately account for activity on their account and overspends
through negligence.
• ATM overdraft - Banks or ATMs may allow cash withdrawals despite
insufficient availability of funds. The account holder may or may not be aware
of this fact at the time of the withdrawal. If the ATM is unable to
communicate with the cardholder's bank, it may automatically authorize a
withdrawal based on limits preset by the authorizing network.
• Temporary Deposit Hold - A deposit made to the account can be placed on
hold by the bank. This may be due to Regulation CC (which governs the
placement of holds on deposited checks) or due to individual bank policies.
The funds may not be immediately available and lead to overdraft fees.
• Unexpected electronic withdrawals - At some point in the past the
account holder may have authorized electronic withdrawals by a business.
This could occur in good faith of both parties if the electronic withdrawal in
question is made legally possible by terms of the contract, such as the
initiation of a recurring service following a free trial period. The debit could
also have been made as a result of a wage garnishment, an offset claim for a
taxing agency or a credit account or overdraft with another account with the
same bank, or a direct-deposit chargeback in order to recover an
overpayment.
• Merchant error - A merchant may improperly debit a customer's account
due to human error. For example, a customer may authorize a $5.00
purchase which may post to the account for $500.00. The customer has the
option to recover these funds through chargeback to the merchant.
• Chargeback to merchant - A merchant account could receive a chargeback
because of making an improper credit or debit card charge to a customer or a
customer making an unauthorized credit or debit card charge to someone
else's account in order to "pay" for goods or services from the merchant. It is
possible for the chargeback and associated fee to cause an overdraft or leave
insufficient funds to cover a subsequent withdrawal or debit from the
merchant's account that received the chargeback.
• Authorization holds - When a customer makes a purchase using their debit card
without using their PIN, the transaction is treated as a credit transaction. The
funds are placed on hold in the customer's account reducing the customer's
available balance. However the merchant doesn't receive the funds until they
process the transaction batch for the period during which the customer's
purchase was made. Banks do not hold these funds indefinitely, and so the
bank may release the hold before the merchant collects the funds thus
making these funds available again. If the customer spends these funds, then
barring an interim deposit the account will overdraw when the merchant
collects for the original purchase.
• Bank fees - The bank charges a fee unexpected to the account holder,
leaving insufficient funds for a subsequent debit from the same account.
• Playing the Float - The account holder makes a debit while insufficient
funds are present in the account believing they will be able to deposit
sufficient funds before the debit clears. While many cases of playing the float
are done with honest intentions, the time involved in checks clearing and the
difference in the processing of debits and credits are exploited by those
committing check kiting.
• Returned check deposit - The account holder deposits a check or money
order and the deposited item is returned due to non-sufficient funds, a closed
account, or being discovered to be counterfeit, stolen, altered, or forged. As a
result of the check chargeback and associated fee, an overdraft results or a
subsequent debit which was reliant on such funds causes one. This could be
due to a deposited item that is known to be bad, or the customer could be a
victim of a bad check or a counterfeit check scam. If the resulting overdraft is
too large or cannot be covered in a short period of time, the bank could sue
or even press criminal charges.
• Intentional Fraud - An ATM deposit with misrepresented funds is made or a
check or money order known to be bad is deposited (see above) by the
account holder, and enough money is debited before the fraud is discovered
to result in an overdraft once the chargeback is made. The fraud could be
perpetrated against one's own account, another person's account, or an
account set up in another person's name by an identity thief.
• Bank Error - A check debit may post for an improper amount due to human
or computer error, so an amount much larger than the maker intended may
be removed from the account. Some bank errors work to the account
holder's detriment, but others could work to their benefit.
• Victimization - The account may have been a target of identity theft. This
could occur as the result of demand-draft, ATM-card, or debit-card fraud,
skimming, check forgery, an "account takeover," or phishing. The criminal act
could cause an overdraft or cause a subsequent debit to cause one. The
money or checks from an ATM deposit could also have been stolen or the
envelope lost or stolen, in which case the victim is often denied a remedy.
• Intraday overdraft - A debit occurs in the customer’s account resulting in
an overdraft which is then covered by a credit that posts to the account
during the same business day. Whether this actually results in overdraft fees
depends on the deposit-account holder agreement of the particular bank.

United Kingdom


Overdraft protection in the UK
Banks in the UK often offer a basic overdraft facility, subject to a pre-arranged limit (known as
an authorized overdraft limit). However, whether this is offered free of interest, subject to an
average monthly balance figure or at the bank's overdraft lending rate varies from bank to bank
and may differ according to the account product held.
When a customer exceeds their authorized overdraft limit, they become overdrawn without
authorization, which often results in the customer being charged one or more fees, together with
a higher rate of lending on the amount by which they have exceeded their authorized overdraft
limit. The fees charged by banks can vary. A customer may also incur a fee if they present an
item which their issuing bank declines for reason of insufficient funds, that is, the bank elects not
to permit the customer to go into unauthorized overdraft. Again, the level and nature of such fees
varies widely between banks. Usually, the bank sends out a letter informing the customer of the
charge and requesting that the account be operated within its limits from that point onwards. In a
BBC Whistleblower programme on the practice, it was noted that the actual cost of an
unauthorised overdraft to the bank was less than two pounds.[4]
Amount of fees
No major UK bank has completely dropped unauthorized overdraft fees. Some, however, offer a
"buffer zone", where customers will not be charged fees if they are over their limit by less than a
certain amount. Other banks tend to charge fees regardless of the level of the overdraft,
which is seen by some as unfair. In response to criticism, Lloyds TSB changed its fee
structure; rather than a single monthly fee for an unauthorized overdraft, it now charges per
day. It also allows a "grace period" in which customers can pay money in before 3.30pm
(Monday to Friday) before any items are returned or any bank charges are incurred (with the
exception of standing orders, which debit at the beginning of the working day). Lloyds TSB
also allows customers who have gone into an unplanned overdraft on a Friday, for example, to
pay money in before 10am on Monday morning and have the daily fees for the weekend (Saturday
and Sunday) waived; the money paid in must, however, be cleared funds. Alliance & Leicester
formerly had a buffer zone
facility (marketed as a "last few pounds" feature of their account), but this has been withdrawn.
In general, the fee charged is between twenty-five and thirty pounds, along with an increased rate
of debit interest. The charges for cheques and Direct Debits which are refused (or "bounced")
due to insufficient funds are usually the same as or slightly less than the general overdraft fees,
and can be charged on top of them. A situation which has provoked much controversy is the
bank declining a cheque/Direct Debit, levying a fee which takes the customer overdrawn and
then charging them for going overdrawn. However, some banks, like Halifax, have a "no fees on
fees" policy whereby an account that goes overdrawn solely because of an unpaid item fee will
not be charged an additional fee.
Legal status and controversy
See also: UK default charges controversy

In 2006 the Office of Fair Trading issued a statement which concluded that credit card issuers were
levying penalty charges when customers exceeded their maximum spend limit and/or made late
payments to their accounts. In the statement, the OFT recommended that credit card issuers set
such fees at a maximum of 12 UK pounds.[2]
In the statement, the OFT opined that the fees charged by credit card issuers were analogous to
unauthorized overdraft fees charged by banks. Many customers who have incurred unauthorized
overdraft fees have used this statement as a springboard to sue their banks in order to recover the
fees. It is currently thought that the England and Wales county courts are flooded with such
claims.[3] Claimants are frequently assisted by web sites such as The Consumer Action
Group.[4] To date, many banks have not appeared in court to justify their unauthorized overdraft
charging structures, and many customers have recovered such charges in full.[5] However, there
have been cases where the courts have ruled in favor of the banks and alternatively struck out
claims against customers who have not adequately made a case against their bank.[6]
United States
Overdraft protection in the US
Overdraft protection is a financial service offered by banking institutions primarily in the United
States. Overdraft protection, sometimes marketed as a courtesy pay program, pays items
presented to a customer's account when sufficient funds are not present to cover the amount
of the withdrawal. Overdraft
protection can cover ATM withdrawals, purchases made with a debit card, electronic transfers,
and checks. In the case of non-preauthorized items such as checks, or ACH withdrawals,
overdraft protection allows for these items to be paid as opposed to being returned unpaid, or
bouncing. However, ATM withdrawals and purchases made with a debit or check card are
considered preauthorized and must be paid by the bank when presented, even if this causes an
overdraft.
Ad-hoc coverage of overdrafts
Traditionally, the manager of a bank would look at the bank's list of overdrafts each day. If the
manager saw that a favored customer had incurred an overdraft, they had the discretion to pay
the overdraft for the customer. Banks traditionally did not charge for this ad-hoc coverage.
However, it was fully discretionary, and so could not be depended on. With the advent of large-
scale interstate branch banking, traditional ad-hoc coverage has practically disappeared.
The one exception to this is so-called "force pay" lists. At the beginning of each business day,
branch managers often still get a computerized list of items that are pending rejection, only for
accounts held in their specific branch, city or state. Generally, if a customer is able to come into
the branch with cash or make a transfer to cover the amount of the item pending rejection, the
manager can "force pay" the item. In addition, if there are extenuating circumstances or the item
in question is from an account held by a regular customer, the manager may take a risk by paying
the item, but this is increasingly uncommon. Banks have a cut-off time by which this action must
take place; after that time, the item automatically switches from "pending rejection" to
"rejected," and no further action may be taken.
Overdraft lines of credit
This form of overdraft protection is a contractual relationship in which the bank promises to pay
overdrafts up to a certain dollar limit. A consumer who wants an overdraft line of credit must
complete and sign an application, after which the bank checks the consumer's credit and
approves or denies the application. Overdraft lines of credit are loans and must comply with the
Truth in Lending Act. As with linked accounts, banks typically charge a nominal fee per overdraft,
and also charge interest on the outstanding balance. Some banks charge a small monthly fee
regardless of whether the line of credit is used. This form of overdraft protection is available to
consumers who meet the creditworthiness criteria established by the bank for such accounts.
Once the line of credit is established, the available credit may be visible as part of the customer's
available balance.
Linked accounts
Also referred to as "Overdraft Transfer Protection", a checking account can be linked to another
account, such as a savings account, credit card, or line of credit. Once the link is established,
when an item is presented to the checking account that would result in an overdraft, funds are
transferred from the linked account to cover the overdraft. A nominal fee is usually charged for
each overdraft transfer, and if the linked account is a credit card or other line of credit, the
consumer may be required to pay interest under the terms of that account.
The main difference between linked accounts and an overdraft line of credit is that an overdraft
line of credit is typically only usable for overdraft protection. Separate accounts that are linked
for overdraft protection are independent accounts in their own right.
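The transfer mechanics described above can be sketched as follows; the balances and the transfer fee are hypothetical:

```python
def cover_with_linked_account(checking, savings, debit, transfer_fee):
    """Overdraft transfer protection sketch: if a debit would overdraw
    the checking account, pull the shortfall (plus a nominal transfer
    fee) from the linked savings account when funds allow.
    Returns the resulting (checking, savings) balances."""
    if debit <= checking:
        return checking - debit, savings        # no transfer needed
    shortfall = debit - checking
    if savings >= shortfall + transfer_fee:
        savings -= shortfall + transfer_fee
        return 0.0, savings                     # overdraft covered
    return checking - debit, savings            # overdraft stands

print(cover_with_linked_account(40.0, 200.0, 100.0, 10.0))  # (0.0, 130.0)
```

In the printed case, a 100.0 debit against a 40.0 checking balance pulls the 60.0 shortfall plus a 10.0 fee from savings, leaving the checking account at zero rather than overdrawn.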
Bounce protection plans
A more recent product being offered by some banks is called "bounce protection."
Smaller banks offer plans administered by third party companies which help the banks gain
additional fee income.[7] Larger banks tend not to offer bounce protection plans, but instead
process overdrafts as disclosed in their account terms and conditions.
In either case, the bank may choose to cover overdrawn items at their discretion and charge an
overdraft fee, the amount of which may or may not be disclosed. As opposed to traditional ad-
hoc coverage, this decision to pay or not pay overdrawn items is automated and based on
objective criteria such as the customer's average balance, the overdraft history of the account, the
number of accounts the customer holds with the bank, and the length of time those accounts have
been open.[8] However, the bank does not promise to pay the overdraft even if the automated
criteria are met.
Bounce protection plans have some superficial similarities to overdraft lines of credit and ad-hoc
coverage of overdrafts, but tend to operate under different rules. Like an overdraft line of credit,
the balance of the bounce protection plan may be viewable as part of the customer's available
balance, yet the bank reserves the right to refuse payment of an overdrawn item, as with
traditional ad-hoc coverage. Banks typically charge a one-time fee for each overdraft paid. A
bank may also charge a recurring daily fee for each day during which the account has a negative
balance.
Critics argue that because funds are advanced to a consumer and repayment is expected,
bounce protection is a type of loan.[9] Because banks are not contractually obligated to cover the
overdrafts, "bounce protection" is not regulated by the Truth in Lending Act, which prohibits
certain deceptive advertisements and requires disclosure of the terms of loans. Historically,
bounce protection could be added to a consumer's account without his or her permission or
knowledge.
In May 2005, Regulation DD of the Truth in Savings Act was amended to require that banks
offering "bounce protection" plans provide certain disclosures to their customers. These
amendments include requirements to disclose the types of transaction that may cause bounce
protection to be triggered, the fees associated with bounce protection, separate statement
categories to enumerate the number of fees charged, and restrictions on the marketing of bounce
protection programs to deter misleading advertisements. These disclosures are already provided
by larger banks which process overdrafts according to their terms and conditions.
[edit] Industry statistics
U.S. banks are projected to collect over $38.5 billion in overdraft fees for 2009, nearly double
the amount collected in 2000.[10]
[edit] Transaction processing order
An area of controversy with regards to overdraft fees is the order in which a bank posts
transactions to a customer's account. This is controversial because largest to smallest processing
tends to maximize overdraft occurrences on a customer's account. This situation can arise when
the account holder makes a number of small debits for which there are sufficient funds in the
account at the time of purchase. Later, the account holder makes a large debit that overdraws the
account (either accidentally or intentionally). If all of the items present for payment to the
account on the same day, and the bank processes the largest transaction first, multiple overdrafts
can result.
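The effect of posting order on fees can be sketched with a small, hypothetical simulation; the starting balance, debit amounts, and the assumption of one fee per overdrawn item are invented for illustration:

```python
# Hypothetical illustration of posting order: the balance and debit amounts
# are invented, and we assume each overdrawn item incurs one fee.

def count_overdrafts(balance, debits):
    """Post debits in the given order and count items that overdraw the account."""
    overdrafts = 0
    for amount in debits:
        balance -= amount
        if balance < 0:
            overdrafts += 1  # each overdrawn item typically triggers its own fee
    return overdrafts

starting_balance = 100.00
debits = [5.00, 10.00, 15.00, 120.00]  # three small purchases, then one large one

smallest_first = count_overdrafts(starting_balance, sorted(debits))
largest_first = count_overdrafts(starting_balance, sorted(debits, reverse=True))

print(smallest_first)  # 1: only the large debit overdraws
print(largest_first)   # 4: the large item overdraws, and every later small item does too
```

With the same four debits, largest-first posting turns one overdrawn item into four.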
The "biggest check first" policy is common among large U.S. banks.[11] Banks argue that this is
done to prevent a customer's most important transactions (such as a rent or mortgage check, or
utility payment) from being returned unpaid, despite some such transactions being guaranteed.
Consumers have attempted to litigate to prevent this practice, arguing that banks use "biggest
check first" to manipulate the order of transactions to artificially trigger more overdraft fees to
collect. Banks in the United States are mostly regulated by the Office of the Comptroller of the
Currency, a Federal agency, which has formally approved of the practice; the practice has
recently been challenged, however, under numerous individual state deceptive practice laws.[12]
Bank deposit agreements usually provide that the bank may clear transactions in any order, at the
bank's discretion.[13]
Global Depository Receipt
From Wikipedia, the free encyclopedia
A Global Depository Receipt or Global Depositary Receipt (GDR) is a certificate issued by a
depository bank, which purchases shares of foreign companies and deposits them in an account.
GDRs represent ownership of an underlying number of shares.
Global Depository Receipts facilitate trade of shares, and are commonly used to invest in
companies from developing or emerging markets.
Prices of GDRs are often close to the values of the related shares, but they are traded and
settled independently of the underlying shares.
Several international banks issue GDRs, such as JPMorgan Chase, Citigroup, Deutsche Bank, Bank
of New York. GDRs are often listed in the Frankfurt Stock Exchange, Luxembourg Stock Exchange
and in the London Stock Exchange, where they are traded on the International Order Book (IOB).
Normally 1 GDR = 10 Shares, but not always.
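As a hypothetical sketch of how the ratio relates a GDR to its underlying shares: the 10:1 ratio, local share price, and exchange rate below are invented, and real GDR market prices trade independently of this indicative value.

```python
# Hypothetical sketch only: all numbers are invented, and actual GDR market
# prices are set by trading, not by this conversion.

def indicative_gdr_value(local_share_price, local_per_usd, shares_per_gdr=10):
    """Value of the underlying share bundle, converted into US dollars."""
    return shares_per_gdr * local_share_price / local_per_usd

# A share at 150 units of local currency, with 75 units per US dollar:
print(indicative_gdr_value(150.0, 75.0))  # 20.0 USD per GDR
```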

Foreign direct investment
From Wikipedia, the free encyclopedia

Foreign direct investment (FDI) refers to long-term participation by country A in country B.
It usually involves participation in management, joint ventures, and transfer of technology and expertise.
There are two types of FDI: inward foreign direct investment and outward foreign direct
investment, resulting in a net FDI inflow (positive or negative).

[edit] History
Foreign direct investment (FDI) is a measure of foreign ownership of productive assets, such as
factories, mines and land. Increasing foreign investment can be used as one measure of growing
economic globalization. Figure below shows net inflows of foreign direct investment as a
percentage of gross domestic product (GDP). The largest flows of foreign investment occur
between the industrialized countries (North America, Western Europe and Japan). But flows to non-
industrialized countries are increasing sharply.
US International Direct Investment Flows:[1]

Period     FDI Outflows      FDI Inflows       Net
1960-69    $ 42.18 bn        $ 5.13 bn         + $ 37.04 bn
1970-79    $ 122.72 bn       $ 40.79 bn        + $ 81.93 bn
1980-89    $ 206.27 bn       $ 329.23 bn       - $ 122.96 bn
1990-99    $ 950.47 bn       $ 907.34 bn       + $ 43.13 bn
2000-07    $ 1,629.05 bn     $ 1,421.31 bn     + $ 207.74 bn
Total      $ 2,950.69 bn     $ 2,703.81 bn     + $ 246.88 bn


[edit] Types
A foreign direct investor may be classified in any sector of the economy and could be any one of
the following:[citation needed]
• an individual;
• a group of related individuals;
• an incorporated or unincorporated entity;
• a public company or private company;
• a group of related enterprises;
• a government body;
• an estate, trust or other societal organisation; or
• any combination of the above.

[edit] Methods
The foreign direct investor may acquire 10% or more of the voting power of an enterprise in an
economy through any of the following methods:
• by incorporating a wholly owned subsidiary or company
• by acquiring shares in an associated enterprise
• through a merger or an acquisition of an unrelated enterprise
• participating in an equity joint venture with another investor or enterprise
Foreign direct investment incentives may take the following forms:[citation needed]
• low corporate tax and income tax rates
• tax holidays
• other types of tax concessions
• preferential tariffs
• special economic zones
• EPZ - Export Processing Zones
• Bonded Warehouses
• Maquiladoras
• investment financial subsidies
• soft loan or loan guarantees
• free land or land subsidies
• relocation & expatriation subsidies
• job training & employment subsidies
• infrastructure subsidies
• R&D support
• derogation from regulations (usually for very large projects)

[edit] Debates about the benefits of FDI for low-income countries
Some countries have put restrictions on FDI in certain sectors. India, with its restriction on FDI
in the retail sector, is an example.[2] In a country like India, the “walmartization” of the country
could have significant negative effects on the overall economy by reducing the number of people
employed in the retail sector (currently the second largest employment sector nationally) and
depressing the income of people involved in the agriculture sector (currently the largest
employment sector nationally).[3]
[edit] Foreign direct investment in the United States
"Invest in America" is an initiative of the U.S. Department of Commerce aimed at promoting
foreign investment in the country.[4]
The "Invest in America" policy is focused on:
• Facilitating investor queries.
• Carrying out activities to aid foreign investors.
• Providing support at both the local and state levels.
• Addressing concerns related to the business environment by serving as an
ombudsman in Washington, D.C. for the international venture community.
• Offering policy guidelines and helping to get access to the legal system.
The United States is the world’s largest recipient of FDI. More than $325.3 billion in FDI flowed
into the United States in 2008, which is a 37 percent increase from 2007. The $2.1 trillion stock
of FDI in the United States at the end of 2008 is the equivalent of approximately 16 percent of
U.S. gross domestic product (GDP).
Benefits of FDI in America: In the last 6 years, over 4,000 new projects and 630,000 new jobs
have been created by foreign companies, resulting in close to $314 billion in investment.[citation needed]
US affiliates of foreign companies have a history of paying higher wages than US
corporations.[citation needed] Foreign companies have in the past supported an annual US payroll of
$364 billion with an average annual compensation of $68,000 per employee.[citation needed]
FDI has also increased US exports through the use of multinational distribution networks. FDI
has resulted in 30% of jobs for Americans in the manufacturing sector, which accounts for 12%
of all manufacturing jobs in the US.[5]
Affiliates of foreign corporations spent more than $34 billion on research and development in 2006
and continue to support many national projects. Inward FDI has led to higher productivity
through increased capital, which in turn has led to high living standards.[6]
[edit] Foreign direct investment in China
FDI in China has been one of the major successes of the past 3 decades.[citation needed] Starting from a
baseline of less than $19 billion just 20 years ago, FDI in China has grown to over $300 billion in
the first 10 years. China has continued its massive growth and is the leader among all developing
nations in terms of FDI.[citation needed] Even though there was a slight dip in FDI in 2009 as a result of
the global slowdown, 2010 has again seen investments increase.[citation needed] The Chinese economy
is expected to maintain this momentum, with growth of around 10% this year.[7]
Chengdu is the centre of technology, science and commerce in Sichuan province, in the
southwest of China. It is also a hub of communication and transportation between mainland
China and foreign investors. These conditions have made Chengdu a natural choice for many
overseas investors and companies developing their international business in China. The
Chengdu Hi-Tech Industrial Development Zone (CDHT) is ranked fourth among state-owned
development zones in China. With over 20 years of experience helping and supporting foreign
companies, Chengdu Hi-Tech has built considerable prestige and credibility.
Many very large international companies, such as Intel, Microsoft, Motorola, Siemens, Nokia,
Ericsson, Corning, Sony, Toyota, NEC, Carrefour and UPS, operate inside the park. In 2008,
CDHT achieved an added value of 78.762 billion RMB and an income from trade and
technology of 220.7 billion RMB. Its outsourcing services have won the trust of its clients,
and it continues to seek more advanced technologies and methods to help foreign industries
and companies develop in China. At a time when China has become one of the hot spots for
foreign direct investment, CDHT aims to support more foreign companies in realizing their
ambitions in China.

The term foreign institutional investment denotes all those investors or investment companies that are not located within the territory of
the country in which they are investing. These are, in effect, outsiders in the financial markets of that country. Foreign
institutional investment is a common term in the financial sector of India.

The types of institutions involved in foreign institutional investment are as follows:

• Mutual Funds
• Hedge Funds
• Pension Funds
• Insurance Companies

Rapidly growing economies like India are becoming favorite investment destinations for foreign institutional
investors. These markets have the potential to grow in the near future, which is the prime reason behind the growing interest of foreign
investors. The promise of rapid growth of the invested funds tempts investors, and so they are coming in large numbers to these
countries. The money that comes in through foreign institutional investment is referred to as 'hot money', because it can be
withdrawn from the market at any time by these investors.

The foreign investment market was not well developed in the past. But once globalization took hold, the diversified
global markets became integrated. Because of this, the investment sector became very strong and at the same time opened
national financial markets to foreigners.

At the same time, developing countries understood the value of foreign investment and allowed foreign direct investment and foreign
institutional investment in their financial markets. While foreign direct investments are long-term investments, foreign
institutional investments are unpredictable. The Securities and Exchange Board of India (SEBI) oversees foreign institutional investments in
India and has imposed several rules and regulations on these investments.

Some important facts about foreign institutional investment:

• The number of registered foreign institutional investors reached 1,042 by June 2007, up from 813 in 2006
• US $6 billion has been invested in equities by these investors
• The total amount of these investments in the Indian financial market up to June 2007 has been estimated at US $53.06 billion
• Foreign institutional investors prefer the construction sector, the banking sector and IT companies for their investments
• The most active foreign institutional investors in India are HSBC, Merrill Lynch, Citigroup and CLSA

Price skimming
From Wikipedia, the free encyclopedia

Price skimming is a pricing strategy in which a marketer sets a relatively high price for a product or
service at first, then lowers the price over time. It is a temporal version of price discrimination/yield
management. It allows the firm to recover its sunk costs quickly before competition steps in and
lowers the market price.
Price skimming is sometimes referred to as riding down the demand curve. The objective of a
price skimming strategy is to capture the consumer surplus. If this is done successfully, then
theoretically no customer will pay less for the product than the maximum they are willing to pay.
In practice, it is almost impossible for a firm to capture all of this surplus.
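A toy simulation, with an invented set of four buyers (one at each willingness-to-pay level) and hypothetical prices, illustrates why a declining price schedule captures more of the consumer surplus than any single price:

```python
# Toy model: four invented buyers, one at each willingness-to-pay level.
# We assume each buyer purchases once, at the first price they accept.

willingness_to_pay = [600, 500, 400, 300]

def revenue_single_price(price, wtp):
    """Everyone whose willingness to pay meets the price buys once."""
    return price * sum(1 for w in wtp if w >= price)

def revenue_skimming(price_schedule, wtp):
    """Prices fall over time; each buyer pays the first price they accept."""
    remaining = sorted(wtp, reverse=True)
    total = 0
    for price in price_schedule:
        while remaining and remaining[0] >= price:
            total += price
            remaining.pop(0)
    return total

print(revenue_single_price(300, willingness_to_pay))              # 1200
print(revenue_skimming([600, 500, 400, 300], willingness_to_pay)) # 1800
```

In this sketch, skimming down from $600 to $300 charges each buyer close to their maximum, while the best single price leaves the difference as consumer surplus.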
[edit] Limitations of Price Skimming
There are several potential problems with this strategy.
• It is effective only when the firm is facing an inelastic demand curve. If the long
run demand schedule is elastic (as in the diagram to the right), market
equilibrium will be achieved by quantity changes rather than price changes.
Penetration pricing is a more suitable strategy in this case. Price changes by any
one firm will be matched by other firms resulting in a rapid growth in industry
volume. Dominant market share will typically be obtained by a low cost
producer that pursues a penetration strategy.
• A price skimmer must be careful with the law. Price discrimination is illegal in
many jurisdictions, but yield management is not. Price skimming can be
considered either a form of price discrimination or a form of yield
management. Price discrimination uses market characteristics (such as price
elasticity) to adjust prices, whereas yield management uses product
characteristics. Marketers see this legal distinction as quaint since in almost
all cases market characteristics correlate highly with product characteristics.
If using a skimming strategy, a marketer must speak and think in terms of
product characteristics in order to stay on the right side of the law.
• The inventory turn rate can be very low for skimmed products. This could cause
problems for the manufacturer's distribution chain. It may be necessary to
give retailers higher margins to convince them to handle the product
enthusiastically.
• Skimming encourages the entry of competitors. When other firms see the high
margins available in the industry, they will quickly enter.
• Skimming results in a slow rate of product diffusion and adoption. This results in
a high level of untapped demand. This gives competitors time to either
imitate the product or leapfrog it with a new innovation. If competitors do
this, the window of opportunity will have been lost.
• The manufacturer could develop negative publicity if they lower the price too
fast and without significant product changes. Some early purchasers will feel
they have been ripped-off. They will feel it would have been better to wait
and purchase the product at a much lower price. This negative sentiment will
be transferred to the brand and the company as a whole.
• High margins may make the firm inefficient. There will be less incentive to
keep costs under control. Inefficient practices will become established
making it difficult to compete on value or price.

[edit] Examples of price skimming


• With certain high-end electronics, such as the Apple iPhone and Sony
PlayStation 3, price skimming was used. For instance, the PlayStation 3 was
originally sold at $599, but has since been gradually reduced to $299.

Penetration pricing is a strategy employed by businesses introducing new goods or
services into the marketplace. With this policy, the initial price of the good or service is
set relatively low in hopes of "penetrating" into the marketplace quickly and securing
significant market share. "This pricing approach," wrote Ronald W. Hilton in
Managerial Accounting, "often is used for products that are of good quality, but do not
stand out as vastly better than competing products."

Writing in Basic Marketing, E. Jerome McCarthy and William Perreault Jr. observed
that "a penetration pricing policy tries to sell the whole market at one low price. Such an
approach might be wise when the 'elite' market—those willing to pay a high price—is
small. This is the case when the whole demand curve [for the product] is fairly elastic. A
penetration policy is even more attractive if selling larger quantities results in lower
costs because of economies of scale. Penetration pricing may be wise if the firm expects
strong competition very soon after introduction. A low penetration price may be called a
'stay out' price. It discourages competitors from entering the market." Once the product
has secured a desired market share, its producers can then review business conditions
and decide whether to gradually increase the price.

Penetration pricing, however, is not the same as introductory price dealing, in which
marketers attach temporary low prices to new products when they first hit the market.
"These temporary price cuts should not be confused with low penetration prices," wrote
McCarthy and Perreault Jr. "The plan [with introductory price dealing] is to raise prices
as soon as the introductory offer is over."

SKIMMING VERSUS PENETRATION


Some manufacturers of new products, however, take a decidedly different tack when
introducing their goods to the marketplace, choosing to engage in skimming pricing,
a strategy wherein the initial price for the product is set quite high for a relatively short
time after introduction. Even though sales will likely be modest with skimming, the
profit margin is great. This pricing approach is most often used for high-prestige or
otherwise unique products with significant cachet. Once the product's appeal broadens,
the price is then reduced to appeal to a greater range of consumers. "The decision
between skimming and penetration pricing," said Hilton, "depends on the type of
product and involves trade-offs of price versus volume. Skimming pricing results in
much slower acceptance of a new product, but higher unit profits. Penetration pricing
results in greater initial sales volume, but lower unit profits."

Dividend growth model


An approach that assumes dividends grow at a constant rate in perpetuity. The value of the stock equals
next year's dividends divided by the difference between the required rate of return and the assumed
constant growth rate in dividends.
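The model above can be written as price = D1 / (r − g), where D1 is next year's dividend, r the required rate of return, and g the assumed constant growth rate. A minimal sketch, with hypothetical numbers:

```python
# The constant-growth (Gordon) dividend model described above:
# price = next year's dividend / (required return - growth rate).
# The dividend, return, and growth figures below are hypothetical.

def dividend_growth_price(next_dividend, required_return, growth_rate):
    if required_return <= growth_rate:
        raise ValueError("required return must exceed the growth rate")
    return next_dividend / (required_return - growth_rate)

# A stock expected to pay a $2.00 dividend next year, with a 10% required
# return and 4% assumed growth, is valued at 2.00 / (0.10 - 0.04):
print(round(dividend_growth_price(2.00, 0.10, 0.04), 2))  # 33.33
```

Note the model is only defined when the required return exceeds the growth rate; as g approaches r, the implied price grows without bound.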

Lockout (industry)
From Wikipedia, the free encyclopedia
A lockout is a work stoppage in which an employer prevents employees from working. This is
different from a strike, in which employees refuse to work.

[edit] Causes
A lockout may happen for several reasons. When only part of a trade union votes to strike, the
purpose of a lockout is to put pressure on a union by reducing the number of members who are
able to work. For example, if the anticipated strike severely hampers the work of non-striking
workers, the employer may declare a lockout until the workers end the strike.
Another case in which an employer may impose a lockout is to avoid slowdowns or intermittent
work-stoppages.
Other times, particularly in the United States, a lockout occurs when union membership rejects
the company's final offer at negotiations and offers to return to work under the same conditions
of employment as existed under the now-expired contract. In such a case, the lockout is designed
to pressure the workers into accepting the terms of the company's last offer.
[edit] Lock-in
The term lock-in refers to the practice of physically preventing workers from leaving a
workplace. In most jurisdictions this is illegal but is occasionally reported, especially in some
developing countries.[citation needed]
More recently, lock-ins have been carried out by employees against management, which have
been labelled 'bossnapping' by the mainstream media. In France during March 2009, 3M's national
manager was locked in his office for 24 hours by employees in a dispute over redundancies.[1][2][3]
The following month, employees of a call centre managed by Synovate in Auckland locked the
front doors of the office, in response to management locking them out.[4] Such practices bear
mild resemblance to the gherao in India.
[edit] Ireland

Cartoon showing the depth of ill feeling caused by the Dublin Lockout.

The Dublin Lockout (Irish: Frithdhúnadh Mór Bhaile-Átha-Cliath) was a major industrial dispute
between approximately 20,000 workers and 300 employers which took place in Ireland's capital
city of Dublin. The dispute lasted from 26 August 1913 to 18 January 1914, and is often viewed
as the most severe and significant industrial dispute in Irish history. Central to the dispute was the
workers' right to unionize.
[edit] United States
In the United States, under Federal labor law, an employer may hire only temporary replacements
during a lockout. In a strike, unless it is an unfair labor practice (ULP) strike, an employer may
legally hire permanent replacements. Also, in many U.S. states, employees who are locked-out
are eligible to receive unemployment benefits, but are not eligible for such benefits during a strike.
[citation needed]

For the above reasons, many American employers have historically been reluctant to impose
lockouts, instead attempting to provoke a strike. However, as American unions have increasingly
begun to resort to slowdowns rather than strikes, lockouts have come "back in fashion" for many
employers, and even as incidents of strikes are on the decline, incidents of lockouts are on the rise
in the U.S.[citation needed]
Recent notable lockout incidents have been reported in professional sports, notably involving the
National Basketball Association in the 1998–99 season and the National Hockey League in the 1994–
95 and 2004–05 seasons.
Layoff
From Wikipedia, the free encyclopedia

Layoff is the temporary suspension or permanent termination of employment of an employee or
(more commonly) a group of employees for business reasons, such as the decision that certain
positions are no longer necessary or a business slow-down or interruption in work. Originally the
term "layoff" referred exclusively to a temporary interruption in work, as when factory work
cyclically falls off. However, in recent times the term can also refer to the permanent elimination
of a position.
Downsizing is the ‘conscious use of permanent personnel reductions in an attempt to improve
efficiency and/or effectiveness’ (Budros 1999, p. 70). Since the 1980s, downsizing has gained
strategic legitimacy. Indeed, recent research on downsizing in the US (Baumol et al. 2003, see
also the American Management Association annual surveys since 1990), UK (Sahdev et al.
1999; Chorely 2002; Mason 2002; Rogers 2002), and Japan (Mroczkowski and Hanaoka 1997;
Ahmakjian and Robinson 2001) suggests that downsizing is being regarded by management as
one of the preferred routes to turning around declining organisations, cutting cost and improving
organisational performance (Mellahi and Wilkinson 2004) most often as a cost-cutting measure.

[edit] Etymology
Euphemisms are often used to "soften the blow" in the process of firing and being fired,
(Wilkinson 2005, Redman and Wilkinson, 2006) including "downsize", "excess", "rightsize",
"delayering", "smartsize", "redeployment", "workforce reduction", "workforce optimization",
"simplification", "force shaping", "recussion", and "reduction in force" (also called a "RIF",
especially in the government employment sector). "Mass layoff" implies laying off a large
number of workers. "Attrition" implies that positions will be eliminated as workers quit or retire.
"Early retirement" means workers may quit now yet still remain eligible for their retirement
benefits later. While "redundancy" is a specific legal term in UK labour law, it may be perceived as
obfuscation. Firings imply misconduct or failure while lay-offs imply economic forces beyond
one's control.
[edit] Unemployment compensation
The method of separation may have an effect on a former employee's ability to collect whatever
form of unemployment compensation might be available in their jurisdiction. In many U.S. states,
workers who are laid off can file an unemployment claim and receive compensation. Depending
on local or state laws, workers who leave voluntarily are generally ineligible to collect
unemployment benefits, as are those who are fired for gross misconduct. Also, lay-offs due to a
firm's moving production overseas may entitle one to increased re-training benefits.
Certain countries (e.g. France and Germany) distinguish between leaving the company of one's
free will, in which case the person isn't entitled to unemployment benefits, and leaving the
company voluntarily as part of a RIF, in which case the person is entitled to them. A RIF
reduces the number of positions, rather than laying off specific people, and is usually
accompanied by internal redeployment. A person might leave even if their job isn't reduced,
unless the employer has strong objections. In this situation, it's more beneficial for the state to
facilitate the departure of the more professionally active people, since they are less likely to
remain jobless. Often they find new jobs while still being paid by their old companies, costing
nothing to the social security system in the end.
There have also been increasing concerns about the organisational effectiveness of the post-
downsized ‘anorexic organisation’. The benefits, which organisations claim to be seeking from
downsizing, centre on savings in labour costs, speedier decision making, better communication,
reduced product development time, enhanced involvement of employees and greater
responsiveness to customers (De Meuse et al. 1997, p. 168). However, some writers draw
attention to the ‘obsessive’ pursuit of downsizing to the point of self-starvation marked by
excessive cost cutting, organ failure and an extreme pathological fear of becoming inefficient.
Hence ‘trimming’ and ‘tightening belts’ are the order of the day (Tyler and Wilkinson 2007).
[edit] Derivative terms
Downsizing has come to mean much more than job losses, as the word downsize may now be
applied to almost everything. People describe downsizing their cars, houses and nearly anything
else that can be measured or valued.
This has also spawned the opposite term upsize, which means to grow, expand or purchase
something larger.

Closure (business)
Closure is the term used to refer to the actions required when it is no longer necessary or
possible for a business or other organization to continue to operate. Closure may be the result of a
bankruptcy, where the organization lacks sufficient funds to continue operations, as a result of the
proprietor of the business dying, as a result of a business being purchased by another
organization (or a competitor) and shut down as superfluous, or because it is the non-surviving
entity in a corporate merger. A closure may occur because the purpose for which the organization
was created is no longer necessary.
While a closure is typically of a business or a non-profit organization, any entity which is created by
human beings can be subject to a closure, from a single church to a whole religion, up to and
including an entire country if, for some reason, it ceases to exist.
Closures are of two types, voluntary or involuntary. Voluntary closures of organizations are
much rarer than involuntary ones since, in the absence of some change making operations
impossible or unnecessary, most organizations will continue to operate until something happens
that forces a closure.
The most common form of voluntary closure would be when a group of people decide to start
some organization such as a social club, a band, or a non-profit organization, then at some point
those involved decide to quit. If the organization has no outstanding debts or pending operations
to finish, closure may consist of nothing more than the informal organization ceasing to exist.
This is referred to as the organizers walking away from the organization.
If an organization has debts that cannot be paid, it may be necessary to perform liquidation of its
assets. If there is anything left after the assets are converted to cash, in the case of a for-profit
organization, the remainder is distributed to the stockholders; in the case of a non-profit, by law
any remaining assets must be distributed to another non-profit.
If an organization has more debts than assets, it may have to declare bankruptcy. If the
organization has viability, it reorganizes itself as a result of the bankruptcy and continues
operations. If it is not viable for the business to continue operating, then a closure occurs through
a bankruptcy liquidation: its assets are liquidated, the creditors are paid from whatever assets
could be liquidated, and the business ceases operations.
Possibly the largest "closure" in history was the dissolution of the Soviet Union into the
constituent countries that composed it. In comparison, the end of East Germany can be considered a merger
rather than a closure as West Germany assumed all of the assets and liabilities of East Germany.
The end of the Soviet Union was the equivalent of a closure through a bankruptcy liquidation,
because while Russia assumed most of the assets and responsibilities of the former Soviet Union,
it did not assume all of them. There have been issues over who is responsible for unpaid parking
tickets accumulated by motor vehicles operated on behalf of diplomatic missions operated by the
former Soviet Union in other countries, as Russia claims it is not responsible for them.
Several major business closures include the bankruptcy of the Penn Central railroad, the Enron
scandals, and MCI Worldcom's bankruptcy and eventual merger into Verizon.

Two-factor theory
From Wikipedia, the free encyclopedia

For Schachter's two factor theory of emotion, see Two factor theory of emotion.


The two-factor theory (also known as Herzberg's motivation-hygiene theory) states that there
are certain factors in the workplace that cause job satisfaction, while a separate set of factors cause
dissatisfaction. It was developed by Frederick Herzberg, a psychologist, who theorized that job
satisfaction and job dissatisfaction act independently of each other.[1]

Two-factor theory fundamentals
Attitudes and their connection with industrial mental health are related to Maslow's theory of
motivation. Herzberg's findings have had a considerable theoretical, as well as practical, influence on
attitudes toward administration[2]. According to Herzberg, individuals are not content with the
satisfaction of lower-order needs at work, for example, those associated with minimum salary
levels or safe and pleasant working conditions. Rather, individuals look for the gratification of
higher-level psychological needs having to do with achievement, recognition, responsibility,
advancement, and the nature of the work itself. So far, this appears to parallel Maslow's theory of
a need hierarchy. However, Herzberg added a new dimension to this theory by proposing a two-
factor model of motivation, based on the notion that the presence of one set of job characteristics
or incentives leads to worker satisfaction at work, while another and separate set of job
characteristics leads to dissatisfaction at work. Thus, satisfaction and dissatisfaction are not on a
continuum with one increasing as the other diminishes, but are independent phenomena. This
theory suggests that to improve job attitudes and productivity, administrators must recognize and
attend to both sets of characteristics and not assume that an increase in satisfaction leads to
decrease in unpleasurable dissatisfaction.
The two-factor, or motivation-hygiene theory, developed from data collected by Herzberg from
interviews with a large number of engineers and accountants in the Pittsburgh area. From
analyzing these interviews, he found that job characteristics related to what an individual does —
that is, to the nature of the work he performs — apparently have the capacity to gratify such
needs as achievement, competency, status, personal worth, and self-realization, thus making him
happy and satisfied. However, the absence of such gratifying job characteristics does not appear
to lead to unhappiness and dissatisfaction. Instead, dissatisfaction results from unfavorable
assessments of such job-related factors as company policies, supervision, technical problems,
salary, interpersonal relations on the job, and working conditions. Thus, if management wishes to
increase satisfaction on the job, it should be concerned with the nature of the work itself — the
opportunities it presents for gaining status, assuming responsibility, and for achieving self-
realization. If, on the other hand, management wishes to reduce dissatisfaction, then it must focus
on the job environment — policies, procedures, supervision, and working conditions[1]. If
management is equally concerned with both (as is usually the case), then managers must give
attention to both sets of job factors.
The theory was based around interviews with 203 American accountants and engineers in
Pittsburgh, chosen because of their professions' growing importance in the business world. The
subjects were asked to relate times when they felt exceptionally good or bad about their present
job or any previous job, and to provide reasons, and a description of the sequence of events
giving rise to that positive or negative feeling.
Here is the description of this interview analysis:
Briefly, we asked our respondents to describe periods in their lives when they were exceedingly
happy and unhappy with their jobs. Each respondent gave as many "sequences of events" as he
could that met certain criteria—including a marked change in feeling, a beginning and an end,
and contained some substantive description other than feelings and interpretations…
The proposed hypothesis appears verified. The factors on the right that led to satisfaction
(achievement, intrinsic interest in the work, responsibility, and advancement) are mostly
unipolar; that is, they contribute very little to job dissatisfaction. Conversely, the dissatisfiers
(company policy and administrative practices, supervision, interpersonal relationships, working
conditions, and salary) contribute very little to job satisfaction[3].
Two-factor theory distinguishes between:
• Motivators (e.g., challenging work, recognition, responsibility) that give
positive satisfaction, arising from intrinsic conditions of the job itself, such as
recognition, achievement, or personal growth[4], and
• Hygiene factors (e.g. status, job security, salary and fringe benefits) that do not
give positive satisfaction, though dissatisfaction results from their absence.
These are extrinsic to the work itself, and include aspects such as company
policies, supervisory practices, or wages/salary[4].
Essentially, hygiene factors are needed to ensure that an employee is not dissatisfied, while
motivation factors are needed to motivate an employee to higher performance. Herzberg also
further classified our actions and how and why we do them: for example, if you perform a
work-related action because you have to, then that is classed as movement, but if you perform a
work-related action because you want to, then that is classed as motivation.
Unlike Maslow, who offered little data to support his ideas, Herzberg and others have presented
considerable empirical evidence to confirm the motivation-hygiene theory, although their work
has been criticized on methodological grounds.
Validity and criticisms
In 1968 Herzberg stated that his two-factor theory study had already been replicated 16 times in
a wide variety of populations, including some in Communist countries, and corroborated with
studies using different procedures that agreed with his original findings regarding intrinsic
employee motivation, making it one of the most widely replicated studies on job attitudes.
While the Motivator-Hygiene concept is still well regarded, satisfaction and dissatisfaction are
generally no longer considered to exist on separate scales. The separation of satisfaction and
dissatisfaction has been shown to be an artifact of the Critical Incident Technique (CIT) used by
Herzberg to record events[5]. Furthermore, it has been noted that the theory does not allow for
individual differences, such as particular personality traits, which would affect individuals'
unique responses to motivating or hygiene factors[4].
A number of behavioral scientists have pointed to inadequacies in the need hierarchy and
motivation-hygiene theories. The most basic is the criticism that both of these theories contain
the relatively explicit assumption that happy and satisfied workers produce more. Another
problem is that these and other statistical theories are concerned with explaining "average"
behavior, ignoring individual differences in how needs are satisfied. For example, if playing a
better game of golf is the means an individual chooses to satisfy his need for recognition, then
he will find ways to play and think about golf more often, perhaps resulting in an accompanying
lower output on the job. Alternatively, in his pursuit of status he might take a balanced view and
strive to pursue several behavioral paths in an effort to achieve a combination of personal status
objectives.
In other words, this individual's expectation or estimated probability that a given behavior will
bring a valued outcome determines his choice of means and the effort he will devote to these
means. In effect, this diagram of expectancy depicts an employee asking himself the question
posed by one investigator, "How much payoff is there for me toward attaining a personal goal
while expending so much effort toward the achievement of an assigned organizational
objective?" [6] The Expectancy theory by Victor Vroom also provides a framework for motivation
based on expectations.
This approach to the study and understanding of motivation would appear to have certain
conceptual advantages over other theories: First, unlike Maslow's and Herzberg's theories, it is
capable of handling individual differences. Second, its focus is toward the present and the future,
in contrast to drive theory, which emphasizes past learning. Third, it specifically relates
behavior to a goal and thus eliminates the problem of assumed relationships, such as between
motivation and performance. Fourth, it relates motivation to ability: Performance =
Motivation × Ability.
That said, a study by the Gallup Organization, as detailed in the book "First, Break All the Rules:
What the World's Greatest Managers Do" by Marcus Buckingham and Curt Coffman, appears to
provide strong support for Herzberg's division of satisfaction and dissatisfaction onto two
separate scales. In this book, the authors discuss how the study identified twelve questions that
provide a framework for determining high-performing individuals and organizations. These
twelve questions align squarely with Herzberg's motivation factors, while hygiene factors were
determined to have little effect on motivating high performance.
To better understand employee attitudes and motivation, Frederick Herzberg performed studies
to determine which factors in an employee's work environment caused satisfaction or
dissatisfaction. He published his findings in the 1959 book The Motivation to Work.
The studies included interviews in which employees were asked what pleased and displeased
them about their work. Herzberg found that the factors causing job satisfaction (and presumably
motivation) were different from those causing job dissatisfaction. He developed the motivation-
hygiene theory to explain these results. He called the satisfiers motivators and the dissatisfiers
hygiene factors, using the term "hygiene" in the sense that they are considered maintenance
factors that are necessary to avoid dissatisfaction but that by themselves do not provide
satisfaction.
The following table presents the top six factors causing dissatisfaction and the top six factors
causing satisfaction, listed in the order of higher to lower importance.
Leading to satisfaction:
• Achievement
• Recognition
• Work itself
• Responsibility
• Advancement
• Growth

Leading to dissatisfaction:
• Company policy
• Supervision
• Relationship with boss
• Work conditions
• Salary
• Relationship with peers
• Security

Herzberg reasoned that because the factors causing satisfaction are different from those causing
dissatisfaction, the two feelings cannot simply be treated as opposites of one another. The
opposite of satisfaction is not dissatisfaction, but rather, no satisfaction. Similarly, the opposite
of dissatisfaction is no dissatisfaction.
While at first glance this distinction between the two opposites may sound like a play on words,
Herzberg argued that there are two distinct human needs portrayed. First, there are physiological
needs that can be fulfilled by money, for example, to purchase food and shelter. Second, there is
the psychological need to achieve and grow, and this need is fulfilled by activities that cause one
to grow.
From the above table of results, one observes that the factors that determine whether there is
dissatisfaction or no dissatisfaction are not part of the work itself, but rather, are external factors.
Herzberg often referred to these hygiene factors as "KITA" factors, where KITA is an acronym
for Kick In The Ass, the process of providing incentives or a threat of punishment to cause
someone to do something. Herzberg argues that these provide only short-run success because the
motivator factors that determine whether there is satisfaction or no satisfaction are intrinsic to the
job itself, and do not result from carrot and stick incentives.
In a survey of 80 teaching staff at Egyptian private universities, Mohamed Hossam El-Din
Khalifa and Quang Truong (2009) found that perception of equity was directly related
to job satisfaction when the outcome in the equity comparison was one of Herzberg's Motivators.
On the contrary, perception of equity and job satisfaction were not related when the outcome in
the equity comparison was one of Herzberg's Hygiene Factors. The findings of this study provide
indirect support for Herzberg's finding that improving Hygiene Factors would not lead to
improvement in an employee's job satisfaction.
Implications for management
If the motivation-hygiene theory holds, management not only must provide hygiene factors to
avoid employee dissatisfaction, but also must provide factors intrinsic to the work itself for
employees to be satisfied with their jobs.
Herzberg argued that job enrichment is required for intrinsic motivation, and that it is a
continuous management process. According to Herzberg:
• The job should have sufficient challenge to utilize the full ability of the
employee.
• Employees who demonstrate increasing levels of ability should be given
increasing levels of responsibility.
• If a job cannot be designed to use an employee's full abilities, then the firm
should consider automating the task or replacing the employee with one who
has a lower level of skill. If a person cannot be fully utilized, then there will be
a motivation problem.
Critics of Herzberg's theory argue that the two-factor result is observed because it is natural for
people to take credit for satisfaction and to blame dissatisfaction on external factors.
Furthermore, job satisfaction does not necessarily imply a high level of motivation or
productivity.
Herzberg's theory has been broadly read, and despite its weaknesses, its enduring value is that it
recognizes that true motivation comes from within a person and not from KITA factors (French,
2008).

360-degree feedback
From Wikipedia, the free encyclopedia



In human resources or industrial/organizational psychology, 360-degree feedback, also known as
multi-rater feedback, multisource feedback, or multisource assessment, is feedback that
comes from all around an employee. "360" refers to the 360 degrees in a circle, with an
individual figuratively in the center of the circle. Feedback is provided by subordinates, peers,
and supervisors. It also includes a self-assessment and, in some cases, feedback from external
sources such as customers and suppliers or other interested stakeholders. It may be contrasted
with "upward feedback," where managers are given feedback by their direct reports, or a
"traditional performance appraisal," where the employees are most often reviewed only by their
managers.
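As a rough sketch of this multi-source structure (not any standard instrument), the feedback can be thought of as ratings grouped by rater category, with a per-category average; the categories, the 1-5 scale, and the numbers below are invented for illustration:

```python
# Minimal sketch of aggregating 360-degree feedback by rater category.
# Categories, ratings and the 1-5 scale are illustrative assumptions.
feedback = {
    "self":         [4.0],
    "manager":      [3.5],
    "peers":        [3.0, 4.0, 3.5],
    "subordinates": [2.5, 3.0, 3.5, 3.0],
}

def category_averages(ratings_by_category):
    """Average the ratings within each rater category."""
    return {cat: sum(rs) / len(rs) for cat, rs in ratings_by_category.items()}

averages = category_averages(feedback)

# A gap between the self-rating and the other categories' averages is one of
# the signals a 360-degree report typically highlights (self-ratings tend to
# run higher than others' ratings, as noted below).
others = [v for c, v in averages.items() if c != "self"]
self_vs_others_gap = averages["self"] - sum(others) / len(others)
```

Reporting per category rather than as one pooled average preserves the contrast (e.g. self vs. subordinates) that makes the feedback useful.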
The results from 360-degree feedback are often used by the person receiving the feedback to
plan training and development. Results are also used by some organizations in making
administrative decisions, such as pay or promotion. When this is the case, the 360 assessment is
for evaluation purposes, and is sometimes called a "360-degree review." However, there is a
great deal of controversy as to whether 360-degree feedback should be used exclusively for
development purposes, or should be used for appraisal purposes as well (Waldman et al., 1998).
There is also controversy regarding whether 360-degree feedback improves employee
performance, and it has even been suggested that it may decrease shareholder value (Pfau &
Kay, 2002).


History
The German Military first began gathering feedback from multiple sources in order to evaluate
performance during World War II (Fleenor & Prince, 1997). Also during this time period, others
explored the use of multi-rater feedback via the concept of T-groups.
One of the earliest recorded uses of surveys to gather information about employees occurred in
the 1950s at Esso Research and Engineering Company (Bracken, Dalton, Jako, McCauley, &
Pollman, 1997). From there, the idea of 360-degree feedback gained momentum, and by the
1990s most human resources and organization development professionals understood the concept.
The problem was that collecting and collating the feedback demanded a paper-based effort
including either complex manual calculations or lengthy delays. The first led to despair on the
part of practitioners; the second to a gradual erosion of commitment by recipients.
Multi-rater feedback use steadily increased in popularity, due largely to the use of the Internet in
conducting web-based surveys (Atkins & Wood, 2002). Today, studies suggest that over one-
third of U.S. companies use some type of multi-source feedback (Bracken, Timmereck, &
Church, 2001a). Others claim that this estimate is closer to 90% of all Fortune 500 firms
(Edwards & Ewen, 1996). In recent years, Internet-based services have become the norm, with a
growing menu of useful features (e.g., multi languages, comparative reporting, and aggregate
reporting) (Bracken, Summers, & Fleenor, 1998).
Accuracy
A study on the patterns of rater accuracy shows that length of time that a rater has known the
person being rated has the most significant effect on the accuracy of a 360-degree review. The
study shows that subjects in the group “known for one to three years” are the most accurate,
followed by “known for less than one year,” followed by “known for three to five years” and the
least accurate being “known for more than five years.” The study concludes that the most
accurate ratings come from knowing the person long enough to get past first impressions, but not
so long as to begin to generalize favorably (Eichinger, 2004).
It has been suggested that multi-rater assessments often generate conflicting opinions, and that
there may be no way to determine whose feedback is accurate (Vinson, 1996). Studies have also
indicated that self-ratings are generally significantly higher than the ratings of others (Lublin,
1994; Yammarino & Atwater, 1993; Nowack, 1992).
Results
Several studies (Hazucha et al., 1993; London & Wohlers, 1991; Walker & Smither, 1999)
indicate that the use of 360-degree feedback helps people improve performance. In a 5-year
Walker and Smither (1999) study, no improvement in overall ratings was found between the 1st
and 2nd year, but higher scores were noted between 2nd and 3rd and 3rd and 4th years. A study
by Reilly et al. (1996) found that performance increased between the 1st and 2nd
administrations, and sustained this improvement 2 years later. Additional studies show that 360
feedback may be predictive of future performance (Maylett & Riboldi, 2007).
Some authors maintain that 360 processes are much too complex to make blanket generalizations
about their effectiveness (Bracken, Timmreck, Fleenor, & Summers, 2001b; Smither, London, &
Reilly, 2005). Smither et al. (2005) suggest, "We therefore think that it is time for researchers
and practitioners to ask, 'Under what conditions and for whom is multisource feedback likely to
be beneficial?' (rather than asking 'Does multisource feedback work?') (p. 60)." Their meta-
analysis of 24 longitudinal studies looks at individual and organizational moderators that point to
many potential determinants of behavior change, including positive feedback orientation,
positive reactions to feedback, goal setting, and taking action.
Bracken et al. (2001b) and Bracken and Timmreck (2001) focus on process features that are
likely to also have major effects in creating behavior change and offer best practices in those
areas. Some of these factors have been researched and been shown to have significant impact.
Greguras and Robie (1998) document how the number of raters used in each rater category
(direct report, peer, manager) affects the reliability of the feedback, with direct reports being the
least reliable and therefore requiring more participation. Multiple pieces of research (Bracken &
Paul, 1993; Kaiser & Kaplan, 2006; Caputo & Roch, 2009; English, Rose, & McClellan, 2009)
have demonstrated that the response scale can have a major effect on the results, and some
response scales are indeed better than others. Goldsmith and Underhill (2001) report the
powerful influence of the participant behavior of following up with raters to discuss their results.
Other potentially powerful moderators of behavior change include how raters are selected,
manager approval, instrument quality (reliability and validity), rater training and orientation,
participant training, manager (supervisor) training, coaching, integration with HR systems, and
accountability (Bracken et al., 2001b).
Other authors state that the use of multi-rater assessment does not improve company
performance. One 2001 study found that 360-degree feedback was associated with a 10.6 percent
decrease in market value, while another study concludes that "there is no data showing that [360-
degree feedback] actually improves productivity, increases retention, decreases grievances, or is
superior to forced ranking and standard performance appraisal systems. It sounds good, but there
is no proof it works." (Pfau & Kay, 2002) Similarly, Seifert, Yukl, and McDonald (2003) state
that there is little evidence that the multi-rater process results in change.
Additional studies (Maylett, 2005) found no correlation between an employee's multi-rater
assessment scores and his or her top-down performance appraisal scores (provided by the
person's supervisor), and advised that although multi-rater feedback can be effectively used for
appraisal, care should be taken in its implementation (Maylett, 2009). This research suggests that
360-degree feedback and performance appraisals get at different outcomes, and that both 360-
degree feedback and traditional performance appraisals should be used in evaluating overall
performance.[1]
Collecting data is the second step in the Workforce Planning process. Data collection includes
conducting an Environmental Scan and SWOT Analysis and a Supply/Demand Analysis.

An Environmental Scan can be commonly defined as: an analysis and evaluation of internal
conditions and external data and factors that affect the organization.

In workforce planning, environmental scanning helps an agency develop the understanding of
the internal and external environment needed to determine whether the business needs of the
agency are in sync with the availability and competency of the workforce.

An Environmental Scan requires identifying the internal and external Strengths, Weaknesses,
Opportunities and Threats (SWOT) that will affect the short- and long-term goals of the
agency.

A comprehensive Environmental Scan includes:
• Forecasting business trends.
• Conducting internal and external scans.
• Describing the current workforce.
• Projecting workforce supply and demand.
• Identifying current and needed competencies (knowledge, skills, abilities and
behaviors).
While the Environmental Scan is about collecting information and data to gain understanding,
the SWOT Analysis is about categorizing this information into action buckets.

Information and trends discovered during the Environmental Scan process can provide the
foundation for a SWOT Analysis finding. For example, if the Environmental Scan predicts that
there will be a shortage of trained child welfare workers, this shortage would likely be identified
as a Threat in the SWOT Analysis.
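As a sketch of this categorization step, scan findings can be routed into SWOT buckets by two questions: is the factor internal or external, and is it favorable or unfavorable? The finding texts and the internal/positive flags below are illustrative assumptions, not part of any standard tool:

```python
# Minimal sketch: routing environmental-scan findings into SWOT buckets.
# The findings and their internal/positive flags are invented for illustration.
findings = [
    {"text": "predicted shortage of trained child welfare workers",
     "internal": False, "positive": False},
    {"text": "experienced in-house training team",
     "internal": True, "positive": True},
]

def swot_bucket(finding):
    """Internal+favorable -> Strength, internal+unfavorable -> Weakness,
    external+favorable -> Opportunity, external+unfavorable -> Threat."""
    if finding["internal"]:
        return "Strength" if finding["positive"] else "Weakness"
    return "Opportunity" if finding["positive"] else "Threat"

swot = {}
for f in findings:
    swot.setdefault(swot_bucket(f), []).append(f["text"])
# The predicted worker shortage, being external and unfavorable,
# lands in the "Threat" bucket.
```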

A Supply Analysis is the process of creating a profile of an agency's existing workforce. A
Demand Analysis identifies the workforce and competencies needed to carry out the agency's
mission.
SIDBI has set up the Technology Development and Modernisation Fund Scheme for direct
assistance to small scale industries, to encourage existing industrial units in the small scale
sector to modernise their production facilities and adopt improved and updated technology so as
to strengthen their export capabilities.

Assistance under the scheme is available for meeting the expenditure on purchase of capital
equipment, acquisition of technical know-how, upgradation of process technology and products
with thrust on quality improvement, improvement in packaging, and the cost of TQM and acquisition of
ISO-9000 series certification.

Units which are already exporting their products or have the potential to export at least 25% of
their output by adopting the modernisation scheme would be eligible for assistance from the fund,
provided they have been in operation for at least 3 years and are not in default to banks/financial
institutions. Assistance under the scheme will be need-based, subject to a minimum of Rs. 10
lakh per unit.

Technical upgradation and quality improvement (including ISO-9000)

Technical Development Trust Funds for technology upgradation/acquisition/transfer in the
small scale sector.

Towards facilitating Industry Associations and NGOs in the programme of technology
upgradation and transfer in the small scale sector, a Plan Scheme has been approved for providing
grants, including assistance to Technology Development Funds to be created in various states
with the involvement of State Governments and Industry Associations. Total outlay for this
scheme is Rs. 150 lakhs during the Eighth Plan period.

The ratio of contribution to the fund could be 60% from the Government of India and 40% from the
State Government, Industry Associations and other developmental agencies, including banks, put
together. The initiative could lie with the State Governments/Industry Associations to raise their
contribution of 40% by mobilising resources at the State level. Assistance to such a fund is
restricted to Rs. 30 lakhs per fund. The scope and activities to be generated out of the Fund are
as follows:

This technology fund, inter alia, is to bring about technological upgradation in selected areas of
the SSI sector with the involvement of CSIR labs, tool rooms, testing centres, PPDCs etc. It
will also help in the development of prototypes, designs and drawings, and the dissemination of
information through seminars, workshops, consultancy etc.

Arranging of technology transfer between SMEs within the country and also by way of arranging
tie-ups for technology transfer between large and small industries, particularly for ancillarisation
and vendor development.

Arranging of technology transfer from Indian small enterprises to small enterprises in other
developing countries.

Arranging of technology transfer to Indian SMEs from developed countries.

Sponsoring studies related to upgradation of technology in specified SSI sector clusters.

It is also proposed that the following activities could be included to make a portion of the fund
more useful for getting faster results:
i. The fund at the disposal of the Government/DCSSI could also be utilised for sponsoring
technology related training programmes in India & abroad.

- Organising seminars/workshops: participation of experts/SSI industry/concerned
institutions in seminars/workshops, symposiums etc. held abroad by various
international and national level agencies. With regard to SSI industries, the fund could
support up to 50% of the cost, not exceeding Rs. 50,000, covering travel, per diem etc.
per unit.

- Providing partial funding support up to 50% of the cost of acquisition of technologies,
negotiation of technology transfer agreements, and such related activities.

- Meeting expenditure for the participation of experts from other developed countries in
various seminars/workshops related to technology transfer/acquisition.
ii. Conducting of technology related studies including cluster studies etc.
MODERNISATION OF SELECTED SMALL SCALE INDUSTRIES

During the Eighth Five Year Plan, a sum of Rs. 70 lakhs was earmarked for the programme of
modernisation of selected small scale industries. Under this scheme it is proposed to prepare
modernisation guides, status reports, technology upgradation reports, cluster study reports and
unit-specific study reports, and to organise contact programmes in the form of
seminars/workshops for dissemination of information.

LOCK-OUT means the temporary closing of a place of employment, or the …, used to compel the
workmen to accept the terms and conditions of the employer.

AQL - Acceptance Quality Level

The AQL (Acceptance Quality Level) is the maximum % defective that can be considered satisfactory as a
process average for sampling inspection; here it is 1%. Its corresponding Pa is about 89%, and it should
normally be at least that high.

RQL - Rejectable Quality Level


The RQL (Rejectable Quality Level) is the % defective, here at 5%, that is associated with the established
β risk (which is usually standardized at 10%). It is also known as the Lot Tolerance Percent Defective
(LTPD).
LTPD - Lot Tolerance Percent Defective
The LTPD of a sampling plan is a level of quality routinely rejected by the sampling plan. It is generally
defined as that level of quality (percent defective, defects per hundred units, etc.) which the sampling plan
will accept 10% of the time.

* The hypergeometric and binomial distributions are also used. The alpha risk is the probability of rejecting
relatively good lots (at AQL). The beta risk is the probability of accepting relatively bad lots (at
LTPD/RQL). It is the probability of accepting product of some stated undesirable quality; it is the value of
Pa at that stated quality level.

The OC curves are a means of quantifying alpha and beta risks for a given attribute sampling plan. The
Pa value obtained assumes that the distribution of defectives among a lot is random – either the
underlying process is in control, or the product was well mixed before being divided into lots. The samples
must be selected randomly from the entire lot. The alpha risk is 1 − Pa. The shape of the OC curves is
affected by the sample size (n) and accept number (c) parameters. Increasing both the accept number
and sample size will bring the curve closer to the ideal shape, with better discrimination.
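The Pa, alpha and beta figures above can be computed directly from the binomial model. The plan below (n = 105, c = 2) is an illustrative assumption, not taken from the text; it was chosen because it approximately reproduces the stated 1% AQL, 5% RQL/LTPD and 10% beta values:

```python
from math import comb

def acceptance_probability(n, c, p):
    """Pa: probability a lot with fraction defective p is accepted, i.e. the
    sample of n contains c or fewer defectives (binomial model)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Illustrative plan (assumed, not from the text): sample n = 105, accept c = 2.
n, c = 105, 2
pa_at_aql = acceptance_probability(n, c, 0.01)  # Pa at AQL = 1% defective
pa_at_rql = acceptance_probability(n, c, 0.05)  # Pa at RQL/LTPD = 5% defective
alpha = 1 - pa_at_aql  # producer's risk: rejecting a relatively good lot
beta = pa_at_rql       # consumer's risk: accepting a relatively bad lot
```

Evaluating `acceptance_probability` over a range of p values traces out the OC curve for the plan; increasing n and c together steepens the curve toward the ideal step shape, as described above.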
Linear Regression
Scatter Diagrams
We often wish to look at the relationship between two things (e.g. between a person's height and
weight) by comparing data for each of these things. A good way of doing this is by drawing a
scatter diagram.
"Regression" is the process of finding the function satisfied by the points on the scatter diagram.
Of course, the points might not fit the function exactly but the aim is to get as close as possible.
"Linear" means that the function we are looking for is a straight line (so our function f will be of
the form f(x) = mx + c for constants m and c).
[Figure omitted: a scatter diagram with a regression line drawn in.]

Correlation
Correlation is a term used to describe how strong the relationship between the two variables
appears to be.
We say that there is a positive linear correlation if y increases as x increases and we say there is a
negative linear correlation if y decreases as x increases. There is no correlation if x and y do not
appear to be related.
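The strength and sign of a linear relationship are commonly measured with the Pearson correlation coefficient, which runs from −1 (perfect negative) through 0 (no correlation) to +1 (perfect positive). A small sketch, with height/weight figures invented for illustration:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient: +1 for a perfect positive linear
    relation, -1 for perfect negative, near 0 when x and y look unrelated."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

heights = [150, 160, 170, 180, 190]  # invented sample data (cm)
weights = [55, 62, 70, 78, 85]       # invented sample data (kg)
r = pearson_r(heights, weights)      # close to +1: strong positive correlation
```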
Explanatory and Response Variables
In many experiments, one of the variables is fixed or controlled and the point of the experiment
is to determine how the other variable varies with the first. The fixed/controlled variable is
known as the explanatory or independent variable and the other variable is known as the
response or dependent variable.
I shall use "x" for my explanatory variable and "y" for my response variable, but I could have
used any letters.
Regression Lines

By Eye
If there is very little scatter (we say there is a strong correlation between the variables), a
regression line can be drawn "by eye". You should make sure that your line passes through the
mean point (the point (x̄, ȳ), where x̄ is the mean of the data collected for the explanatory variable
and ȳ is the mean of the data collected for the response variable).
Two Regression Lines
When there is a reasonable amount of scatter, we can draw two different regression lines
depending upon which variable we consider to be the most accurate. The first is a line of
regression of y on x, which can be used to estimate y given x. The other is a line of regression of
x on y, used to estimate x given y.
If there is a perfect correlation between the data (in other words, if all the points lie on a straight
line), then the two regression lines will be the same.
Least Squares Regression Lines
This is a method of finding a regression line without estimating where the line should go by eye.
If the equation of the regression line is y = ax + b, we need to find what a and b are. We find
these by solving the "normal equations".
Normal Equations
The "normal equations" for the line of regression of y on x are:
Σy = aΣx + nb and
Σxy = aΣx² + bΣx
The values of a and b are found by solving these equations simultaneously.
For the line of regression of x on y, the "normal equations" are the same but with x and y
swapped.
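As a sketch of how the normal equations can be solved in practice (the data below are invented for illustration; eliminating b from the two equations gives the closed form for a used here):

```python
# Solve the normal equations for the y-on-x regression line y = ax + b:
#   sum(y)  = a*sum(x)   + n*b
#   sum(xy) = a*sum(x^2) + b*sum(x)
def regression_y_on_x(xs, ys):
    n = len(xs)
    sx = sum(xs)
    sy = sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # Eliminate b: multiply the first equation by sum(x), the second by n,
    # and subtract, leaving an equation in a alone.
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 5, 4, 5]
a, b = regression_y_on_x(xs, ys)
print(a, b)  # the line passes through the mean point (3, 4), as it should
```

Note that the fitted line always passes through the mean point, which is the same property used when drawing a regression line by eye.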

What are the problems involved in measuring national income?

There are three main problems involved in measuring national income.
These are:

Errors and Omissions - this is a problem in collecting and calculating
statistics. People hide what they earn and firms hide their output to avoid
paying tax; this unrecorded activity is known as the black (or hidden)
economy.

Over-recording of output (Double Counting) - the output of some firms is
counted more than once because it is used as an intermediate input by other
production firms. To avoid this, only the value added at each stage of
production should be counted.

Over-recording of incomes (Double Counting) - as people pay taxes, part of
their incomes is used by the government to pay transfer payments such as
benefits and pensions. If these transfers were counted as well as the
original incomes, the same income would be counted twice, so transfer
payments must be excluded.


The need for competency mapping


Finding the right fit for the right job is a matter of concern for most organisations,
especially in today's economic crisis. As far as meeting an individual's career
aspirations is concerned, once the organisation gives an employee a clear view of
what is required to reach a particular position, it drives the employee to develop
the competencies for it.

Competencies enable individuals to identify and articulate what they offer,
regardless of the job. Competency mapping is a process of identifying the key
competencies for a particular position in an organisation, and then using them for
job evaluation, recruitment, training and development, performance management,
succession planning, etc. The introduction of competency mapping has also involved
introducing skill appraisals in performance appraisals.

Measures Undertaken for Rehabilitation of 'Sick' MSMEs


In a bid to accelerate the revival of ‘sick' MSMEs, the Reserve Bank of India (RBI) has
issued comprehensive guidelines to banks, asking them to undertake measures that would
enable non-performing units to access adequate credit.
In a written reply to a question in Lok Sabha, Dinsha Patel, minister of state (independent
charge) for MSMEs, stated, "Financial support in the form of debt restructuring as well as
extending fresh loans has been offered by primary lending institutions (PLIs) in order to
rehabilitate sick MSMEs that are potentially viable."
Earlier, the apex bank had issued guidelines on identification of sickness in MSMEs at an
early stage as well as on debt restructuring mechanism for small firms based on the ‘Policy
Package for Stepping up credit to Small and Medium Enterprises'.
"The RBI and the government have in the past undertaken several remedial measures
required to help ailing small industrial units function smoothly by laying down guidelines to
banks and PLIs to provide adequate and affordable finance to the sector," said A Somani,
senior analyst at Balaji Securities, an equity broking firm in New Delhi.
According to the latest available data compiled by the apex bank, 2,330 ailing micro small
enterprises out of 8,168 viable sick units had been put under rehabilitation by the end of
March 2009.

Corporate Social Responsibility: An Implementation Guide for Business


The critical role of companies in implementing sustainable development internationally is widely
recognized. Increasingly, corporate social responsibility (CSR) is being acknowledged not only
as a key to risk mitigation but also as a core element for building corporate value. This guide,
designed for businesses operating in the international context, provides an overview of the basic
steps to, and instruments for, implementing a CSR strategy adapted specifically to your business
or organizational context.

Explain The Circular Flow Of National Income?


An economy may consist of two sectors: consumers and producers. Accordingly,
the NI equations will be
Y = C + I and Y = C + S
An economy may consist of three sectors: consumers, producers and the
government. Accordingly, the NI equations will be
Y = C + I + G and Y = C + S + T
An economy may consist of four sectors: consumers, producers, the government
and foreign trade. Accordingly, the NI equations will be
Y = C + I + G + (X - M) and Y = C + S + T
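A quick numerical check of these identities, with all figures hypothetical and chosen so that the injections (I + G + X) equal the withdrawals (S + T + M):

```python
# Illustrative four-sector national income check (all figures hypothetical).
C, I, G, X, M = 500, 150, 200, 100, 80  # consumption, investment, govt spending, exports, imports
S, T = 180, 190                         # saving and taxes

Y_expenditure = C + I + G + (X - M)     # expenditure side: Y = C + I + G + (X - M)
Y_income = C + S + T                    # income-disposal side: Y = C + S + T
print(Y_expenditure, Y_income)          # the two sides give the same Y

# Equating the two sides shows injections equal withdrawals in equilibrium.
assert I + G + X == S + T + M
```

Both measures of Y agree precisely because what is injected into the flow (investment, government spending, exports) is matched by what is withdrawn (saving, taxes, imports).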
The circular flow of NI is based upon two principles:
As a result of each economic transaction, the seller receives exactly the
amount that is spent by the buyer.
If goods and services flow in one direction, money flows in the opposite
direction.

As we construct the circular flow of NI in a two-sector economy, whatever is
produced by the producers is sold to the consumers. The goods produced
represent NI, while the consumers spend all of their earnings on the
consumption of the goods produced by the producers; such expenditures also
represent NI. The consumers or households provide the services of the four
factors of production to the producers. Against such services, the factors of
production receive remunerations; such earnings also represent NI. So these
earnings and expenditures both represent NI.

Balanced Scorecard Basics


The balanced scorecard is a strategic planning and
management system that is used extensively in business and
industry, government, and nonprofit organizations worldwide
to align business activities to the vision and strategy of the
organization, improve internal and external communications,
and monitor organization performance against strategic goals.
It was originated by Drs. Robert Kaplan (Harvard Business
School) and David Norton as a performance measurement
framework that added strategic non-financial performance
measures to traditional financial metrics to give managers
and executives a more 'balanced' view of organizational
performance. While the phrase balanced scorecard was
coined in the early 1990s, the roots of this type of
approach are deep, and include the pioneering work of
General Electric on performance measurement reporting in
the 1950’s and the work of French process engineers (who
created the Tableau de Bord – literally, a "dashboard" of
performance measures) in the early part of the 20th century.
The balanced scorecard has evolved from its early use as a
simple performance measurement framework to a full
strategic planning and management system. The “new”
balanced scorecard transforms an organization’s strategic
plan from an attractive but passive document into the
"marching orders" for the organization on a daily basis. It
provides a framework that not only provides performance
measurements, but helps planners identify what should be
done and measured. It enables executives to truly execute
their strategies.
This new approach to strategic management was first detailed
in a series of articles and books by Drs. Kaplan and Norton.
Recognizing some of the weaknesses and vagueness of
previous management approaches, the balanced scorecard
approach provides a clear prescription as to what companies
should measure in order to 'balance' the financial perspective.
The balanced scorecard is a management system (not only a
measurement system) that enables organizations to clarify
their vision and strategy and translate them into action. It
provides feedback around both the internal business
processes and external outcomes in order to continuously
improve strategic performance and results. When fully
deployed, the balanced scorecard transforms strategic
planning from an academic exercise into the nerve center of
an enterprise.
Kaplan and Norton describe the innovation of the balanced
scorecard as follows:
"The balanced scorecard retains traditional financial
measures. But financial measures tell the story of past
events, an adequate story for industrial age companies for

which investments in long-term capabilities and customer


relationships were not critical for success. These financial
measures are inadequate, however, for guiding and
evaluating the journey that information age companies must
make to create future value through investment in customers,
suppliers, employees, processes, technology, and innovation."

Adapted from Robert S. Kaplan and David P. Norton, “Using the Balanced
Scorecard as a Strategic Management System,” Harvard Business Review
(January-February 1996): 76.

Perspectives
The balanced scorecard suggests that we view the
organization from four perspectives, and to develop metrics,
collect data and analyze it relative to each of these
perspectives:
The Learning & Growth Perspective
This perspective includes employee training and corporate
cultural attitudes related to both individual and corporate
self-improvement. In a knowledge-worker organization, people --
the only repository of knowledge -- are the main resource. In
the current climate of rapid technological change, it is
becoming necessary for knowledge workers to be in a
continuous learning mode. Metrics can be put into place to
guide managers in focusing training funds where they can
help the most. In any case, learning and growth constitute
the essential foundation for success of any knowledge-worker
organization.
Kaplan and Norton emphasize that 'learning' is more than
'training'; it also includes things like mentors and tutors
within the organization, as well as that ease of
communication among workers that allows them to readily
get help on a problem when it is needed. It also includes
technological tools; what the Baldrige criteria call "high
performance work systems."
The Business Process Perspective
This perspective refers to internal business processes. Metrics
based on this perspective allow the managers to know how
well their business is running, and whether its products and
services conform to customer requirements (the mission).
These metrics have to be carefully designed by those who
know these processes most intimately; with our unique
missions these are not something that can be developed by
outside consultants.
The Customer Perspective
Recent management philosophy has shown an increasing
realization of the importance of customer focus and customer
satisfaction in any business. These are leading indicators: if
customers are not satisfied, they will eventually find other
suppliers that will meet their needs. Poor performance from
this perspective is thus a leading indicator of future decline,
even though the current financial picture may look good.
In developing metrics for satisfaction, customers should be
analyzed in terms of kinds of customers and the kinds of
processes for which we are providing a product or service to
those customer groups.
The Financial Perspective
Kaplan and Norton do not disregard the traditional need for
financial data. Timely and accurate funding data will always
be a priority, and managers will do whatever is necessary to
provide it. In fact, often there is more than enough handling
and processing of financial data. With the implementation of a
corporate database, it is hoped that more of the processing
can be centralized and automated. But the point is that the
current emphasis on financials leads to the "unbalanced"
situation with regard to other perspectives. There is perhaps
a need to include additional financial-related data, such as
risk assessment and cost-benefit data, in this category.
Strategy Mapping
Strategy maps are communication tools used to tell a story of
how value is created for the organization. They show a
logical, step-by-step connection between strategic objectives
(shown as ovals on the map) in the form of a cause-and-effect
chain. Generally speaking, improving performance in
the objectives found in the Learning & Growth perspective
(the bottom row) enables the organization to improve its
Internal Process perspective Objectives (the next row up),
which in turn enables the organization to create desirable
results in the Customer and Financial perspectives (the top
two rows).
Balanced Scorecard Software
The balanced scorecard is not a piece of software.
Unfortunately, many people believe that implementing
software amounts to implementing a balanced
scorecard. Once a scorecard has been developed and
implemented, however, performance management software
can be used to get the right performance information to the
right people at the right time. Automation adds structure and
discipline to implementing the Balanced Scorecard system,
helps transform disparate corporate data into information and
knowledge, and helps communicate performance information.

What are the Primary Implementation Success


Factors?
• Obtaining executive sponsorship and commitment
• Involving a broad base of leaders, managers and employees in
scorecard development
• Agreeing on terminology
• Choosing the right BSC Program Champion
• Beginning interactive (two-way) communication first
• Working through mission, vision, strategic results, and strategy
mapping first to avoid rushing to judgement on measures or software
• Viewing the scorecard as a long-term journey rather than a
short-term project
• Planning for and managing change
• Applying a disciplined implementation framework
• Getting outside help if needed
Definitions of Balanced Scorecard
Strategic Planning & Management Terms
Customer Value Proposition
The Customer Value Proposition is the unique added value an
organization offers customers through its operations; the logical link
between action and payoff that the organization must create to be
effective. Three aspects of the proposition include Product/Service
Attributes (Performance/ Functionality considerations such as
quality, timeliness or price), Image and Relationship.
Mission
A mission statement defines why an organization exists; the
organization's purpose
Performance Measures
Performance Measures are metrics used to provide an analytical
basis for decision making and to focus attention on what matters
most. Performance Measures answer the question, 'How is the
organization doing at the job of meeting its Strategic Objectives?'
Lagging indicators are those that show how successful the
organization was in achieving desired outcomes in the past.
Leading indicators are those that are a precursor of future success;
performance drivers.
Perspectives
A Perspective is a view of an organization from a specific vantage
point. Four basic perspectives are traditionally used to encompass
an organization's activities. The organization's business model,
which encompasses mission, vision, and strategy, determines the
appropriate perspectives.
Strategic Initiatives
Strategic Initiatives are programs or projects that turn strategy into
operational terms and actionable items, provide an analytical
underpinning for decisions, and provide a structured way to
prioritize projects according to strategic impact. Strategic
Initiatives answer the question, ‘What strategic projects must the
organization implement to meet its Strategic Objectives?’
Strategic Objectives
Objectives are strategy components; continuous improvement
activities that must be done to be successful. Objectives are the
building blocks of strategy and define the organization's strategic
intent. Good objectives are action-oriented statements, are easy to
understand, represent continuous improvement potential and are
usually not 'on-off' projects or activities.
Strategic Result
Strategic results are the desired outcome for the main focus areas
of the business. Each Strategic Theme has a corresponding
Strategic Result

Strategic Theme
Strategic Themes are key areas in which an organization must
excel in order to achieve its mission and vision, and deliver value
to customers. Strategic Themes are the organization's "Pillars of
Excellence."
Strategy Map
A Strategy Map displays the cause-effect relationships among the
objectives that make up a strategy. A good Strategy Map tells a
story of how value is created for the business.
Strategy
How an organization intends to accomplish its vision; an approach,
or “game plan”.
Targets
Desired levels of performance for performance measures
Vision
A vision statement is an organization's picture of future success;
where it wants to be in the future

The circular flow of income


National income, output, and expenditure are generated by the activities of the
two most vital parts of an economy, its households and firms, as they engage in
mutually beneficial exchange.
Households
The primary economic function of households is to supply domestic firms with
needed factors of production - land, human capital, real capital and enterprise.
The factors are supplied by factor owners in return for a reward. Land is
supplied by landowners, human capital by labour, real capital by capital owners
(capitalists) and enterprise is provided by entrepreneurs. Entrepreneurs combine
the other three factors, and bear the risks associated with production.
Firms
The function of firms is to supply private goods and services to domestic
households and firms, and to households and firms abroad. To do this they use
factors and pay for their services.
There are several types of firm, including:
1. Sole traders, common in retailing and services
2. Partnerships, common in legal and financial services
3. Private limited companies ('Ltd'), common in small to medium sized enterprises
4. Public limited companies (Plcs), common with larger enterprises
Factor incomes
Factors of production earn an income which contributes to national income. Land
receives rent, human capital receives a wage, real capital receives a rate of
return, and enterprise receives a profit.
Members of households pay for the goods and services they consume with the
income they receive from selling their factor in the relevant market.
Production function
The simple production function states that output (Q) is a function (f) of
(i.e. is determined by) the factor inputs - land (L), labour (La), and
capital (K), i.e.
Q = f (L, La, K)
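The text leaves the form of f unspecified. As an illustration only, a Cobb-Douglas form is one common choice; the productivity constant and the exponents below are assumptions for the example, not from the text:

```python
# Hypothetical Cobb-Douglas instance of Q = f(L, La, K).
# A is total factor productivity; a, b, c are illustrative output
# elasticities chosen to sum to 1 (constant returns to scale).
def output(land, labour, capital, A=1.0, a=0.3, b=0.4, c=0.3):
    return A * (land ** a) * (labour ** b) * (capital ** c)

# With constant returns to scale, equal inputs of 100 give Q = 100.
print(round(output(100, 100, 100), 1))
```

Because the exponents sum to one, doubling all three inputs doubles output, which is the constant-returns-to-scale case.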
The Circular flow of income
Income (Y) in an economy flows from one part to another whenever a transaction
takes place. New spending (C) generates new income (Y), which generates further
new spending (C), and further new income (Y), and so on. Spending and income
continue to circulate around the macro economy in what is referred to as the
circular flow of income.

The circular flow of income forms the basis for all models of the macro-economy,
and understanding the circular flow process is key to explaining how national
income, output and expenditure are created over time.
Injections and withdrawals
The circular flow will adjust following new injections into it or new
withdrawals from it. An injection of new spending will increase the flow. A net
injection relates to the overall effect of injections in relation to withdrawals
following a change in an economic variable.
Savings and investment
The simple circular flow is, therefore, adjusted to take into account
withdrawals and injections. Households may choose to save (S) some of their
income (Y) rather than spend it (C), and this reduces the circular flow of
income. Marginal decisions to save reduce the flow of income in the economy
because saving is a withdrawal out of the circular flow. However, firms also
purchase capital goods, such as machinery, from other firms, and this spending
is an injection into the circular flow. This process, called investment (I),
occurs because existing machinery wears out and because firms may wish to
increase their capacity to produce.
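The effect of a withdrawal and an injection on the flow can be sketched numerically. In this hypothetical two-sector simulation, households save a fixed fraction of each period's income (the withdrawal) while firms inject a constant amount of investment spending; the flow settles at the level where the withdrawal matches the injection. The saving rate and investment figure are invented for illustration:

```python
# Hypothetical two-sector circular flow: saving withdraws income,
# investment injects it back. Iterate the flow until it settles.
save_rate = 0.2     # fraction of income withdrawn as saving each period
investment = 100    # constant injection of investment spending per period

income = 0.0
for _ in range(200):                     # enough rounds for the flow to settle
    spending = (1 - save_rate) * income  # consumption returned to firms
    income = spending + investment       # firms' receipts become new income

print(round(income))               # the settled level of the flow
print(round(save_rate * income))   # saving now equals the injection of 100
```

The flow stabilises exactly where saving (the withdrawal) equals investment (the injection), which is the equilibrium condition described above.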

The public sector


In a mixed economy with a government, the simple model must be adjusted to
include the public sector. Therefore, as well as saving, households are also
likely to pay taxes (T) to the government, and further income is withdrawn out
of the circular flow of income.
The government injects income back into the economy by spending (G) on public
and merit goods like defence and policing, education, and healthcare, and also
on support for the poor and those unable to work.

Including international trade


Finally, the model must be adjusted to include international trade. Countries
that trade are called 'open' economies. The households of an open economy will
spend some of their income on goods from abroad, called imports (M), and this
is withdrawn from the circular flow.
Foreign consumers and firms will, however, also wish to buy domestic products,
called exports (X), and this is an injection into the circular flow.
