Columbia Economics Review
Vol. III, No. II
Fall 2013

The $25,000 Steak
Columbia Economics Review would like to thank its donors for their generous support of the publication.
2013-2014 EDITORIAL BOARD
EDITOR-IN-CHIEF
James Ma
PUBLISHER
Daniel Listwa
EXECUTIVE EDITOR
Rui Yu
SENIOR EDITORS
CONTENT
Ashwath Chennapan
Matthew Chou
Diana Ding
Hong Yi Tu Ye
Samantha Zeller
LAYOUT EDITORS
Daniel Listwa
Victoria Steger
CONTRIBUTING ARTISTS
Sarika Ramakrishnan
Dani Brunner
Gemma Gen (Cover)
Desi Petkova
Ariel Rahimzada
Natasha Stolovitzky-Brunner
ASSOCIATE EDITORS
Anna Etra
Adriano Henrique
David Froomkin
Zepeng Guan
Victoria Steger
Julie Tauber
WEB EDITORS
David Froomkin
OPERATIONS
EXECUTIVE OFFICER
Rachel Rosen
MANAGING OFFICERS
Mitu Bhattatiry
Erin Bilir
Marc Lemann
Allison Lim
Cem Zorlular
OPERATING OFFICERS
Cindy Ma
Theodore Marin
Daniel Morgan
MEDIA/DESIGN EDITOR
Theodore Marin
Maren Killackey
TABLE OF CONTENTS

Labor Economics
4   [article title not recoverable in this copy]
16  So Sue Me: Market Reaction to Patent Litigation Verdicts and Patent Appeal Results
    TaperPedic: An Analysis of the Federal Reserve's Forecasted Asset Losses in the Face of Looming Interest Rate Shocks
29  [article title not recoverable in this copy]

Microeconomic Theory
32  Rounding It Out: The Effect of Round Number Bias in U.S. and Chinese Stock Markets
41  Mixed Messages: How Firms Should Utilize Private Information
Samuel Zakay
Columbia University
Introduction
Restaurant staff in New York are generally compensated by hourly wages and derive a majority of their salary from tips. Restaurants in New York City have two main methods of distributing tips. Pooled houses are based on a share system: at the end of the shift, each staff member receives an amount equal to the value per share multiplied by the total shares allotted to him or her. In this system, servers receive the greatest percentage of shares. In non-pooled houses, tips are collected by individual servers. In this system, there can be large differences in the amount a server is compensated on a given night, as well as large differences in total compensation across servers.
Barkan, Erev, Zinger and Tzach (2004) argue that in a pooled system, servers are given the incentive to provide good service and to work as a team. There is no incentive to compete at the expense of other servers' customers, as each server's income depends on all customers. However, a pooled system is undermined by the problem of free riders: since servers' earnings do not depend solely on their own efforts, they can grow reliant on colleagues and become unmotivated to work at a high level.
falls. While the negative relationship is not statistically significant, it remains when controlling for other fixed effects associated with a restaurant's organizational and compensation structure.

I also examine the relationship between the quality of service and visibility among the non-pooled restaurants, to see whether non-pooled restaurants yield higher-quality service as visibility decreases. I find it difficult to observe any directional relationship with statistical significance that holds when controlling for other restaurant characteristics. Part of the reason for the lack of results could stem from the small sample of non-pooled restaurants.
Literature Review
Within a restaurant, managers and owners wish to establish a culture amongst the service staff of treating the customer in the best way possible. If the manager chooses to create a group-incentive structure, as with pooling, he or she can run into the problem of moral hazard within the team.
Alchian and Demsetz (1972) argued that by bringing in an individual to monitor the actions of workers, a firm can limit the amount of free riding and thus alleviate the problem of moral hazard. The monitor should be able to renegotiate and terminate contracts with individual workers and hold a residual claim on the profitability of the firm. Rather than observing each action, the central monitor needs only the ability to provide a productivity bonus or terminate an employee's contract. Similarly, restaurant management faces the issue of optimizing profits by setting wages based on servers' efforts. In the restaurant industry, the owner and general manager can often be considered monitors of service staff actions.
The idea of mutual monitoring extends the monitor presented by Alchian and Demsetz to a role that isn't strictly centralized: service staff may also monitor the actions of their peers to ensure everyone is working within a pooled system. This is consistent with Lazear and Kandel's (1992) argument that mutual monitoring is more effective when profits are shared amongst a smaller group. According to this logic, a smaller restaurant would be a stronger candidate for mutual monitoring.
B.E.Z.T. (2004) discuss the problem of free riding within a pooled system. They found that servers in both types of restaurants were bothered by the problem of free riding: if a restaurant attracts fewer customers, the negative effect of less income should reach both high-effort and low-effort servers in the long run. With regard to the problem of overuse of limited resources, they found that servers in non-pooled restaurants were more bothered by co-workers' relationships with shift managers.
Bandiera, Barankay and Rasul (2008) found that managers who were not given a performance incentive tended to favor workers with whom they were friendly. However, when a performance incentive was issued, that relationship tended to disappear in favor of giving more tasks, and hence more opportunity, to those who were better. A residual claim for the monitor or manager is essentially a performance incentive for effective management, and should thus limit favoritism within the restaurant.
After surveying cafe servers and noting their concerns over free riding and competition, B.E.Z.T. (2004) tested whether these concerns manifested themselves in the quality of service at the two types of cafes. They recorded the level of visibility, service time and the number of service problems; as the number of service spaces increases, visibility decreases. A pooled system resulted in higher-quality service in a higher-visibility environment, while a non-pooled system resulted in higher-quality service in a lower-visibility environment. Increased visibility within a pooled restaurant makes it easier for workers to monitor the actions of their peers and thus discourage free riding, consistent with the general moral hazard literature. They also argue that low visibility alleviates the problem of overuse of limited resources in non-pooled cafes, while the individual tip system alleviates the reduced-effort problem associated with free riding. This study is similar to the research done by B.E.Z.T. (2004); however, I use diners' actual ratings to measure service quality. In my survey, I also collect the general compensation structure for multiple types of employees within the restaurant, and general features of the restaurant that were not collected by B.E.Z.T.
One dominant theory as to why customers tip is fear of social disapproval. Another is that customers tip in order to secure equitable relationships. A third is that customers tip in order to receive good service in the future upon returning.
B_P represents the piece rate that a server collects from his primary table, and B_S represents the piece rate that a server collects from his secondary table. Each server faces a quadratic cost of effort, as the linear-quadratic model assumes. Each server's utility is decreasing in risk: r_i denotes the coefficient of risk aversion for server i (r_A for a generic agent) and σ_i² the variance associated with table i. For simplicity, I assume that the covariance between tables A and B is zero. A principal pays the wages of the workers and is thus faced with a different utility function, with r_P the coefficient of risk aversion of the principal. The principal (the restaurant manager or owner) has a profit column vector, P = [P_A P_B], that transforms output into profit. The principal-agent problem in the context of the two servers and two tables is to maximize the principal's utility subject to each server's participation constraint, where W_i0 is server i's next-best alternative wage. For simplicity, I assume that each agent chooses his effort level independently of the other. To solve the problem, I first solve for the optimal effort of agent 1.
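The display equations on this page did not survive reproduction. As a hedged reconstruction, a standard linear-quadratic statement consistent with the definitions above (not a verbatim copy of the author's equations) would be:

\begin{align*}
\max_{B_P,\,B_S}\quad & \mathbb{E}\left[P_A q_A + P_B q_B - w_1 - w_2\right] - \tfrac{r_P}{2}\,\mathrm{Var}\!\left(P_A q_A + P_B q_B - w_1 - w_2\right) \\
\text{subject to}\quad & \mathbb{E}[w_i] - C(e_i) - \tfrac{r_i}{2}\left(B_P^2\sigma_A^2 + B_S^2\sigma_B^2\right) \;\ge\; W_{i0}, \qquad i = 1,2,
\end{align*}

where server 1's primary table is A and server 2's is B, so that w_1 = B_P q_A + B_S q_B and w_2 = B_P q_B + B_S q_A; output at each table is increasing in effort and ability; C(·) is the quadratic effort cost; and each server chooses effort to maximize his own certainty equivalent.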
In a non-pooled house, B_S = 0: a server does not derive any piece-rate wage from his secondary table. In a pooled house, B_P = B_S, as each server earns the same fraction of his total wage from each table.
Proposition 1: As the abilities of both servers rise, both servers work harder and earn higher wages.

The coefficient on ability in the numerator is greater than the coefficient on ability in the denominator of the equation for B_P. More-skilled workers should be paid a higher wage by the restaurant, as they produce more for the restaurant.
Proposition 2: If the ability of server 2 equals the ability of server 1, each server puts no effort into his secondary table and earns no wage from his secondary table. In this case pooling is not efficient.

If θ₂ = θ₁, then E_1B = E_2A = 0, and hence B_S = 0. If the servers have equal abilities, it would be detrimental for each server to devote any effort to a table where he would not actually be helping the other server while expending a greater cost, as the cost of serving a secondary table is larger than the cost of serving a primary table. In this case, management will choose a non-pooled system: servers will not put any effort into their secondary tables, so management derives no benefit from issuing the team incentive of pooling.
Proposition 3: If the ability of server 2 differs from the ability of server 1, each server will put some effort into his secondary table. Thus, the optimal contract features a degree of pooling wages across tables.

This result can be seen by observing that E_1B, E_2A, B_S > 0 if θ₂ ≠ θ₁. In the case of unequal abilities, management wishes to offer a form of pooled wages so that the higher-ability server is incentivized to work on both tables. Since management offers the same wage to both servers, both the higher-ability and lower-ability server receive a wage from their secondary tables. Thus, both servers have an incentive to expend effort on serving both tables, and the secondary server will seek to supplement the efforts of the primary server. While the optimal contract features a degree of pooling, in practice restaurant management will only choose pooling if the optimal contract more closely resembles the pooled contract, B_P = B_S, than the non-pooled contract, B_S = 0.
Proposition 4: As the servers become
more risk averse, their wages fall and
hence effort falls.
This is consistent with economic theory
and the idea that risk poses a real cost for
the agents. If servers become more risk
same. A busser out-earns only the host. While many managers cited bussers as an important part of the service experience, a busser never earned as much as a server. Finally, the host often earns the least in a restaurant.

Restaurants tend to hire more servers than other members of the service staff. Furthermore, there is a higher standard deviation in the number of servers than in the number of other service staff, which reflects the greater perceived role that servers play in customer service. Bussers and runners seem to be the longest-tenured employees within a restaurant, and an extremely high percentage of them are full-time (nearly 90%). Bartenders spend a similar amount of time employed within a restaurant relative to servers, yet a restaurant is more likely to have full-time bartenders than full-time servers. Management values honesty amongst bartenders, and if it finds bartenders it can trust, it may prefer employing fewer trustworthy bartenders on a full-time basis to seeking outsiders for part-time positions. Within a restaurant, the host position is by far the most transient.
Of the 71 restaurants that chose to participate in the survey, 54 can be categorized as pooled houses, while 17 can be categorized as non-pooled houses. This proportion is consistent with managers' guesses that 80% of New York City restaurants practice pooling, and with the 42 restaurants that were also rated by OpenTable, of which 33 practiced pooling. The pooled system appears to be the default

matrix between all of the diner perception variables. ZAGAT Service was positively related to all of the ZAGAT measures of restaurant quality. A similar pattern holds for OpenTable Service's relationship with the other OpenTable variables.
The preceding relationships support the conclusion that restaurants with better food, higher cost, better decor and better ambiance tend to provide better service. When considering whether the two measures of diner perceptions are correlated, the results are mixed: ZAGAT Service is positively correlated with OpenTable Service, and ZAGAT Food is positively correlated with OpenTable Food. None of the other cross-variables between ZAGAT and OpenTable measures exactly the same thing, but all of the variables are positively correlated.

Although ZAGAT has 29 more restaurant observations than OpenTable, I examine both in my analyses. The two measures of service and food are positively correlated, but the correlation is far from 1. In several analyses, especially those concerning non-pooled restaurants, OpenTable is less reliable given its relatively small data set.
As part of the survey, managers and owners were asked the amount servers could expect to earn in a shift and the percentage of a server's salary derived from tips. The percentage of a server's income derived from tips can serve as a good measure of how dependent a server is on customer approval. Several managers declined to answer the question.
Common sense suggests that servers who produce better service choose to work at restaurants where they can earn more. Likewise, servers who produce better service would choose to be at a restaurant where a greater percentage of their income is derived from tips. Both measures describe whether there is a positive relationship between a server's earnings and the quality of service provided. I regressed ZAGAT Service on a server's gross pay per shift and found a positive and statistically significant relationship (t(68) = 2.764). I found a similar positive relationship, although not quite statistically significant, when regressing OpenTable Service on a server's gross pay per shift (t(39) = 1.563). I suspect that had I surveyed more restaurants rated by OpenTable, the positive relationship would further approach statistical significance.
The positive relationship, and hence the statistical significance, between the quality of service and a server's gross pay per shift diminished when controlling for cost using ZAGAT Cost in both measures of service quality (ZAGAT: t(68) = 1.112; OpenTable: t(39) = 0.538). The positive but statistically insignificant relationship remains when controlling for other fixed effects that measure organizational and compensation structure.

I regressed ZAGAT Service on the percentage of a server's income derived from tips and found a positive and statistically significant relationship (t(70) = 3.079), as shown in Appendix Table 8.7. Furthermore, the positive and statistically significant relationship remains even when controlling for cost and the same fixed effects used in the regressions on a server's gross pay per shift. Curiously, the positive relationship does not exist when considering OpenTable's measure of service quality (t(41) = -0.602). The preceding results support the common-sense view that servers who produce better service tend to work at restaurants that afford them better pay.
Results and Analysis
Endogenous Analysis of the Pooling Choice
Given the greater difficulty of monitoring free riding in a lower-visibility environment, it is interesting to examine whether owners and general managers behave as the theory and literature predict by shying away from establishing a pooled system as the size of the restaurant increases and visibility decreases. I analyzed the question by performing a series of linear probability regressions of a dummy for whether a restaurant pools on the restaurant's seating capacity. Seating capacity is used as a proxy for visibility within a restaurant: as seating capacity increases, the restaurant's size increases and its visibility decreases.
Consistent with the theory and literature, capacity is negatively related to whether a restaurant institutes a pooled system, and the negative relationship is statistically significant (t(71) = -1.900). The negative relationship persists and becomes even more statistically significant when I control for the relative price point of a restaurant using ZAGAT Cost (t(71) = -2.600). Although diners assess cost, it should be considered an endogenous variable, as restaurant management sets prices. ZAGAT Cost differs from the other ZAGAT measures in that it is observed by customers rather than rated by them, as is the case with ZAGAT Service, Food and Decor. Thus, across both cheaper and more expensive restaurants, management's choice to pool is negatively related to the size of the restaurant. The negative, statistically significant relationship persists when controlling for other fixed endogenous effects of the restaurant's organizational and compensation structure (as well as cost), such as: whether the restaurant is over three years old, whether ownership is involved on a day-to-day basis (a dummy variable that suggests a higher degree of monitoring), whether most of the servers are full-time, whether some servers are paid a higher hourly wage than others, whether the restaurant is a full-service restaurant with all five types of service staff (a dummy variable) and whether the restaurant is located on the Upper West Side.
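As a concrete sketch, a linear probability model of this form can be estimated by OLS with heteroskedasticity-robust errors. The file and column names below are hypothetical stand-ins, since the survey data are not public:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in columns for the survey variables described above.
df = pd.read_csv("restaurant_survey.csv")

spec = (
    "pooled ~ capacity + zagat_cost + over_three_years + owner_day_to_day"
    " + mostly_full_time + unequal_hourly_wage + full_service + upper_west_side"
)
# LPM errors are heteroskedastic by construction, hence robust (HC1) errors.
fit = smf.ols(spec, data=df).fit(cov_type="HC1")
print(fit.summary())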
The negative, statistically significant and persistent relationship suggests management is aware that servers face a greater incentive to free ride as the size of the restaurant grows and their income becomes less dependent on the effort they expend at their own tables. It further suggests that restaurant management is aware of the greater challenges of monitoring free riding (both management monitoring and mutual service-staff monitoring) within a larger pooled restaurant, consistent with the theory of B.E.Z.T. (2004). Aside from capacity, ZAGAT Cost is also related to whether management institutes a pooled system: when controlling for capacity, the relationship between pooling and ZAGAT Cost is statistically significant (t(71) = 2.513). The free rid-
structure (as well as pooling) of the restaurant. Given the relatively small data set, there is merit in further researching the positive relationship between servers per capita and cost to see whether the relationship further approaches statistical significance.

Pooling's Effect on Service Quality

Pooled restaurants offer a real incentive
as well as conversations with restaurant owners and general managers suggest that pooling is the dominant form of structuring compensation within New York City restaurants. Pooling's relative dominance supports the idea that restaurant management wishes to foster a spirit of teamwork amongst the service staff and discourage competition that could lead to unwanted behavior. However, the empirical results suggest that restaurant management is less likely to choose pooling as the seating capacity or size of the restaurant increases and visibility decreases. Although non-pooling suffers from the problem of competition, restaurant management will more likely select it as visibility decreases, because the free-riding costs associated with pooling dominate as monitoring workers (both management and mutual) becomes more difficult. Furthermore, management's reluctance to institute a pooled system in a higher-capacity environment is apparent even when one considers restaurants that target different types of customers across the socio-economic spectrum.

In general, diners don't appear to rate pooled restaurants as providing superior service to non-pooled restaurants. However, when considering only restaurants that choose a pooled compensation structure, the results suggest that restaurants with a higher seating capacity, and hence lower visibility, are rated as having inferior service by customers, consistent with B.E.Z.T. (2004). While the relationship is not statistically significant across the two measures of service, it remains when controlling for different categories of restaurants, such as price, whether the restaurant is over three years old and whether ownership is involved on a day-to-day basis. The result of inferior service within pooled restaurants as capacity increases is further supported by management's reluctance to institute a pooled system in a higher-capacity environment. Given the relatively small data set, there is merit in collecting more data points to see whether the negative relationship between service quality and capacity in pooled restaurants further approaches statistical significance.
Gerard R. Fischetti
Bentley University and University of California, Berkeley
Introduction
Lobbyists act on behalf of corporations and unions to persuade Congress to vote in a particular way.1 In doing so, lobbyists build relationships with Congresspersons, which has resulted in a revolving door between K Street and Capitol Hill, in which staffers become lobbyists, lobbyists become staffers, and Congresspersons become lobbyists. By examining the revolving door phenomenon, I am able
Table 2 shows the seat changes in Congress from 2000 to 2008. I focus on the 109th Congress of 2005-2006 and the 110th Congress of 2007-2008 to understand how lobbyists gain from being connected to a politician whose party is in power, as well as how a politician's seniority affects lobbyist revenue.

The 2004 reelection of George W. Bush corresponds to the 109th Congress, and the midterm election of 2006 corresponds to the 110th Congress. I focus on these final four years of the Bush presidency because the multiple seat changes provide a useful case study: the Republicans lost 30 House seats in the 2006 election and the Democrats gained 31, and the Democrats gained all six Senate seats previously occupied by Republicans. Ideally, I'd also include the 2008 election in my analysis, but my dataset does not currently have lobbyist information for 2009-2010.
Empirical Strategy
I outline a replication strategy and introduce my own model for addressing
seniority effects in Congress and consider
some theoretical limitations of my empirical strategy.
Model Replication
Following Blanes i Vidal et al. (2012), I
estimate:
(1) Rit=i+SPitS+HPitH+Xit+tpc+vit
where is the log of revenue; i is the individual lobbyist fixed effect; PitS denotes
a lobbyist who is connected to a senator;
PitH denotes a lobbyist who is related to
a representative; Xit is a vector of control variables such as experience; tpc are
time period, by party and chamber, fixed
effects; and vit is the idiosyncratic error
term. I also define a vector:
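Equation (1) is a two-way fixed effects specification. A minimal sketch of how it might be estimated, assuming (hypothetically) a flat CSV of the lobbyist panel with the column names below; clustering by lobbyist is my addition, not the author's:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: log_revenue (R_it), senator_conn (P_it^S),
# house_conn (P_it^H), experience (X_it), lobbyist (i), and
# period_party_chamber (the time-by-party-by-chamber cell for delta_tpc).
df = pd.read_csv("lobbyist_panel.csv")

spec = (
    "log_revenue ~ senator_conn + house_conn + experience"
    " + C(lobbyist) + C(period_party_chamber)"
)
# Cluster-robust errors by lobbyist (an assumption; the paper does not say).
fit = smf.ols(spec, data=df).fit(cov_type="cluster",
                                 cov_kwds={"groups": df["lobbyist"]})
print(fit.params[["senator_conn", "house_conn"]])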
I also define the vector P_it = (P_it^S, P_it^H) of the senator and representative connection variables. The parameters of interest are β^S and β^H. Under the assumption that
for individual fixed effects, α_i. Additional control variables are added in each column: model (2) adds a control for party (Democrat, Republican); model (3) adds a control for chamber (Senate, House); model (4) adds a control for experience (in semesters); and model (5) adds controls for individual fixed effects.

The connection to a senator is not statistically significant until models (4) and (5), while the connection to a representative becomes insignificant in model (5). The addition of controls in models (1) to (5) changes the coefficient for senators little, which shows that differences across lobbyists were not biasing the effect. For representatives, however, the addition of controls causes the coefficient to drop dramatically; the drop from (4) to (5) is the most notable because the coefficient becomes insignificant. This shows that unobserved heterogeneity among lobbyists biases the effect of their connections to representatives. Ultimately, I cannot conclude that there is any benefit to being connected to a representative. My findings are consistent with the intuition that being connected to a senator is more valuable than being connected to a representative.
For the extension, my dataset includes all 454 Congresspersons who served in the 109th and 110th Congresses.

Table 4 gives the results of the model given by Equation (7) and shows the effects of majority status and rank on lobbyist revenue. The number of observations refers to the number of groups of politicians who share the same ranking. In the first column, model (6), I estimate the effect of rank, majority and their interaction. Model (7) adds a control for lobbyist experience. Models (8), (9), (10) and (11) all control for lobbyist experience as well as chamber and party, in the following ways: (8) estimates the effects for Republican representatives; (9) for Democratic representatives; (10) for Republican senators; and (11) for Democratic senators.

Controlling for lobbyist experience in model (7) shows that being connected to a politician in the majority party leads to 21% higher revenue, significant at the 5% level. The politician's rank in model (7) is not significant. I consider this the most important model in Table 4 because it establishes that connection to a more experienced politician isn't necessarily indicative of higher revenue.

Interestingly, being connected to a politician in the majority party has significant effects on lobbyist revenue. Table 4 uses the 109th Congress, when Repub-
status as a high-ranking Democrat or Republican.
Conclusion
My study finds that being connected to a senator leads to 24% higher revenue, while the value of a connection to a representative is more complicated. Being connected to a representative is not important unless the representative belongs to the majority party, in which case the lobbyist's revenue increases (33% in my
So Sue Me
Market Reaction to Patent Litigation Verdicts and Patent Appeal Results
Lorna Zhang
Columbia University
Introduction
Patents are one of the most significant legal instruments protecting intellectual property (IP) rights. A patent gives the inventor exclusive rights to the patented idea for a limited period of time, typically 20 years from the date the patent was filed. A patentable invention must be new, useful, and nonobvious to a person skilled in the field of application at the time. Though new ideas resulting in useful products have obvious economic value, these ideas are often pure public goods, rendering them unprofitable for the inventor. Patents alleviate this problem by creating a legal means of conferring excludability upon otherwise non-excludable innovations.
Recently, we have seen a steady rise in patent litigation, with the number of patent suits rising from 2,281 in 2000 to 5,484 in 2012, a 140% increase.1 These litigations are costly affairs: across 26 biotechnology suits, the initial litigation announcement caused an average 2% decrease in firm value, representing a median shareholder-value loss of $20 million. Additionally, numerous studies have shown that firms often choose not to apply for a patent due to the expected cost and likelihood of defending the patent in court. Given its high cost, litigation can only serve

1. Statistic obtained from the Lex Machina intellectual property (IP) litigation database.

to reevaluate their holdings to reflect new expectations about cash flow and risk, revaluing the firm accordingly.
Any change in the stock prices of the companies involved in the litigation after a verdict is publicized will consist of two components: an uncertainty-removal component and a surprise component. The uncertainty-removal portion arises because once the verdict is public, the uncertainty surrounding the outcome of the litigation is removed, and stock prices shift to reflect that. The surprise component measures whether the verdict was in line with investor expectations. Raghu et al. find that, at least for the defendant, the market reaction at the time of a settlement or termination of IP litigation largely reflects discrepancies between investors' expectations and the actual outcome. If we assume that markets are capable of accurately assessing the optimal scope and value of a patent, then any deviation from market expectations would suggest a problematic ruling. The presence and magnitude of a surprise component could indicate that something is troubling about the ruling and that it might be more likely to be overturned in the appeals process.
Studying the predictive power of this market response will allow us to determine both whether market reaction is a potential
anticipating an appeal, and have therefore already factored some of that anticipation into the reaction. This is plausible given that over 80% of the cases in my initial sample were later appealed. However, this is only the case when the event study is done on defendant firms. When I split my sample into claimant and defendant firms and run separate regressions, the coefficient on the market-reaction variable when the event-study firm is the claimant is insignificant, with a p-value of around 0.7. This indicates that only the market reaction for defendant firms relates to the probability of reversal, likely because defendant firms have a much larger downside than claimant firms. A defendant firm may experience significant financial distress if it is ruled to have infringed upon the claimant's patents; a decrease in its wealth and competitiveness is inevitable if the ruling is not later overturned. On the other hand, if the claimant loses, it does not have to cease production of its product or pay royalties
divided into two parts. The first part is an event study; the second is a regression utilizing the results from the event study. There are four parts to every event study: (1) defining the event day(s); (2) measuring the stock's return during the event period; (3) estimating the expected return of the stock during this period; and (4) computing the abnormal return (actual return minus expected return). The event date was defined as the first
be free of firm-specific shocks that might cause its returns to deviate from the baseline level, as these would result in biased parameter estimates.3 However, it is not necessary that the market as a whole be free of shocks: so long as a shock is not specific to the firm under study, the market return will also reflect the effect of these events.

I used the estimated α_i and β_i along with the return on the market portfolio to calculate an expected return on the event-study firm's stock. The abnormal return was calculated by subtracting the expected return from the actual return. As I used a two-day event window, for each event study the abnormal returns for day 0 and day 1 were summed to give a cumulative abnormal return (CAR). The standardized CARs were calculated using the formula CAR_it / s_it, where s_it is the standard deviation of the regression residuals.
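A minimal sketch of the event-study mechanics just described, assuming pre-aligned daily return arrays; the estimation-window length is illustrative, and dividing the CAR by s_it follows the formula in the text:

import numpy as np
import statsmodels.api as sm

def standardized_car(stock_ret, market_ret, event_idx, est_len=250):
    """Two-day standardized CAR from a market-model event study."""
    est = slice(event_idx - est_len, event_idx)   # pre-event estimation window
    X = sm.add_constant(market_ret[est])
    fit = sm.OLS(stock_ret[est], X).fit()         # estimate alpha_i and beta_i
    alpha, beta = fit.params
    s = np.sqrt(fit.mse_resid)                    # std. dev. of the residuals

    days = [event_idx, event_idx + 1]             # day 0 and day 1
    expected = alpha + beta * market_ret[days]    # market-model expected return
    abnormal = stock_ret[days] - expected         # actual minus expected
    return abnormal.sum() / s                     # CAR_it / s_it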
Data
This study examines both the claimants (the firm(s) seeking damages for infringement) and the defendants (firm(s)
that have allegedly infringed) of patent
infringement litigation. In court documents, the first firm to file a lawsuit is
labeled the claimant regardless of who
owns the patent. For consistency, I define
the person or firm that owns the patent in
question as the claimant and the person or firm that has allegedly infringed
on the patent the defendant, regardless of who filed first. As we are interested in observing the market reactions
to the announcement of a verdict, at least
one firm involved in the litigation must
be a publicly traded company. I have
excluded subsidiaries of publicly traded
companies from this study as there is no
data readily available that points to the
importance of the subsidiary to the parent firm.
The data collection process was twofold: first, litigation data was obtained
and the characteristics of the case ascertained. Once a useable sample was assembled by paring down the dataset to
cases where an appeal was filed with at
least one publicly traded firm, an event
study analysis was done for each publicly traded company, for a total of 142
event studies. As there was no consolidated list of appealed cases, I created my
own dataset using litigation data from
the Lex Machina database. I looked at
562 cases from 2000-2012, determining
whether each case had been appealed and the outcome of the appeal.

A large majority (81%) of cases that went to trial were appealed; this is unsurprising given that only cases where both firms feel strongly about their positions go to trial. The true proportion is probably even higher, as firms involved in cases resolved in 2012 likely have not had time to file an appeal.

Of the 142 event studies, 64 involved verdicts that were reversed upon appeal. Oftentimes an appeal will not result in a simple decision to affirm or reverse; rather, the verdict will be reversed-in-part, affirmed-in-part, and/or remanded. In these cases, I denote any case where any portion was reversed or vacated upon appeal as reversed, because a reversal of any part of the original verdict indicates that there was something problematic about the initial ruling that I expect the market to have captured.
Analysis
Table 1 displays three different sets of results from my probit regression estimations. Table 2 displays the results of an ordinary least squares (OLS) regression, used to interpret the results of Table 1 at a specific point.
3. A firm-specific shock can be anything from the commencement of a litigation to a change in the upper-level management of the firm.

4. I do not have results for cases where neither company is publicly traded: in order to run my regression and calculate an abnormal return, at least one company had to be publicly traded.
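A sketch of the probit specification behind Table 1, run separately for claimant and defendant firms as in the text; the file and column names are hypothetical:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: reversed (1 if any part of the verdict was reversed
# or vacated on appeal), scar (standardized CAR from the event study),
# defendant (1 if the event-study firm is the defendant).
df = pd.read_csv("appealed_cases.csv")

fit_def = smf.probit("reversed ~ scar", data=df[df["defendant"] == 1]).fit()
fit_clm = smf.probit("reversed ~ scar", data=df[df["defendant"] == 0]).fit()
print(fit_def.summary())
print(fit_clm.summary())

# OLS (linear probability) version, as in Table 2, so coefficients read as
# marginal effects at a point.
fit_lpm = smf.ols("reversed ~ scar",
                  data=df[df["defendant"] == 1]).fit(cov_type="HC1")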
This might be expected when we consider that, for two reasons, the burden of proof is much higher for defendants than for plaintiffs. First, over the past few decades there has been a policy trend toward stronger patent protection; as a result, patent rates, claimant success rates in infringement suits, and the number of infringement suits filed have all increased. If a firm knew there were a high probability that the court would rule invalidity or non-infringement when it tried to defend its patent in court, it would have no incentive to patent its innovation. Thus, necessarily, the patent litigation system is set up so that the probability that an invalid or non-infringed patent will be declared valid and infringed is higher than the probability of the reverse.

Second, when a firm is sued for infringement, it will often argue that the patent in question is invalid or unenforceable. However, any issued patent has already been screened by the US Patent Office, so proving invalidity is difficult. These two facts mean that when the non-patent-holding firm does win, the verdict is less likely to be overturned on appeal, owing to the high burden of proof required to achieve such an outcome.
It is interesting that the magnitude of the defendant firm's abnormal market returns is much more effective than the claimant's in predicting the probability that a verdict will be reversed; however, this is not completely unexpected. As mentioned earlier, studies have shown that the defendant has much more at stake in a patent infringement case than the claimant because the downside is much larger for the defendant. The worst case for the claimant is that its patent is declared invalid and it loses royalties; it would not have to stop producing its product. Additionally, it is unclear whether the claimant could take full advantage of reduced competition should it win: since there are usually more than two firms competing in a single market, other firms would likely come in and take advantage of the reduced competition as well.

On the other hand, if the defendant loses, the firm could experience significant financial distress due to the damages and royalties it would be ordered to pay the claimant. Even if the defendant could afford these costs, it would likely have to cease producing and marketing the offending product, which could significantly damage its long-term
when the smaller firm is the patent holder, it has no recourse but litigation.5 The significant difference between the claimant and defendant samples in the coefficient on whether a firm is in a complex-technology industry may in fact be capturing the divergence in how a large, publicly traded firm deals with firms of varying sizes. Further work might look into the magnitude and significance of these differences and into whether smaller firms choose to litigate because they have no other option or because they want to take advantage of the potential royalties from winning a patent suit against a large, well-endowed firm.
These results further elucidate the relationship between patent litigation and financial markets. I have shown that markets exhibit some capability in predicting whether an initial verdict will be overturned upon appeal. This suggests that in some cases market forces may be more capable of rendering unbiased rulings than district courts, which is corroborated by my finding that courts consistently hand down too many initial rulings in favor of the patent holder. While this is partly by design of the court system, it is in reality counter-productive: if potential patent holders know that a ruling in their favor is significantly more likely to be overturned on appeal than a ruling against them, there will still be incentives against patenting. I would argue that slightly stricter requirements should be placed on claimant firms to prove that the patents in question have been infringed, so that more accurate rulings are made more often, reducing costs for all parties involved and increasing overall welfare.
5. When the larger firm is the patent holder, there is a higher possibility that it will determine that the costs of litigation outweigh the benefits and thus decide not to litigate the infringed patent.
TaperPedic
An Analysis of the Federal Reserve's Forecasted Asset Losses in the Face of Looming Interest Rate Shocks
Jenna Rodrigues
Princeton University
Introduction
The size of the Federal Reserve's balance sheet has increased significantly throughout the recent economic recession. In this study, I use data on average coupon returns for mortgage-backed securities, short-term Treasury Bills, and long-term Treasury Bills to build a forecasting model assessing how much the value of the Fed's core assets will decline when it raises interest rates. The predicted change in the value of the core assets on the Fed's balance sheet that will come with an interest rate hike serves as a means of assessing the risk the Fed took on during the crisis when it expanded its holdings of these three assets. My empirical results suggest that when the Fed raises interest rates to initiate an exit strategy from the recent balance-sheet expansion, the value of its core assets will drop significantly, with the majority of the loss coming from its holdings of long-term Treasury Bills. If the Fed is still holding a large quantity of these assets when it begins to tighten monetary policy, it will experience a significant decrease in its net income. A drop in net income could have serious negative implications for the economy and the scope of monetary policy.

The unconventional policy decisions that the Federal Reserve made during the recent economic recession have been under the microscope for the past few years. While such an active attempt to ease
Figure: Asset value over time as the interest rate rises.
etary policy tightening. By analyzing the difference between the two series of accumulated returns for each asset, I analyze the forecasted valuation loss that will occur with the Fed's decision to initiate contractionary monetary policy as it begins to implement its exit strategy. After running all three assets through this forecasting model, I combine the three projections to simulate the overall net-income loss the Fed will undergo when it tightens monetary policy. If the model predicts that the Fed will in fact undergo a significant loss, I proceed to analyze potential macro-level ramifications of the resulting capital loss, which can be attributed to the Fed's increased risk-taking during the crisis.
Literature Review
Balance Sheet Expansion
Some top Fed officials maintain that the risk is under control and does not pose a large threat to the Federal Reserve. For quite some time, Bernanke attempted to minimize the idea of imposing risk on the public, stating that the Federal Reserve's "loans that are collateralized by riskier securities are quite small compared with our holdings of assets with little or no credit risk" (Bernanke, "The Federal Reserve's Balance Sheet" 1). However, in a noteworthy research article, Fed analyst Asani Sarkar sets the stage for the ways in which the Fed portfolio changed throughout different stages of the crisis: beginning with providing short-term liquidity to sound financial institutions, proceeding to provide liquidity directly to borrowers and investors in key credit markets, and finally expanding to the purchase of longer-term securities for the Fed's portfolio (Sarkar 3). The author briefly discusses the risk associated with some of these programs as they are incorporated into the Federal Reserve's balance sheet, and the differences in credit risk across the different stages of portfolio expansion and diversification. In his analysis of changes in the Fed's balance sheet over time, Kenneth Kuttner discusses in great depth the massive shift out of Treasuries and into private-sector assets, which means the
covery strengthens and monetary policy normalizes, the Federal Reserve's net interest income will likely decline" (Yellen 1). Losses may particularly occur in the case that the Federal Reserve's interest

lenges for the Fed and could very well impact the conduct of monetary policy (Greenlaw et al. 4).
The economists demonstrate that one major area of potential loss is the duration (average term to maturity) of the Fed's portfolio, "(which) peaks at nearly 11 years in 2013 with the end of new quantitative easing.... The duration is important because as it rises, it increases the potential losses on the Fed's portfolio as the Fed strives to return its portfolio to a more normal level upon exit in a rising rate environment" (Greenlaw et al. 65). The authors also devote significant attention to the overlap between monetary and fiscal policy, concluding that "[a] fiscal risk shock occurring within the next five years would complicate matters greatly for a Fed that will be in the process of exiting from its massive balance sheet expansion" (Greenlaw et al. 81).
While these economists took a large-scale approach to obtain overall loss forecasts under a given set of conditions, my model examines how an interest rate hike could shift the forecasted expected returns of particular assets in the Fed's portfolio, thus contributing to the levels of losses many economists have recently alluded to. Since it is not clear where fiscal policy will stand at the time of a rate hike, or precisely when the Fed will begin to sell assets and drain reserves, the results of my study serve as a narrower lens of examination that can provide a starting point for assessing potential losses under any set of conditional variables.
Methodology/Data
To quantify the level of interest rate risk the Fed took on through its changing asset portfolio, I constructed an econometric forecasting model to assess how a rise in interest rates could shift the value of the three core assets on the Fed's balance sheet. Since the Fed holds a substantial amount of these assets, a decrease in their value would contribute to significant losses in the Fed's net income, provided the Fed is still holding a large amount of them at the time it raises interest rates. To reach my final stage of analysis, I performed a vector autoregression on the core set of assets on the Fed's balance sheet during the relevant time span. I will break the model into stages. Let R_t = (r_st, r_mt, r_lt)' collect the returns on short-term Treasuries, mortgage-backed securities, and long-term Treasuries; the first-order VAR is

r_st = g_ss·r_s,t-1 + g_sm·r_m,t-1 + g_sl·r_l,t-1 + e_st
r_mt = g_ms·r_s,t-1 + g_mm·r_m,t-1 + g_ml·r_l,t-1 + e_mt
r_lt = g_ls·r_s,t-1 + g_lm·r_m,t-1 + g_ll·r_l,t-1 + e_lt

where the g's are unknown coefficients and the e's are error terms.
I proceeded to calculate the following 3 x 3 coefficient matrix Γ₁ from the g coefficients after running the vector autoregression (* marks statistical significance):

Γ₁ =
[ g_ss = 0.9661373*   g_sm = -0.1582282*   g_sl = 0.1150768* ]
[ g_ms = 0.0079022*   g_mm = 0.9879826*    g_ml = 0.0022588  ]
[ g_ls = 0.0224532    g_lm = 0.0490017     g_ll = 0.9141076* ]

Using the constants from the regression outputs for each of the three time-series regressions, I constructed the intercept vector Γ₀, which is held fixed through the remainder of the forecasting model (its numerical entries appear only as a display in the original).
When constructing a one-year forecasting model for the (r) values of the three assets, I first perform a baseline analysis in which no shocks are introduced. This shows where asset values would naturally trend absent any shocks to the VAR beyond monetary policy tightening.

One-Year Forecasting Model Without Shocks (assume e_t = 0):

I begin the forecasting model using the (r) values from February 2013 to construct the R_{t-1} matrix in the equation below, together with the Γ₀ and Γ₁ matrices constructed above from the VAR output:

R_t = Γ₀ + Γ₁·R_{t-1}
In the first stage of my forecasting study, I run only the case where the vector of shocks is held at zero.
After the (r) values were obtained for 120 individual monthly periods, producing a ten-year forecast without additional shocks, I used the forecasted (r) values of the individual assets to determine the projected losses in the values of short-term Treasury Bills, long-term Treasury Bills, and mortgage-backed securities. The implied mean asset returns for the three series can be represented as

μ = (I - A₁)⁻¹·A₀

where A₁ and A₀ are the estimated coefficient matrices (Γ₁ and Γ₀ above) and μ collects the implied long-run mean returns.
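A minimal numerical sketch of the no-shock iteration and the implied means, using the Γ₁ estimates reported above; the Γ₀ entries and the February 2013 starting values are placeholders, since they are not legible in this copy:

import numpy as np

# Gamma_1 as reported in the article (rows/cols ordered short, MBS, long).
G1 = np.array([
    [0.9661373, -0.1582282, 0.1150768],
    [0.0079022,  0.9879826, 0.0022588],
    [0.0224532,  0.0490017, 0.9141076],
])
# Placeholder intercepts and starting returns (not legible in this copy).
G0 = np.array([0.0001, 0.0002, 0.0003])
R = np.array([0.001, 0.003, 0.020])   # stand-in for the Feb. 2013 (r) values

# No-shock forecast (e_t = 0): iterate R_t = G0 + G1 R_{t-1} for 120 months.
path = []
for _ in range(120):
    R = G0 + G1 @ R
    path.append(R.copy())

# Implied mean returns: mu = (I - A1)^{-1} A0, the VAR's long-run fixed point.
mu = np.linalg.solve(np.eye(3) - G1, G0)
print(mu)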
Discussion
In examining the accumulated returns on mortgage-backed securities, the forecasted accumulated returns are actually marginally lower when the interest rate spike is taken into account, which would make the valuation of MBS slightly higher than under benchmark conditions. Thus, this particular asset class would not be negatively affected by the Fed's monetary policy tightening.
In examining the accumulated asset returns for short-term Treasury Bills, the gap between the benchmark and the VAR forecast is significantly smaller. While the VAR forecast is slightly larger than the benchmark over the first three-month period, signifying a slight gain in the accumulated returns of this asset (and thus a decrease in valuation) when interest rates are raised, the differential is not large enough to be of great significance. Calculating the value loss of short-term Treasuries more precisely, the loss in the value of the three-month security is .0058%, which is essentially insignificant. This valuation can be calculated as follows:
Benchmark: P₀ = 1 / [1 + (.001/12) + (.001/12) + (.001/12)] = .99975
VAR Forecast: P₀' = 1 / [1 + (.001/12) + (.001293/12) + (.001402/12)] = .99969
Valuation Loss: P₀ - P₀' = .99975 - .99969 = .000058, i.e., a .0058% loss
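A quick numeric check of the arithmetic above:

# Price of a three-month bill discounted at monthly simple rates, as in the text.
benchmark = 1 / (1 + 0.001/12 + 0.001/12 + 0.001/12)        # ~0.99975
var_fcast = 1 / (1 + 0.001/12 + 0.001293/12 + 0.001402/12)  # ~0.99969
loss = (benchmark - var_fcast) * 100                        # ~0.0058 (% of par)
print(round(benchmark, 5), round(var_fcast, 5), round(loss, 4))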
While an increase in interest rates would not significantly affect the accumulated returns of mortgage-backed securities or short-term Treasury Bills, the same is not true for long-term Treasury Bills. Looking at the accumulated asset returns over the ten-year maturity period of a long-term Treasury Bill, there is a significant divergence between the benchmark accumulated returns and the VAR forecast. This divergence, which grows as the asset approaches maturity, demonstrates that the Fed would see significantly higher accumulated returns on long-term Treasury Bills when it raises interest rates. Such large increases in accumulated returns correspond to a significantly lower valuation of this asset class, and a significant capital loss when the valuation decrease is brought to scale. Judging from the degree of divergence, the Fed would experience approximately a ten percent loss in the value of its holdings of long-term Treasury Bills.
To examine the degree to which the decrease in valuation will affect the Fed's portfolio value on a larger scale, it is essential to revert to the discussion of how the asset distribution and size of the Fed's balance sheet have changed. Since the vector autoregression does not predict a significant increase in accumulated returns for short-term Treasuries, and actually predicts a slight decrease for mortgage-backed securities over the periods relevant to their respective valuations, the majority of the losses to the Fed's asset portfolio will be attributable to the increase in accumulated returns on long-term Treasury Bills. Assuming that approximately fifty percent of the Fed's asset portfolio is made up of long-term Treasury Bills, and that there is a ten percent decrease in the valuation of this asset as forecasted through the VAR model, the Fed can anticipate an expected five percent decline in its overall reserves. While a five percent loss in overall reserves may not appear significant at first glance, it is essential to consider the overall size of the Fed's portfolio holdings: given the significant expansion of the Fed's asset holdings and balance sheet over time, a five percent loss due to its holdings of long-term Treasury Bills alone could amount to billions of dollars.
This forecasted decline in the Fed's overall reserves is substantial and could significantly affect the Fed's decision-making going forward. That the Fed could, and most likely will, lose a significant amount of capital when it raises interest rates is a clear indicator that it took on a significant amount of risk through its innovative use of monetary policy tools throughout the crisis.
Conclusion
While it is highly likely that the Fed will see valuation decreases in its asset holdings when it raises interest rates, it is essential to ask whether this is a relevant concern for the Federal Reserve. What implications would a significant decrease in the value of the Fed's assets have for the Fed itself and the financial system as a whole? Would it undermine the Fed's credibility in the public's eyes and shift the lens of monetary policy going forward? A group of economists argue that "[many] observers have expressed concern about the magnitude of balance sheet expansion that the Fed and other central banks have engaged in, noting the potential costs in terms of market interference, disruption to the markets upon exit, potential losses effectively resulting in a drain on the Treasury, and rising inflation expectations, eventually leading to rising inflation" (Greenlaw et al. 62). A decrease in Federal Reserve net income would leave room for turmoil in a variety of areas that could raise larger economic concerns.

In Greenlaw's analysis, the authors consider what would happen if remittances from the Treasury became negative with
Rahul Singh
Yale University
When the 2013 Cypriot banking crisis culminated in a proposal to tax insured deposits, an unprecedented violation of Eurozone deposit insurance, policymakers questioned the sacrosanctity of deposit insurance, its role in the macroeconomy, and how depositors across Europe would interpret the unfulfilled threat.
We attempt to answer these questions using the popular framework proposed by Diamond and Dybvig in 1983.1,2 The Diamond-Dybvig model explains how deposit insurance prevents bank runs if all depositors are fully insured. But when the model is applied to the Cypriot case, we find that it cannot capture some essential features: strategic uncertainty and the likelihood of a bank run. For these reasons, we call for an extension of the Diamond-Dybvig model in that direction.
Diamond-Dybvig Model

Liquidity Creation

Banks create liquidity by investing in an originally illiquid asset and offering liquid deposits to consumers. We can de-

Bank Run

Even so, there are multiple Nash equilibria. Most importantly, there is an equilibrium in which both Type 1 and Type 2 consumers decide to withdraw at T=1,
causing a bank run. Suppose that a fraction f of depositors withdraws at T=1. A depositor who waits receives [(1 - f·r₁)R]/(1 - f) by T=2. At this payoff, waiting dominates withdrawing only while [(1 - f·r₁)R]/(1 - f) > r₁, i.e., while f < (R - r₁)/[r₁(R - 1)]. Once f passes this threshold, Type 2 consumers also prefer to withdraw from the bank at T=1, and the endogenous increase in f makes the run self-fulfilling.
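A numerical check of the threshold in the reconstruction above, under the standard Diamond-Dybvig assumption 1 < r₁ < R; the parameter values are illustrative only:

# Illustrative parameters: r1 is the T=1 withdrawal payment, R the T=2 return.
r1, R = 1.1, 1.5

def wait_payoff(f):
    """T=2 payoff to a depositor who waits when a fraction f withdraws at T=1."""
    return (1 - f * r1) * R / (1 - f)

f_star = (R - r1) / (r1 * (R - 1))       # threshold fraction of early withdrawals
print(round(f_star, 3))                   # ~0.727 for these parameters
print(wait_payoff(f_star - 0.01) > r1)    # True: below threshold, waiting pays
print(wait_payoff(f_star + 0.01) < r1)    # True: above it, a run is self-fulfilling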
3. The Guardian, "Cyprus Eurozone Bailout Prompts Anger as Savers Hand Over Possible 10% Levy," March 16, 2013.

4. The New York Times, "Cyprus Passes Parts of Bailout Bill, but Delays Vote on Tax," March 22, 2013.

5. Reuters, "Cyprus Banks Remain Closed to Avert Run on Deposits," March 25, 2013.

6. BBC News, "Cyprus Bailout: Parliament Postpones Debate," March 17, 2013.

11. The problem banks to which Lagarde referred are Laiki and Bank of Cyprus. Under the final agreement, policymakers split Laiki into a "good" bank consisting of its insured deposits and a "bad" bank consisting of its nonperforming loans and uninsured deposits. The good-bank assets will transfer to Bank of Cyprus. Shareholders, bond holders, and uninsured depositors of Laiki will take losses as their frozen assets are used to resolve Laiki's debts. Uninsured depositors in Bank of Cyprus will take forced equity conversions.

12. Eurogroup, "Eurogroup Statement on Cyprus," news release, March 25, 2013.
Rounding It Out
The Effect of Round Number Bias in U.S. and Chinese Stock Markets
Tiansheng Guo
Princeton University
Introduction
The exploitation of round number
bias is ubiquitous in retail and grocery
markets (grocery retailing). Prices are
most often set just slightly less than a
round number ($9.99, $9.95), exploiting
the irrational way in which our minds
convert numerical symbols to analog
magnitudes for decision-making: prices
just below a round number will be perceived to be a lot smaller than the round
number price due to the change in the
Figure 1
Figure 2
Figure 3
Round number bias also appears in performance numbers and other metrics. Pope and Simonsohn (2010) found that in baseball, the proportion of batters who hit .300 (1.4%) was four times greater than those who hit .299 (0.38%), and there were also more
Morwitz (2005) found that nine-ending
prices affect perception when the leftmost digit changes, and that these effects
are not limited to certain types of prices
or products. In financial markets, if the same preferences hold for certain numbers, we should see favored price levels appear in trades more frequently than levels subject to no such preference, and perhaps even excess returns. In a 2003
paper more germane to our question,
Sonnemans examines the Dutch stock
exchange from 1990-2001 and concludes
that price levels cluster around round
numbers, and round numbers act as price
barriers. Finally, Johnson, Johnson and
Shanthikumar (2008) found that investors were trading differently depending
on whether closing prices were marginally below or above a round number.
In light of these encouraging findings from past studies, we analyze how the effect of the bias differs in two drastically different countries, the U.S. and China. Although round number bias stems from an innate cognitive flaw present in societies using Arabic numerals, the U.S. and China have very different sets of investors, laws, financial systems, and cultures, all of which can influence the degree of round number bias present in their respective stock markets.
Data
To analyze price clustering around
round numbers and next day returns
conditional on round number prices, we
will study daily closing prices and daily
returns with cash dividends reinvested,
of a set of 36 U.S. stocks traded on NYSE
and 36 Chinese stocks traded on the SSE
(A shares only), for the decade 6/1/2001 to
5/31/2011, all obtained from Wharton Research Data Services. The starting date of 6/1/2001 falls after the complete decimalization of NYSE price quotations.

Figure 4

Stock prices are especially susceptible to security analysts' rounding of forecasts or investors'
targets. We use daily closing prices because they attract more investor attention
than a random price level during the day,
and can linger in the minds of investors
after a day of trading, capturing much of
the behavioral biases.
The reasons for drawing data from the U.S. and China, and from large caps and small caps, are that plentiful data exist for the two countries, and the financial markets of the two countries are so different in terms of listed companies, investors, and regulations that many extensions and further studies can build on this finding. We expect different sets of investors to trade large cap and small cap stocks, and different numbers of analysts to cover them, so we expect the magnitude of round number bias to differ across market caps and countries. For Chinese stocks, we draw from A shares listed on the SSE because the exchange has a large market capitalization and, unlike the HKSE, is not open to foreign investors.
We choose the period June 2001 to June 2011 because the NYSE reported prices in fractions (1/16) before 2001. The benefit of this decade is that it spans the recovery from the dot-com bubble and another rise and fall in prices around the Great Recession, which allows a larger range of price levels and more potential for certain prices to cluster. This decade is also interesting to analyze because the advent of online trading allowed many more unsophisticated traders to participate in the stock market while, at the same time, institutional investors became more sophisticated.
Methodology
The paper will use a two-part analysis. The first part will analyze U.S. and Chinese stock data for price clustering around round numbers. The second part will analyze next day returns conditioning on a round number closing price. A round number will be defined as a price ending in one round decimal ($XY.Z0) or two round decimals ($XY.00).
Figure 5

Price clustering is defined as price levels at which proportionally more trades
occur, and abnormal next day returns
as a significant regression coefficient on
a variable measuring round number.
If there were no price clustering, then
the decimals of stock prices should be
distributed uniformly from .00 to .99. If
there were no abnormal returns, then a
previous day closing price that ended in
a round number would have no significant explanatory power in the next day
returns.
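Both null hypotheses can be checked with standard tools. The sketch below, run on synthetic data with hypothetical column names, mirrors the two-part design: a chi-square test of the decimal endings against uniformity, then an OLS regression of next day returns on the round number indicator:

import numpy as np
import pandas as pd
from scipy.stats import chisquare
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"close": rng.uniform(5, 100, 10_000).round(2)})
df["cents"] = (df["close"] * 100).round().astype(int) % 100
df["ifone"] = (df["cents"] % 10 == 0).astype(int)
df["ret_next"] = rng.normal(0, 1, len(df))   # placeholder next day returns

# Part 1: under no clustering, the 100 cent endings are uniform.
counts = df["cents"].value_counts().reindex(range(100), fill_value=0)
print(chisquare(counts))

# Part 2: under no abnormal returns, ifone has no explanatory power.
print(smf.ols("ret_next ~ ifone", data=df).fit().params)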
The price clustering analysis will be
graphically presented in a frequency
chart, tallying the occurrences of round number closing prices, categorized by
country (U.S. vs. China) and size (large
cap vs. small cap), followed by a linear
regression (with binary dependent variable). The next day returns analysis will
be conducted with linear regressions, as
opposed to probit, for easier interpretation of the coefficients. It uses ifone as a
binary variable for the last decimal being
a round number, iftwo for both decimals, China for Chinese firms, and big
for large cap stocks. The two binary variables are compared against the benchmark frequencies expected under a uniform distribution.

Table 1

Table 2

The findings of the two-decimal analysis support those of the one-decimal analysis: round number bias is manifested much more strongly in U.S. data than in Chinese data, and within the Chinese data, bias is much stronger in small cap stocks than in large cap stocks. Most of the prices ending in
a 0 as the last decimal have another 0 or a 5 as the decimal before it, so that much of the occurrences of WX.Y0 are accounted for by WX.00 and WX.50. In the U.S., round number bias is so strong that prices ending in .00 occurred twice as often as under a uniform distribution. Prices ending in .X0 (.10, .20, .30, etc.), and especially .50, all occurred more often than under a uniform distribution in both the U.S. and China; additionally, in the U.S. only, all prices ending in .X5 occurred more often than uniform.
Note that Chinese investors preferred prices ending in .X0 and not .X5, while U.S. investors strongly preferred both. Additionally, in both U.S. large and small caps, .25 and .75 had the greatest occurrences of all prices ending in .X5, and they are the only two price endings that occur more often than their round-number counterparts, .20 and .70. This preference for quarter prices in the U.S. and not in China can be explained by the pervasive use of the quarter coin as currency, a concept foreign to the Chinese. Frequent use of the quarter among the U.S. population strengthens their familiarity with and affinity for .25 values. Another explanation for the clustering around quarter values is the lingering effect of the period before the decimalization of stock prices, which occurred right before our sample period: U.S. investors were used to trading in eighths and sixteenths (2/8, 12/16).
It is also interesting to see that the Chinese data, especially for small caps, show a preference for .99, .98, and .88 that is not seen at all in the U.S. data. In the Chinese small cap data in particular, .88 had more occurrences than any other non-round ending.

Table 3

Price Clustering Adjusted for Liquidity and Price Levels
First, we show that liquidity and price level effects can confound our price clustering analysis. More liquidity should mean less round number bias, while higher price levels should allow for more bias. If data sets with lower liquidity and higher price levels happen to have a higher level of round number bias, then the measured bias can actually be driven by liquidity and price level effects, and not by inherent round number bias of investors. If there are confounding effects, we need to adjust for liquidity and price level. For instance, in Table 2 we demonstrate an interesting relationship between U.S. bid-ask spreads and round number closing prices from 2001-2011.

Consequently, to adjust for liquidity and price level effects, we add volume to the regression and weight frequency by price level. The weighted frequency variables weightedifone and weightediftwo are calculated as follows:

weightedifone = (100 * ifone) / PriceLevel,
weightediftwo = (100 * iftwo) / PriceLevel.

The weighted variables reflect the degree of bias more accurately: for smaller price levels, the weighted variables are greater, representing the higher percentage costs that investors incur for being biased by one cent.

Table 3 presents a measure of inherent bias in each of the four data sets. The effects can be seen as a rescaled measure of the degree of bias, with higher values meaning more bias, though the raw coefficients are hard to interpret. The interpretation is as follows: given the same volume, being a U.S. small-cap stock (weightedifone = 1.86229, weightediftwo = 0.26531) on average increases the chance of seeing the last one (or two) decimals as round, expressed as a one-cent fraction of the closing price, by 1.86229% (or 0.26531%). For example, holding volume constant, a U.S. small stock trading around $23 has a 10.60% greater chance of a round first decimal than if it were a U.S. big stock:

(1.86229 - 1.40144) = (100 * ifone) / 23, so ifone = 0.10600.

But if it were trading around $4, the difference would be 1.8434% for a small stock over a big stock. We also notice that volume has a negative coefficient, as expected, since it reduces the amount of bias through a tighter bid-ask spread. An increase of a million shares per firm-day on average reduces weightedifone and weightediftwo by 0.01464% and 0.00125%, respectively. This is a substantial impact given that the average volumes of the four data sets vary widely.
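The back-of-the-envelope conversion above is easy to verify in a few lines, using the two coefficients quoted in the text (the helper function is ours):

# Difference between the U.S. small-cap and big-cap coefficients on
# weightedifone, converted back into a probability difference at a
# given price level via weightedifone = 100 * ifone / price_level.

gap = 1.86229 - 1.40144

def prob_diff(price_level):
    return gap * price_level / 100

print(f"{prob_diff(23):.4%}")   # ~10.60% at a $23 price level
print(f"{prob_diff(4):.4%}")    # ~1.84% at a $4 price level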
After controlling for liquidity and price levels, Figure 6 shows that the ranking of the degree of bias changes: from weakest to strongest, it runs U.S. large-cap, Chinese small-cap, Chinese large-cap, and U.S. small-cap. The results suggest that in the U.S., small-cap stocks exhibit more bias than large-caps, but in China it is the reverse. This apparent contradiction is explored in the Discussion section below.
Because volume may not have an equal impact across the U.S. and China, for the U.S. we can use bid-ask spread data, which directly measure the window of prices surrounding a possible round number. The variable usbidaskfrac and its powers are calculated from the bid-ask spread expressed as a fraction of the closing price.
Regression (2) in Table 4 takes into account that usbidaskfrac may have a nonlinear effect on the degree of round number bias. It also includes interaction variables that account for the possibility that the bid-ask window may not have an equal impact on U.S. large-caps and small-caps. The results here support the previous results that were found using volume as a proxy for liquidity. The negative coefficient on big means that U.S. big stocks exhibit less bias holding constant price level and bid-ask ratio. Note that the coefficient on usbidaskfrac is positive, as expected, so a higher spread induces more bias. However, the coefficient on the interaction term bigxusbidask is negative, so a higher bid-ask spread induces bias in small stocks more than in big stocks, possibly because of the already narrow spread in big stocks. All of this is consistent for prices that end in two round decimals or just one.
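In formula terms, the specification described here might be written as below; this is a sketch with synthetic data and hypothetical column names, not the author's actual code:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the U.S. panel.
rng = np.random.default_rng(1)
n = 5_000
df_us = pd.DataFrame({
    "weightedifone": rng.normal(1.5, 0.5, n),
    "pricelevel": rng.uniform(2, 120, n),
    "big": rng.integers(0, 2, n),
    "usbidaskfrac": rng.uniform(0.0001, 0.02, n),
})

# Regression (2) as described: a polynomial in the bid-ask fraction,
# a size dummy, and a size-by-spread interaction term.
model = smf.ols(
    "weightedifone ~ pricelevel + big + usbidaskfrac"
    " + I(usbidaskfrac**2) + big:usbidaskfrac",
    data=df_us,
).fit()
print(model.params)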
In conclusion, U.S. small-cap investors seem to be inherently more biased toward round numbers.

Figure 7

Table 4
Discussion: Price Clustering Adjusted for Liquidity and Price Levels
It seems contradictory that in the U.S. smaller stocks exhibit more bias, while in China smaller stocks exhibit less bias. This finding can be explained by the fact that investors in large-caps and small-caps differ between the U.S. and China in characteristics and motives. Kumar (2009) shows that in the U.S., individual investors with lower income and less education tend to gamble in small and local stocks, giving small-cap stocks more speculative qualities and more room for bias. Also, small-cap investors are more likely to sell out or buy in completely, taking a new position or exiting entirely, while turnover in large-caps is driven by existing holders who are merely trading around their positions (Cevik, Thomson Reuters). U.S.
large-caps have more analyst coverage
(Bhushan, 1989) and more information
available than small-caps, with prices adjusting faster to new information (Hong,
Lim, Stein 2000), reducing round number bias. On the other hand, Hong, Jiang,
Zhao (2012) find that in China, small local stocks are traded more by richer, more
educated households in developed areas
for status reasons ("Keeping up with the Wangs"). These investors may actually be
more sophisticated than investors who
trade large-caps, resulting in less bias in
Chinese small-caps.
Despite accounting for liquidity and price level effects, it is surprising to see that overall U.S. data remain as biased as Chinese data even after these adjustments.

Table 5

Figure 8

The variables weightedifone and weightediftwo are meant to capture the degree of bias net of price levels, so that the greater the variables, the more serious the bias.
The variables weightedifoneCHN and weightediftwoCHN are not statistically significant in any of the regressions and have little explanatory power for next day returns. Volume surprisingly has a positive effect on next day returns and does not seem to be capturing a liquidity premium (see the Discussion below).
For the U.S. data sets, we use the bid-ask fraction instead of volume, with next-day returns in percentages. Regression (2) in Table 6 illustrates that the round number bias variables have significant effects on next day returns. For small-caps, more bias (in both one and two decimals) means lower next day returns, with two decimals having an even larger effect. For large-caps, more bias in one decimal similarly means lower returns. However, for large-caps, the effect of having both decimals round is surprisingly large and positive, strong enough to overwhelm the usual negative effect of round number bias, generating higher next day returns.
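Because the bias variables are scaled by price level, translating a Table 6 coefficient into a next day return effect requires undoing that weighting. The sketch below uses a purely hypothetical coefficient value for illustration:

# Hypothetical coefficient on weightedifone, in percentage-return units.
beta_weightedifone = -0.05
price_level = 23.0

# weightedifone = 100 * ifone / price_level, so a round last decimal
# (ifone = 1) shifts the regressor by 100 / price_level.
effect = beta_weightedifone * (100 / price_level)
print(f"next day return effect: {effect:.3f}%")   # about -0.217% here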
Because of the weighting of the variables, the coefficients may be hard to interpret directly.

Table 6

Donaldson and Kim (1993) found support and resistance levels at round numbers in the DJIA, even though the DJIA is only an arbitrarily scaled index whose round numbers say little about fundamentals. They also found that there were no
support and resistance levels in less popular indices. Future studies can look into
this by taking more lagged returns; for
example, next day returns may be higher,
but excess returns two days or a week
later may be negative.
Conclusion
Although many previous studies have found positive results with different data sets and older time periods, we expected to find similar robustness in clustering in newer data, though we were uncertain whether the effect would be weaker or stronger. The increase in investor sophistication and the narrowing of bid-ask spreads should give investors fewer chances to manifest round number bias, but this may be countered by the increase in noise-trader participation.
Indeed, the price clustering effect was significant and robust across China and the U.S., and across large and small caps. However, the fact that U.S. data clustered significantly more than Chinese data indicates the possibility that U.S. investors are inherently more biased. After observing each year individually in the 2001-2011 data,
Mixed Messages
How Firms Should Utilize Private Information
Jian Jiao
University of Pennsylvania
Introduction
The strategic interaction between firms
and consumers can be considered a signaling game, in which consumers are not
completely aware of the quality of the
goods or services they get from a firm. It
is worthwhile to study the conditions under which firms have enough incentive to
fully reveal private information.
The literature has increasingly emphasized the real option and its application in financial economics. The real option approach states
that having an opportunity to take on
an investment project is similar to holding a perpetual American call option.
Moreover, it has been demonstrated that
the timing of investment is equivalent to
the timing of exercise. Both McDonald
and Siegel and Dixit and Pindyck show
the optimal timing of an irreversible investment project when future cash flows
and investment cost follow geometric
Brownian motion (McDonald & Siegel,
Dixit & Pindyck). Under the real option
setup, the decision no longer depends directly on comparing the present value of
the benefits with the present value of the
total cost. Rather, one of the models suggests that it is optimal to undertake the
project when the present value of the
benefits from a project is double the investment cost (McDonald & Siegel). Research done by McDonald and Siegel and
by Dixit and Pindyck solves the optimal
timing quantitatively under both the risk-neutral and risk-averse cases. However,
they do not pursue any signaling game in
which firms have various options regarding the timing decision.
Grenadier and Wang argue that both
the standard NPV rule and the real option rule described above fail to take
into account the presence of information
asymmetries. They introduce the idea
of optimal contracting into real option
studies; in their model, the firm's owner and manager are two separate interest groups. The manager has a contract with the owner, under which he provides costly effort when exercising the option. He also has private information about the true value of the underlying project. The conclusion differs greatly from the single-agent case. Since the manager only receives part of the option's payoff, he will exercise the option later than earlier models suggest. My research differs from that of Grenadier and Wang in that I model the company's interactions with outside buyers rather than the interaction between two parties inside the company. Even so, their research provides solid background for the interaction between the information provider and the firm.
Compared to Morellec and Schurhoff,
Grenadier and Malenko provide a brand
new perspective for the real option signaling game. In their model, the firm has
private information on the true value of the project while outsiders have to infer the firm's true type. The company cares about outsiders' beliefs and will take them into account when making investment decisions. The whole model characterizes the optimal timing of the exercise under both perfect information and asymmetric information. Using the standard real option framework depicted by Dixit and Pindyck, the paper suggests that the optimal timing depends on the sign of a belief function, which quantifies the firm's concern about outsiders' beliefs about its type.
This paper is the benchmark of my research; however, the outsiders in my model are a group of consumers, who make decisions after observing the firm's behavior. Grenadier first does such an analysis in his paper "Information Revelation Through Option Exercise," where he introduces a component that directly affects the stochastically evolving value, which is also an important factor in my model. The major difference is that his paper characterizes the strategic interactions among N > 2 firms, where the signal sent by each individual agent distorts other agents' exercise triggers. In contrast, my research designs a scenario with only one firm and one group of consumers. This distinction sheds light on the essence of the real option approach to investment and other corporate finance decisions in financial economics.
The Model Setup
A simple market is designed in which there exist one firm and a number of consumers. The firm possesses an investment opportunity to obtain some nondurable goods.

(4)

So that,

(5)
Let T* denote the first time X(t) hits a certain level L. Then, by the Laplace transform of drifted Brownian motion (Theorem 8.3.2 in Shreve, 2010):

(6)

Note that T* is interpreted to be infinite if X(t) never hits L.
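Stated generically in our own notation (assuming X(t) = X(0) + \mu t + \sigma W(t), a level L > X(0), and a discount rate r > 0, since equation (6) itself is not shown), the cited theorem reads:

T^{*} = \inf\{\, t \ge 0 : X(t) = L \,\}, \qquad \mathbb{E}\!\left[ e^{-r T^{*}} \right] = \exp\!\left( -\frac{L - X(0)}{\sigma^{2}} \left( \sqrt{\mu^{2} + 2 r \sigma^{2}} - \mu \right) \right),

with the convention that e^{-r T^{*}} = 0 on the event T^{*} = \infty.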
Now return to the maximization problem. The firm's objective is to choose the optimal investment trigger P*. As soon as the stochastic market price hits P*, the firm will exercise the option. Let P0 denote the initial price level; then

(7)

where,

(8)

Note that \beta is the positive root of the quadratic equation:

(9)

By the pre-specified requirement on the parameters in Part II, the positive root of this equation is always greater than 1.
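In the standard Dixit-Pindyck framework that the text follows, the quadratic in question and its positive root take the following form; this is a sketch under that assumption, with drift \mu, volatility \sigma, and discount rate r:

\frac{1}{2}\sigma^{2}\beta(\beta - 1) + \mu\beta - r = 0, \qquad \beta = \frac{1}{2} - \frac{\mu}{\sigma^{2}} + \sqrt{\left( \frac{\mu}{\sigma^{2}} - \frac{1}{2} \right)^{2} + \frac{2r}{\sigma^{2}}},

and \beta > 1 holds precisely when r > \mu, which is the parameter requirement referred to above.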
Now the maximization problem in the perfect information case becomes straightforward. Taking the FOC of \Pi(P*) in equation (7) with respect to P* gives:

(10)

Meanwhile, the following conditions must be satisfied in order to obtain an optimal investment trigger,

(15)

and,

(16)

Solving the first-order condition yields the optimal trigger for each type of company, or

(11)

Note that the optimal investment trigger depends only on the company's true type and the per-unit production cost. It is not affected by the interaction between the firm and consumers, so this optimal trigger can perfectly reveal the company's type. Plugging this result back into equation (7) gives the expected present value of profit,

(12)

Now let us compare the optimal solutions for the high type and the low type respectively under perfect information:

(13)

(14)

So which type of company has a higher investment trigger under perfect information? As for the coefficient \beta, differentiating equation (8) yields:

(17)

Therefore, as quality increases, the spoil/depreciation rate will decrease, causing a decrease in \beta and a higher critical investment trigger. Meanwhile, CH > CL, because the per-unit cost of producing these nondurable goods should increase along with quality improvement. In other words, the firm should spend more on each unit of the product it invests in if the product is of higher quality.

Other comparative statics cases are also interesting to study, as in the following set of equations:

(18)

Remember that in section II it is assumed that the parameters satisfy the stated restrictions. Therefore,

(19)

Proposition 2: The high type company has the higher optimal investment trigger under perfect information.

Proposition 3: The steeper the upward trend of the stochastic price, the higher the optimal investment trigger.

Proposition 4: The more volatile the stochastic price, the higher the optimal investment trigger.
The intuition behind proposition 2 is
quite clear. The high type company must
pay a larger amount of money to obtain
this call option, so it would like to wait for a longer time, expecting a higher market
price to recover its cost. Meanwhile, its
lower spoil/depreciation rate gives it the
opportunity to do so. Proposition 3 implies that if the price grows more rapidly
over time, the company can obtain a higher price without waiting too long and letting its goods be exposed to the risk of being spoiled. Furthermore, proposition 4 suggests that the firm is not against
high volatility because larger variance in
price gives the firm the opportunity to reach
a higher price level in advance. Similar to
proposition 3, it also does not necessarily require the firm to wait a longer period.
The classic NPV rule no longer applies in the real option framework. By the NPV rule, the firm should invest as long as the NPV is greater than 0, which implies P(t) > C. However, in the real option framework, exercising the option in advance incurs a loss
in opportunity benefit. These benefits
stem from the uncertainty of the market
price. By waiting for a shorter period of
time, the firm also sacrifices the possibility of obtaining a higher market price.
The Asymmetric Information Case
With private information, the company might fool consumers by exercising at a different output price. In particular, a low type company could pretend to be a high type company if it exercises the option at a higher price. The firm's objective is to maximize:

(20)

in which B(P*) is the outsiders' belief function.
Separating Equilibrium
In a separating Perfect Bayesian Equilibrium (PBE), each type of company uses a strategy that fully reveals its own type. It is reasonable to assume that the high type company distinguishes itself from the low type company by waiting longer and exercising the option at PH*. Meanwhile, the low type firm does not have enough of an incentive to mimic the high type, lest the nondurable goods in which it considers investing spoil or depreciate too much. It will exercise the option at PL*. Therefore, by Bayes' rule, the consumers' belief is derived:

(21)

Consumers are willing to buy QH units of the product if the price they observe corresponds to the high type optimal trigger. Any lower trigger will be taken to indicate a low type.

Constraints for obtaining separating equilibria are the following:

(22)

Equation (22) specifies the condition under which neither type has enough incentive to deviate from its original optimal investment trigger. To further demonstrate the separating equilibrium constraint, a numerical example is provided here. Suppose there is a corn market in which the stochastic price in equation (2) takes particular parameter values; with these values, inequality (22) becomes:

(23)
Both conditions put strict constraints on the relationship between the quantities-demanded ratio and the per-unit production cost. It has been assumed in the model that the low type company sells less than the high type company, so the inequality above suggests that the difference in quantities demanded between the low type and high type firms should not be too large. Otherwise, the first condition in (22) fails and the low type firm will try to mimic the high type by exercising the option later. In fact, if CL is normalized to 1, the relationship between the quantities ratio and the per-unit cost ratio can be plotted (fig. 1).

For any given value of the cost ratio, as long as the quantities ratio lies in the region between the two bounds, we will be able to obtain a separating equilibrium. If the quantities ratio is too low, the low type firm will benefit more from waiting, because it would be able to sell a much higher quantity than before if it imitated the high quality firm. Note that the upper bound is always 1, because our model assumes the low type sells less than the high type. Furthermore, the upper bound shows that the high quality firm never has an incentive to deviate from PH*, because for it the high trigger strategy strictly dominates the low trigger strategy.2

2. Unlike the low type firm, which can sell more if it deviates, the high quality firm enjoys no such advantage.

Conclusion
In this paper, the real option approach
and the signaling game framework are combined to study how uncertainty affects firms' behavior under perfect and asymmetric information. The geometric Brownian progression of price is implemented throughout the paper; it is particularly well suited to nondurable goods that are frequently traded in the market, such as corn and wheat. As hypothesized in section IV, uncertainty is seen to give firms the opportunity to explore higher prices and allows them to make decisions later. It is also discovered that under perfect information, high type firms wait longer, primarily because of the lower risk of losing the option associated with waiting and the firm's incentive to obtain higher prices in order to recover costs. Note that under perfect information, firms do not need to consider the quantity demanded when making decisions.

With asymmetric information, however, firms' behavior might be distorted by consumers' beliefs. As pointed out in section V, if the quantity demanded by consumers fails to meet the equilibrium constraints, a low type firm will deviate from its original optimal strategy and choose to mimic a high type firm.

This paper provides rigorous quantitative tools for a company's decision making. It gives theoretical suggestions for how a company can utilize the advantage of private information, with the goal of maximizing profits. Further research should explore the empirical side of this real option signaling model to determine what specific factors might affect firms' strategies in the real world.
Environmental Competition
Congratulations to the winners of the 2014 National Environmental Policy Competition: Raymond de Oliveira, Andy Zhang, and Francesca Audia of Columbia University. Taking second place is the team of Wina Huang, Scott Chen, Caroline Chiu, and Chengkai Hu from Columbia University. Receiving third place is the team of Andrew Tan and Peter Giraudo from Columbia University.
Forty-six individual and team participants from universities across the United States competed for a grand prize of $500, a second prize of $250, and a third prize of $125 by setting forth a robust policy model addressing present climate and energy challenges.
The competition took place between December 24th and January 15th. Each team of 1-5 members submitted a 15-minute presentation and an extensive commentary outlining the state of climate change and the economic challenges it poses.
Special thanks for the generous support of the Columbia University Economics Department, the Program for Economics Research, and The Earth Institute.
GUIDELINES
CER is currently accepting pitches for its upcoming issue. You are encouraged to submit your article proposals, academic scholarship, senior seminar papers, editorials, art and photography broadly relating to the field of economics.
You may submit multiple pitches. CER accepts pitches under 200 words, as well as complete articles.
Your pitch or complete article should include the following information:
1. Name, school, year, and contact information (email and phone number)
2. If you are submitting a pitch, please state your argument clearly and expand upon it in one brief paragraph. List
sources and professors/industry professionals you intend to contact to support your argument. Note that a full source
list is not required.
3. If you are submitting a completed paper, please make sure the file is accessible through MS Word.
Pitches will be accepted throughout the fall and are due by February 15; contact us for the exact date. Send all pitches
to economics.columbia@gmail.com with the subject line "CER Pitch - Last Name, First Name." If you have any questions
regarding this process, please do not hesitate to e-mail us at economics.columbia@gmail.com. We look forward to
reading your submissions!
Online at
ColumbiaEconReview.com
Read, share, and discuss | Keep up to date with web-exclusive content