
Performing Event Studies

Peter Caya
June 17, 2017

We often read about major news regarding a company. Sometimes it is something spectacular,
like Uber's recent troubles, but often it involves something more mundane, such as the directors of a
company misleading stockholders about its financial health. News like this often prompts discussion on
financial news networks like CNBC about how the disclosure impacts the value of the company and the
way that the company behaves as a result.
Some parties (lawyers, in the case of my job) or academics attempting to discern how events
impact a company must find a way to objectively determine to what degree an event (or type of event)
affects a company. In securities fraud litigation this may mean determining whether real harm was caused
and what the damages are. For an academic it may instead mean studying one specific type of event (like
stock splits) across a range of companies.

What Issues Need to Be Addressed

The methodology employed has to meet the following requirements:


1. Follow the generally accepted literature on the manner in which security returns behave.
2. Be feasible in terms of available data.

What is an Event Study?

Terms Underlying Event Studies

The most crucial (and obvious) part of the analysis is determining what we are examining. Common topics
for event study analysis include:
• The impact that announcements can have on the value of a company.
• Admissions of fraud.
• Restatements of financial performance.

Measuring Event Impacts

Fitting a Market Model

To determine what unusual impact an event has, we first have to come to some kind of understanding of
what is normal. Let's take a scenario where we flip a coin 100 times. We'll do this 1,000 different
times and count the number of heads we get each time.
So, each of these trials will be governed by the binomial probability distribution function:

P(X = k) = \binom{n}{k} p^k (1 - p)^{n - k}

Where:
p = 0.5
n = 100
Typically we can expect the number of heads in each of our trials to fall between _______ and _______.
We will count a game as rigged if it lands two or more standard deviations away from the mean.
So, note some of the basic concepts here:
1. We define a probability distribution which describes the phenomenon we observe.
2. We also define a central measure of normality, as well as a tolerance around this central measure
within which observations are considered normal.
Given that we are running 1,000 of these coin-flipping trials, we may want to identify the trials where somebody
was cheating. We would expect the number of heads to fall between ___ and ___ 99% of the time,
so we'll call a game rigged if it falls outside of this range. The bounds themselves can be computed directly, as sketched below.
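As a minimal sketch, the mean, standard deviation, and cutoff values implied by the binomial model can be computed as follows (qbinom gives exact binomial quantiles rather than the normal approximation):

n <- 100
p <- 0.5
mu <- n * p                    # expected number of heads: 50
sdev <- sqrt(n * p * (1 - p))  # standard deviation: 5

# Two-standard-deviation band around the mean.
c(mu - 2 * sdev, mu + 2 * sdev)

# Range containing roughly 99% of trials, using exact binomial quantiles.
qbinom(c(0.005, 0.995), size = n, prob = p)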
Let's take a look at our numerical example:
set.seed(13)
sample_size <- 1000
# Simulate 1,000 trials of 100 fair-coin flips and record the number of heads in each.
trials <- rbinom(sample_size, size = 100, prob = .5)
# Plant three artificial "cheaters" at trials 22, 516, and 843.
trials[c(22, 516, 843)] <- c(9, 88, 31)
trial_probs <- pbinom(trials, size = 100, prob = .5)

stdev <- sqrt(100*.5*(1-.5))

library(reshape2)
library(ggplot2)

# Each trial, along with the mean and the one- and two-standard-deviation bands.
plot_data <- data.frame(1:1000,
                        rep(x = 50, 1000),
                        rep(x = 50, 1000) + stdev,
                        rep(x = 50, 1000) + 2 * stdev,
                        rep(x = 50, 1000) - stdev,
                        rep(x = 50, 1000) - 2 * stdev,
                        trials)

names(plot_data) <- c("Ind", "Mean", "psd", "2psd", "nsd", "2nsd", "Trials")

meltedplot <- melt(data = plot_data, id.vars = "Ind")

# Shift the y-axis to 0 to 100.
# Color outliers.
# Remove labels.
ggplot(meltedplot) + geom_point(aes(x = Ind, y = value, colour = variable))

[Figure: scatter plot of each trial's head count (value) against the trial index (Ind), with the mean and the one- and two-standard-deviation bands distinguished by colour.]
Anomaly Detection with a Market Model

The previous example is simple: we use the mean and some basic properties of the probability distribution
function to identify cheating in our 1,000 trials. As it turns out, we can identify suspicious games simply by
looking for the values which lie far enough from the mean to be extremely unlikely. In the case above, the
three artificial cheater trials we planted (9, 88, and 31 heads) have probabilities that are vanishingly small under the binomial model.
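As a minimal sketch, using the trials and stdev objects created above, the flagging rule could be written as:

# Flag every trial whose head count lies two or more standard deviations from the mean.
# The three planted outliers will be among the flagged trials, along with any
# naturally extreme ones.
suspicious <- which(abs(trials - 50) >= 2 * stdev)
trials[suspicious]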
Let's add one other detail to make things slightly more complicated: say that instead of performing simple
binomial trials as above, we perform a set of trials where the mean of the distribution changes over time.
The strategy above won't work as well, since the central measure that we use to determine outliers is itself
changing. In this case, we need to estimate where the center of the distribution sits at each point in time,
and then measure how far each observation falls from that moving center.
As it turns out, employing a regression model relies on the same reasoning as this second numerical example.
In the same way, we attempt to identify where the center of the distribution should be and the size of
the standard deviation of the distribution. In this case, the statistical model we use to describe the returns
tends to be ordinary least squares:

y = X\beta + \epsilon

We find the difference between the actual returns and the model's fitted returns in the same manner as the second example,
first by calculating the abnormal returns (residuals):

R = y - \hat{y}

However, the raw residuals from the regression have a disadvantage: they are measured in the same units as
the returns themselves, so their size depends on how volatile the security is, and a residual of a given
magnitude may be unremarkable for one stock but highly unusual for another.
Because of this, we scale the residuals according to the standard error of the regression equation by studentizing
the residuals. In our application, the results are called T-stats and are the ruler by which we attempt to
discern significant events:

TStat(R_i) = \frac{R_i}{SE}
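As a rough sketch of how these pieces fit together in R, the market model can be fit with lm(), the abnormal returns taken from residuals(), and the studentized residuals from rstudent(). The two return series below are simulated stand-ins (stock_ret and market_ret are hypothetical names, not data from the numerical example later on):

set.seed(1)
market_ret <- rnorm(250, mean = 0.0003, sd = 0.01)
stock_ret  <- 0.0001 + 1.2 * market_ret + rnorm(250, sd = 0.015)

# Fit the market model by ordinary least squares: y = X*beta + error.
market_model <- lm(stock_ret ~ market_ret)

# Abnormal returns are the residuals: R = y - y_hat.
abnormal <- residuals(market_model)

# Studentized residuals serve as the T-stats used to flag significant days.
head(rstudent(market_model))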

Abnormal Returns/Residuals

Studentized Residuals/T-stats

Event Window

The less obvious factor is the time period over which we examine the event. Often the impact of an
event plays out over several days. Because of this, we need to account for the cumulative effect that a
disclosure may have on a company during the event window.
The period that we use for the event window is critical, as it determines how we calculate the cumulative
abnormal returns.

Cumulative Abnormal Returns
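One way the cumulative abnormal return could be computed, as a minimal sketch that reuses the abnormal residuals from the market-model sketch above and assumes a hypothetical event window of trading days 120 through 125:

# Hypothetical event window: trading days 120 through 125.
event_window <- 120:125

# The cumulative abnormal return (CAR) is the sum of abnormal returns over the window.
sum(abnormal[event_window])

# Running total of abnormal returns across the window.
cumsum(abnormal[event_window])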

A Simple Numerical Example

library(quantmod)
library(data.table)

# Convert a ticker symbol (given as a string) into a data.table of its price history.
symb_eval <- function(x){ as.data.table(eval(parse(text = x))) }

# Compute simple percentage returns, leaving NA entries in place.
pct <- function(x){
  inds <- is.na(x)
  returns <- x[!inds] / Lag(x[!inds]) - 1
  ret_x <- x
  ret_x[!inds] <- returns
  return(ret_x)
}

getSymbols(Symbols = "GOOG",src = "google")

## 'getSymbols' currently uses auto.assign=TRUE by default, but will
## use auto.assign=FALSE in 0.5-0. You will still be able to use
## 'loadSymbols' to automatically load data. getOption("getSymbols.env")
## and getOption("getSymbols.auto.assign") will still be checked for
## alternate defaults.
##
## This message is shown once per session and may be disabled by setting
## options("getSymbols.warning4.0"=FALSE). See ?getSymbols for details.

## [1] "GOOG"
tickers <- c("GOOG","YHOO","GS")
stocks <- lapply(tickers, function(x){ getSymbols(x, src = "google") })

# Pull Google's price history into a data.table and compute daily returns.
Company <- symb_eval(stocks[[1]])
Company[, Returns := pct(GOOG.Close)]
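To tie the example back to the market model, one possible continuation is sketched below. Using GS as a stand-in benchmark is purely an assumption for illustration; the original example stops at computing returns for a single company:

# Assumption for illustration: use GS (stocks[[3]]) as a stand-in benchmark series.
Peer <- symb_eval(stocks[[3]])
Peer[, Returns := pct(GS.Close)]

# Join the two return series on their date index and fit the market model.
merged <- merge(Company[, .(index, GOOG = Returns)],
                Peer[, .(index, GS = Returns)],
                by = "index")
event_model <- lm(GOOG ~ GS, data = merged)

# Studentized residuals (T-stats) flag days with significant abnormal returns.
head(rstudent(event_model))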

Other Issues Involved with Event Study Analysis

[1] Campbell, J. Y., Lo, A. W., and MacKinlay, A. C., The Econometrics of Financial Markets.
[2] https://pdfs.semanticscholar.org/aac6/83a678a12a3dcd73389aac7289868847ea73.pdf
