
Addendum to The Analysis of a Momentum Model and

Accompanying Portfolio Strategies


Robert T. Samuel III (Correspondence: rtsamuel3@gmail.com)

Draft version: 27 June 2013


1 Univariate Tests & Statistics
1.1 Jarque & Bera Test
Jarque & Bera (1987) developed a Lagrange Multiplier statistic to test for non-normality,
which is defined as
JB = N\left[ \frac{(\sqrt{b_1})^2}{6} + \frac{(b_2 - 3)^2}{24} \right]    (1)
where $\sqrt{b_1}$ is the sample skewness, $b_2$ is the sample kurtosis and N is the sample size. The
JB statistic follows a $\chi^2_2$ distribution under the null hypothesis of normality. In R we use
the jarque.bera.test function with default parameters from the {tseries} package.
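As a minimal sketch (not the original analysis code), assuming x stands in for a return series; the simulated data below are purely illustrative:

    # Illustrative only: simulate a placeholder series and apply the
    # Jarque-Bera test from the {tseries} package with default parameters.
    library(tseries)
    set.seed(1)
    x <- rnorm(250)          # hypothetical return series
    jarque.bera.test(x)      # H0: normality; statistic is asymptotically chi-squared(2)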
1.2 Ljung & Box Test
Ljung & Box (1978) developed a portmanteau test to determine if observations within a time
series were independent, with their Q test statistic defined as
Q = n(n + 2) \sum_{k=1}^{h} \frac{\hat{\rho}_k^2}{n - k}    (2)
where n is the sample size, $\hat{\rho}_k$ the sample autocorrelation at lag k and h the number of lags to be
tested. The test statistic follows a $\chi^2_h$ distribution with a null hypothesis that the data are
independently distributed. In R we use the Box.test function, with type = "Ljung-Box" and
otherwise default parameters, from the {stats} package.
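A minimal sketch on simulated data; the lag choice below (h = 10) is illustrative rather than taken from the original analysis:

    # Illustrative only: Ljung-Box portmanteau test via Box.test() in base {stats}.
    set.seed(2)
    x <- rnorm(250)                            # hypothetical return series
    Box.test(x, lag = 10, type = "Ljung-Box")  # H0: independence up to h = 10 lags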
1.3 Generalized Autoregressive Conditional Heteroscedasticity
If we let

\varepsilon_t \mid \psi_{t-1} \sim N(0, h_t)    (3)
be a discrete, real-valued stochastic variable, then it follows a Generalized Autoregressive
Conditional Heteroscedasticity (GARCH) process [Bollerslev (1986)],
h_t = \alpha_0 + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2 + \sum_{i=1}^{p} \beta_i h_{t-i}    (4)
for specified values of q and p. In R we use the garch function with default parameters from
the {tseries} package.
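A minimal sketch, with a simulated series standing in for the data of interest; garch() from {tseries} fits a GARCH(1, 1) when left at its default order:

    # Illustrative only: fit a GARCH(1,1) with garch() from the {tseries} package.
    library(tseries)
    set.seed(3)
    x <- rnorm(500)          # hypothetical (de-meaned) return series
    fit <- garch(x)          # default order = c(1, 1), i.e. p = q = 1
    summary(fit)             # coefficient estimates and residual diagnostics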
1.4 Chow & Denning Test
Let the Variance Ratio (VR) statistic for a time series $y_t$ be

VR(k) = \frac{\hat{\sigma}^2(k)}{\hat{\sigma}^2(1)}    (5)
where k is a specified lag period,

\hat{\sigma}^2(k) = \left[ k(T - k + 1)\left(1 - \frac{k}{T}\right) \right]^{-1} \sum_{t=k}^{T} (y_t - y_{t-k})^2    (6)

is the k-period variance estimate,

\hat{\sigma}^2(1) = (T - 1)^{-1} \sum_{t=1}^{T} (y_t - y_{t-1})^2    (7)

is the one-period variance estimate given a time period T, $\hat{\mu} = T^{-1} \sum_{t=1}^{T} x_t$ and $x_t = y_t - y_{t-1}$.
Lo & MacKinlay (1988) showed that the proper test statistic under homoscedasticity was
M_1(k) = \frac{VR(k) - 1}{\phi(k)^{1/2}}    (8)

where

\phi(k) = \frac{2(2k - 1)(k - 1)}{3kT}    (9)
is the asymptotic variance. In cases of heteroscedasticity they proposed
M_2(k) = \frac{VR(k) - 1}{\phi^{*}(k)^{1/2}}    (10)
where

\phi^{*}(k) = \sum_{j=1}^{k-1} \left[ \frac{2(k - j)}{k} \right]^2 \frac{\sum_{t=j+1}^{T} (x_t - \hat{\mu})^2 (x_{t-j} - \hat{\mu})^2}{\left[ \sum_{t=1}^{T} (x_t - \hat{\mu})^2 \right]^2}    (11)
is the appropriate asymptotic variance. Given these statistics and their distributions, Chow
& Denning (1993) proposed calculating m Variance Ratio statistics where
MV_1 = \sqrt{T} \max_{1 \le i \le m} |M_1(k_i)|    (12)
is the appropriate test statistic under homoscedasticity and where
MV_2 = \sqrt{T} \max_{1 \le i \le m} |M_2(k_i)|    (13)
is the appropriate test statistic under heteroscedasticity¹. In R we use the Chow.Denning
function with values k = {2, 5, 10} from the {vrtest} package.
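A minimal sketch on a simulated return series; Chow.Denning() from {vrtest} takes the series and the vector of lags kvec, here k = {2, 5, 10} as in the text:

    # Illustrative only: Chow & Denning multiple variance ratio test
    # using Chow.Denning() from the {vrtest} package.
    library(vrtest)
    set.seed(4)
    y <- rnorm(500)                        # hypothetical return series
    Chow.Denning(y, kvec = c(2, 5, 10))    # multiple VR statistics for the given lags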
2 Multivariate Tests
2.1 Kruskal-Wallis Test
Kruskal & Wallis (1952) developed a test to determine if samples are from the same population.
They proposed the test statistic of

H = \frac{\frac{12}{N(N+1)} \sum_{i=1}^{C} \frac{R_i^2}{n_i} - 3(N + 1)}{1 - \frac{\sum_{j=1}^{O} T_j}{N^3 - N}}    (14)
where C is the number of samples, $n_i$ is the sample size for the ith sample, $N = \sum_{i=1}^{C} n_i$ is the
total observations, $R_i$ is the sum of the ranks for the ith sample, O is the number of groups
with ties, $t_j$ the number of ties in the jth group and $T_j = t_j^3 - t_j$. This test statistic follows
a $\chi^2_{C-1}$ distribution with the null hypothesis that all of the samples come from the same
distribution and an alternate hypothesis that at least one sample has a different distribution
than the other samples. Note that this test is an extension of the MWW test where C > 2.
In R we use the kruskal.test function with default parameters from the {stats} package.
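A minimal sketch comparing three simulated samples; kruskal.test() accepts a list of numeric vectors (the data here are purely illustrative):

    # Illustrative only: Kruskal-Wallis test across C = 3 hypothetical samples.
    set.seed(5)
    samples <- list(rnorm(30), rnorm(30, mean = 0.2), rnorm(30, mean = 0.5))
    kruskal.test(samples)    # H0: all samples come from the same distribution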
2.2 Mann-Whitney-Wilcoxon (MWW) Test
Mann & Whitney (1947) developed a test, which extended a test developed by Wilcoxon
(1945), that attempted to determine if one random variable was stochastically larger than
the other. Let x and y be two random variables, then the test statistic is
U = mn + \frac{m(m + 1)}{2} - T    (15)
where m is the sample size for y, n the sample size for x and T is the sum of the ranks of
the y variable. The null hypothesis is that the two variables are equal with the alternate
that x is stochastically smaller than y². In R we use the wilcox.test function with default
parameters from the {stats} package.
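A minimal sketch on two simulated samples; the default call is two-sided, while alternative = "less" matches the one-sided alternative described above:

    # Illustrative only: Mann-Whitney-Wilcoxon test via wilcox.test() in {stats}.
    set.seed(6)
    x <- rnorm(40)                            # hypothetical sample of size n
    y <- rnorm(40, mean = 0.3)                # hypothetical sample of size m
    wilcox.test(x, y)                         # default two-sided test
    wilcox.test(x, y, alternative = "less")   # alternative: x stochastically smaller than y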
¹ Charles & Darne (2009) provide an overview of the different Variance Ratio tests from which this section
borrowed.
² A random variable, x, is said to be stochastically smaller than another random variable, y, if f(a) > g(a)
for any a, where f(·) and g(·) are their respective continuous cumulative distribution functions.
3 Ordinary Least Squares (OLS) Diagnostic Tests
3.1 Equal Variances
If we define $\varepsilon = Y - X\hat{\beta}$ as the residuals from an OLS estimator, then one of the assumptions
of the Gauss-Markov theorem is that the variance should be constant, formally noted as

V(\varepsilon_i) = \sigma^2 < \infty    (16)
where $V(\cdot)$ is the variance estimator and $\sigma^2 = Var(Y)$³. If heteroscedasticity is present
then our standard errors will be incorrect and we cannot make accurate inferences about the
statistical significance of the parameter estimates from OLS. The Breusch & Pagan (1979)
test statistic is of the form
BP = nR^2    (17)
where n is the sample size of the original regression equation to be analyzed and $R^2$ is
the coefficient of determination from the auxiliary regression
e_i^2 = \gamma_0 + \gamma_1 z_{1,i} + \ldots + \gamma_p z_{p,i} + \nu_i    (18)
where $e_i$ represents the residual, $z_{p,i}$ represents an independent variable from the original
regression and p is the number of independent variables from the original regression. The
test statistic from (17) follows a $\chi^2_{(p-1)}$ distribution with a null hypothesis of homoscedasticity
and in R we use the bptest function with default parameters from the {lmtest} package.
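A minimal sketch on a hypothetical regression whose error variance depends on a regressor; bptest() from {lmtest} runs the auxiliary regression internally:

    # Illustrative only: Breusch-Pagan test via bptest() from the {lmtest} package.
    library(lmtest)
    set.seed(7)
    z1 <- rnorm(200)
    z2 <- rnorm(200)
    y  <- 1 + 2 * z1 - z2 + rnorm(200, sd = 1 + abs(z1))  # heteroscedastic errors
    fit <- lm(y ~ z1 + z2)
    bptest(fit)              # H0: homoscedastic residuals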
3.2 Autocorrelated Errors
Another assumption of the Gauss-Markov theorem is that the $\varepsilon$'s are uncorrelated, formally
noted as

cov(\varepsilon_t, \varepsilon_s) = 0, \quad \forall\, t \neq s    (19)
where $cov(\cdot)$ is the covariance estimator. The Breusch & Godfrey (1978) test statistic is of
the form
BG = nR^2    (20)
where n is the sample size of the original regression equation to be analyzed and $R^2$ is
the coefficient of determination from the auxiliary regression

\hat{\varepsilon}_t = \beta_0 + \beta_1 X_{1,t} + \ldots + \beta_k X_{k,t} + \rho_1 \hat{\varepsilon}_{t-1} + \ldots + \rho_p \hat{\varepsilon}_{t-p} + u_t    (21)
where $\hat{\varepsilon}_t$ is an estimate of the residual and $X_{k,t}$ an independent variable, both from the
original regression, and p is a specified number of lags to be tested. The null hypothesis is of
no autocorrelation with a $\chi^2_p$ distribution and in R we use the bgtest function with default
parameters from the {lmtest} package.
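A minimal sketch with simulated AR(1) errors; bgtest() from {lmtest} accepts the fitted lm object, with order giving the number of residual lags p (the lag choice below is illustrative):

    # Illustrative only: Breusch-Godfrey test via bgtest() from the {lmtest} package.
    library(lmtest)
    set.seed(8)
    x <- rnorm(200)
    e <- as.numeric(arima.sim(list(ar = 0.5), n = 200))   # autocorrelated errors
    y <- 1 + 2 * x + e
    fit <- lm(y ~ x)
    bgtest(fit, order = 4)   # H0: no autocorrelation up to lag p = 4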
³ Formally, if all three assumptions of the Gauss-Markov theorem are met then the OLS estimator is the
Best Linear Unbiased Estimator (BLUE) and no other estimator has a greater efficiency.
3.3 Non-normality of Errors
Although Gauss-Markov does not assume normality of the error terms, the Maximum Likelihood
Estimator (MLE) is no longer the Uniformly Minimum Variance Unbiased Estimator
(UMVUE) if the error terms are non-normal. For our purposes, we use the Jarque-Bera test
to test for normality of our OLS residuals.
4 Portfolio Performance Statistics
4.1 Sharpe Ratio
The Sharpe Ratio, Sharpe (1966), is defined as

SR = \frac{E[R_p - R_{rf}]}{\sqrt{Var[R_p - R_{rf}]}}    (22)
where $R_p$ represents the return of a portfolio and $R_{rf}$ is the appropriate risk-free rate⁴.
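As a minimal sketch of (22), assuming Rp and Rrf are aligned vectors of portfolio and risk-free returns (simulated here), the sample analogue replaces the expectation with the mean and the square-rooted variance with the sample standard deviation:

    # Illustrative only: sample Sharpe Ratio from hypothetical excess returns.
    set.seed(9)
    Rp  <- rnorm(252, mean = 0.0005, sd = 0.01)  # hypothetical portfolio returns
    Rrf <- rep(0.0001, 252)                      # hypothetical risk-free rate
    excess <- Rp - Rrf
    mean(excess) / sd(excess)                    # sample analogue of (22)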
4.2 Omega
Formally, if we let F be the specified CDF for a data series, x, which is defined on the range
(a, b), then

\Omega(L) = \frac{\int_{L}^{b} [1 - F(x)]\, dx}{\int_{a}^{L} F(x)\, dx}    (23)

is the Omega statistic for a specified threshold L [Keating & Shadwick (2002)]. Given that we
do not know a priori the distribution of our data series, we take a non-parametric approach
to estimate F via kernel density estimation techniques of which Li & Racine (2007) provide
an excellent overview. In particular, we are interested in estimating the leave-one-out kernel
density estimator

\hat{F}_{-i}(x) = \frac{\sum_{j \neq i}^{n} G\left(\frac{x - X_j}{h}\right)}{(n - 1)}    (24)
where $G(x) = \int_{-\infty}^{x} k(v)\, dv$ is the CDF for a specified kernel $k(\cdot)$ and h is the bandwidth size.
For our analysis we select the Gaussian kernel which we define as
k_h(x) = \exp\left( -\frac{\| x - X_j \|^2}{2h^2} \right)    (25)
where $\|\cdot\|$ is the Euclidean norm⁵. In R we use the density() function, with a bandwidth
of bw.nrd and default parameters, to estimate $\hat{f}(\cdot)$, the probability density function (pdf)
of x. With this pdf estimate we use the approxfun() function to approximate the CDF, with
zero probabilities given to those values outside the range of x. Next we use the integrate()
function to evaluate the two integrals, with (a, b) = (-\infty, \infty), and estimate our Omega
statistic as their ratio⁶. It should be noted that all of the listed functions reside within the
base {stats} package of R.

⁴ Note, in his original formulation, Sharpe did not include the risk-free rate in the calculation of the
statistic as it was assumed to be constant. In a later formulation, he updated his statistic to allow for a
benchmark which could vary with time and was not necessarily a risk-free asset.
⁵ The Epanechnikov is asymptotically the most efficient kernel; however, it can be ill-behaved in some
situations. In addition, the Gaussian kernel is about 5% less efficient than the Epanechnikov kernel so there
isn't a great loss in efficiency [Wand & Jones (1995, p. 31)].
⁶ Given that, due to approximation, the sum of the two integrals may not equal one, we use the property
that the integrand of $\int_{L}^{b} [1 - F(x)]\, dx$ is the complementary CDF of $F(x)$ to evaluate the numerator in $\Omega(L)$.
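Putting these steps together, a minimal sketch of the procedure on simulated data with a hypothetical threshold L = 0; the grid-based CDF construction and its normalisation are implementation choices made here for illustration, not necessarily those of the original code:

    # Illustrative only: non-parametric Omega estimate for a threshold L.
    set.seed(10)
    x <- rnorm(500, mean = 0.001, sd = 0.01)   # hypothetical return series
    L <- 0                                     # hypothetical threshold
    pdf_est <- density(x, bw = "nrd")          # Gaussian kernel, bw.nrd bandwidth
    # Build a CDF estimate on the density grid and interpolate it with approxfun();
    # F_hat is 0 below the grid and 1 above it, so finite bounds suffice below.
    step <- diff(pdf_est$x)[1]
    cdf_grid <- cumsum(pdf_est$y) * step
    cdf_grid <- cdf_grid / max(cdf_grid)       # normalise so the CDF ends at 1
    F_hat <- approxfun(pdf_est$x, cdf_grid, yleft = 0, yright = 1)
    num <- integrate(function(u) 1 - F_hat(u), lower = L, upper = max(pdf_est$x))$value
    den <- integrate(F_hat, lower = min(pdf_est$x), upper = L)$value
    num / den                                  # Omega(L) estimate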
4.3 Maximum Drawdown (MaxDD)
The Maximum Drawdown (MDD) statistic is defined as

MDD = \max_{t \in [0,T]} \left[ \max_{s \in [0,t]} W(s) - W(t) \right]    (26)
where W(t) is the wealth of a portfolio at time t. A heuristic viewpoint of MDD is that it
attempts to quantify the amount of money a strategy could lose before it corrects itself and
is a complementary statistic to the cumulative returns for a strategy. Magdon-Ismail et al.
(2004) provide an excellent overview of some of the properties of the Maximum Drawdown
for Geometric Brownian Motions (GBM).
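A minimal sketch of (26) on a simulated wealth path; cummax() tracks the running peak of W(t), and the result is in wealth units, matching (26):

    # Illustrative only: Maximum Drawdown of a hypothetical wealth path.
    set.seed(11)
    R <- rnorm(252, mean = 0.0005, sd = 0.01)  # hypothetical period returns
    W <- cumprod(1 + R)                        # wealth path W(t) starting from 1
    max(cummax(W) - W)                         # max over t of running peak minus W(t)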
4.4 Jensen's α
Let us define Jensen's $\alpha$, Jensen (1968), for the jth asset as the parameter derived from the
probabilistic model

\tilde{R}_{j,t} - R_{F,t} = \alpha_j + \beta_j [\tilde{R}_{M,t} - R_{F,t}] + \varepsilon_{j,t}    (27)
where $\tilde{R}_{j,t}$ is the return of the jth portfolio at time t given a model & strategy combination,
$R_{F,t}$ is the appropriate risk-free rate and $\tilde{R}_{M,t}$ is the return of the market portfolio.
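A minimal sketch of (27) on simulated data; the intercept of the OLS fit estimates $\alpha_j$ and the slope $\beta_j$ (all names and numbers here are hypothetical):

    # Illustrative only: estimate Jensen's alpha by OLS on hypothetical excess returns.
    set.seed(12)
    Rm <- rnorm(252, mean = 0.0004, sd = 0.01)                    # market returns
    Rf <- rep(0.0001, 252)                                        # risk-free rate
    Rj <- Rf + 0.0002 + 1.2 * (Rm - Rf) + rnorm(252, sd = 0.005)  # portfolio returns
    fit <- lm(I(Rj - Rf) ~ I(Rm - Rf))
    coef(fit)                             # intercept = alpha_j, slope = beta_j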
References
[1] Bollerslev, T. (1986) Generalized Autoregressive Conditional Heteroskedasticity, Journal of
Econometrics 31, 307-327
[2] Breusch, T. and Pagan, A. (1979) A Simple Test for Heteroscedasticity and Random Coefficient
Variation, Econometrica 47, 1287-1294
[3] Breusch, T. (1978) Testing for Autocorrelation in Dynamic Linear Models, Australian Economic
Papers 17, 334-355
[4] Charles, A. and Darne, O. (2009) Variance-Ratio Tests of Random Walk: An Overview,
Journal of Economic Surveys 23(3), 503-527
[5] Chow, K. and Denning, K. (1993) A Simple Multiple Variance Ratio Test, Journal of Econo-
metrics 58, 385-401
[6] Godfrey, L.G. (1978) Testing Against General Autoregressive and Moving Average Error
Models when the Regressors Include Lagged Dependent Variables, Econometrica 46, 1293-
1302
[7] Jarque, C. and Bera, A. (1987) A Test for Normality of Observations and Regression Resid-
uals, International Statistical Review 55(2), 163-172
[8] Jensen, M. (1968) The Performance Of Mutual Funds In The Period 1945-1964, The Journal
of Finance 23(2), 389-416
[9] Keating, C. and Shadwick, W. (2002) A universal performance measure, Journal of Perfor-
mance Measurement 6(3), 59-84
[10] Kruskal, W. and Wallis, W. (1952) Use of Ranks in One-Criterion Variance Analysis, Journal
of the American Statistical Association 47(260), 583-621
[11] Li, Q. and Racine, J. Nonparametric Econometrics. Princeton: Princeton University Press
(2007)
[12] Ljung, G. and Box, G. (1978) On a Measure of Lack of Fit in Time Series Models, Biometrika
65, 297-303
[13] Lo, A. and MacKinlay, A. (1988) Stock Market Prices do not Follow Random Walks: Evidence
from a Simple Specification Test, The Review of Financial Studies 1(1), 41-66
[14] Magdon-Ismail, M., Atiya, A., Pratap, A. and Abu-Mostafa, Y. (2004) On the Maximum
Drawdown of the Brownian Motion, Journal of Applied Probability 41(1)
[15] Mann, H. and Whitney, D. (1947) On a Test of Whether one of Two Random Variables is
Stochastically Larger than the Other, Annals of Mathematical Statistics 18(1), 50-60
[16] Sharpe, W. (1966) Mutual Fund Performance, The Journal of Business 39(1), 119-138
[17] Wand, M. and Jones, M. Kernel Smoothing. London: Chapman & Hall (1995)
[18] White, H. (1980) A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct
Test for Heteroskedasticity, Econometrica 48(4), 817-838
[19] Wilcoxon, F. (1945) Individual Comparisons by Ranking Methods, Biometrics Bulletin 1(6),
80-83