
Issue 68


GREETINGS FROM THE EDITOR!


The mission of the Journal of Technical Analysis (JOTA) is to advance the knowledge and understanding of
the practice of technical analysis through the publication of well-crafted, high-quality papers in all areas
of technical analysis. While the MTA membership is the primary audience for the journal, the readership
reaches far beyond the organization. The journal's presence in libraries and electronic databases allows
both practitioners and academics around the world to access its content.
Included in this issue are two Charles H. Dow Award winning papers. George Schade, author of the 2013
winning paper, The Repeating Story of On Balance Volume, provides insight into the development of
an important technical tool. The 2014 Dow Award paper, An Intermarket Approach to Beta Rotation:
The Strategy, Signal and Power of Utilities, by Charles V. Bilello and Michael A. Gayed, shows how a basic switching model based upon intermarket analysis and momentum would have outperformed a buy-and-hold strategy since 1926.
These two papers, along with the additional six articles in this issue, highlight the diversity of topics which
are of interest to the technical analyst. These articles consider a range of topics, from the connection of
Keynes' writings to modern-day behavioral finance to the trading of leveraged ETFs. The MTA is a leader
in providing a record of historical developments in the field for future generations as well as encouraging
the study of new products, tools, and techniques to increase our understanding of the market.
The production of an issue of the Journal of Technical Analysis is a complex process, requiring the
assistance of many individuals. I would like to thank all of those who contributed to this process: the authors who have shared their knowledge, expertise, and experience with the broader community of technical analysts, the reviewers who provided valuable feedback and suggestions during the double-blind review process, and the staff at the MTA office who provided significant support in the production
and distribution process.
If you have an idea for a paper you would like to submit for consideration for publication in a future issue,
please let me know. The editorial board and I are here to help you throughout the process.

Julie Dahlquist, Ph.D., CMT

TABLE OF CONTENTS

LETTER FROM EDITOR . . . 3

AN INTERMARKET APPROACH TO BETA ROTATION . . . 7
Charles V. Bilello, CMT and Michael A. Gayed, CFA

THE REPEATING STORY OF ON BALANCE VOLUME . . . 15
George A. Schade, CMT

IDENTIFICATION AND UTILIZATION OF BALANCED LADDER LEVELS . . . 23
Carl Aspin

SUPPORT AND RESISTANCE LEVELS DEFINED . . . 35
Carl Aspin

KEYNES AND THE PSYCHOLOGY OF THE MARKETS . . . 41
Stella Osoba, CMT

LOOKING TO OUR OWN PLANET FOR MARKET INSIGHTS . . . 51
Tom McClellan

PARAMETER-RESULTS STABILITY: A NEW TEST OF TRADING STRATEGY EFFECTIVENESS . . . 61
Larry Connors and Matt Radtke

HARNESSING VOLATILITY FOR PROFIT THROUGH LEVERAGE . . . 69
G.L. Biff Robillard III, CMT and Thomas C. Pears

EDITORIAL BOARD

Stanley Dash, CMT
Applied Technical Analysis, TradeStation Securities

Richard J. Bauer, Jr., Ph.D., CMT, CFA
Professor, Finance, St. Mary's University

Fred Meissner, CMT
Founder & President, The FRED Report

Jeremy du Plessis, CMT
Head of Technical Analysis and Product Development, Updata

David Aronson, CMT
President, Hood River Research Inc. & TSSB Software.com

Cynthia A. Kase, CMT, CTA, MFTA
President, Kase and Company, Inc.

Kevin Lapham, CMT

Sid Mokhtari, CMT
Portfolio Manager, CIBC World Markets

Kristin Hetzer, CMT, CIMA, CFP
Royal Palms Capital, LLC

Carson Dahlberg, CMT
Partner, Northington Dahlberg Research

Publisher
Market Technicians Association, Inc.
61 Broadway, Suite 514
New York, New York 10006
646-652-3300
www.mta.org
Journal of Technical Analysis is published by the Market Technicians Association, Inc. (MTA), 61 Broadway, Suite 514, New York, NY 10006. Its purpose is to promote the investigation and analysis of the price and volume activities of the world's financial markets. The Journal is distributed to individuals (both academic and practitioner) and libraries in the United States, Canada, and several other countries in Europe and Asia. Journal of Technical Analysis is copyrighted by the Market Technicians Association and registered with the Library of Congress. All rights are reserved.



AN INTERMARKET APPROACH TO BETA ROTATION: THE STRATEGY, SIGNAL AND POWER OF UTILITIES
CHARLES V. BILELLO & MICHAEL A. GAYED

BIOGRAPHY
Charles V. Bilello, CMT
Charlie Bilello, winner of the 2014 Charles H. Dow Award and 3rd Place Winner of the 2014 Wagner Award, is the Director of Research
themes and portfolio positioning to clients. Prior to joining Pension Partners, he was the Managing Member of Momentum Global Advisors,

Michael A. Gayed, CFA
As winner of the 2014 Charles H. Dow Award and 3rd Place Winner of the 2014 Wagner Award, Michael helps to structure portfolios
to best take advantage of various strategies designed to maximize the amount of time and capital spent in potentially outperforming
investments. Prior to this Chief Investment Strategist role, Michael served as a Portfolio Manager for a large international investment group,

to structure client portfolios. In 2007, he launched his own long/short hedge fund, using various trading strategies focused on taking
advantage of stock market anomalies.

CHARLES H. DOW AWARD 2014 SUBMISSION WINNER


ABSTRACT

It is often said by proponents of the Efficient Market Hypothesis that no strategy can consistently outperform a simple buy and hold investment in
broad stock averages over time. However, using a strategy based on the
principles of intermarket analysis, we find that this assertion is not entirely
accurate. The Utilities sector has many unique characteristics relative to
other sectors of the broader stock market, including its higher yield, lower
beta, and relative insensitivity to cyclical behavior. Our analysis suggests that
rolling outperformance in the sector is not only exploitable, but also provides
important signals about market volatility, seasonality, and extreme market
movement. We explore historical price behavior and create a simple buy
and rotate strategy that is continuously exposed to equities, positioning into
either the broad market or the Utilities sector based on lead-lag dynamics.
Absolute performance and risk-adjusted returns for this beta rotation approach
significantly outperform a buy and hold strategy of the market and of the
Utilities sector throughout multiple market cycles.

INTRODUCTION

Buy and hold is often touted as the ultimate investment strategy when it comes
to stock market investing. The reasoning for this relates to the belief in the
Efficient Market Hypothesis, which states that because all known information
is factored into price, there is largely no edge to active and dynamic trading.
Indeed, numerous studies have documented the inability of investment
managers benchmarked to a market average to consistently outperform
passive strategies through stock selection.1 However, academic studies have
also noted persistent anomalies and phenomena in the marketplace which are
consistent and exploitable, putting the Efficient Market Hypothesis in doubt.2
Many of these studies focus on momentum and seasonality, and tend to be of
intense interest for technical traders.

1 See Day, Wang, and Xu (2001).
2 See Philip and Torbey (2002).

However, long before the power of momentum and seasonality was discovered through various white papers on those subjects, market technicians intuitively noticed price behavior which could lead broad market averages.

Using intermarket analysis, a branch of technical analysis that has grown tremendously in recent years, technicians have uncovered relationships between asset classes which can be predictive of economic and market cycles. One of the more recognized relationships is between bonds and stocks, where bonds tend to lead, preceding equity market tops and bottoms. It stands to
reason then, that Utilities, the most bond-like sector of the stock market, would
also show such leadership characteristics.
John Murphy, winner of the 2002 MTA Annual Award and a pioneer in the field
of intermarket analysis, explored this concept in depth.3 Murphy has stated that "prior to a stock market top the interest rate sensitive stocks, like the utilities and banks, usually start to break down. The most prominent and reliable are the utilities."4 Martin Pring, winner of the 2004 MTA Annual Award and also an innovator in the field of intermarket analysis, wrote of the tendency for Utilities to "put on their best performance relative to the market on either side of the bear market low."5
Edson Gould was another technician who wrote of the power of Utilities many years earlier. Gould, who was referenced in 1977 as the "dean of technicians" by Forbes magazine and received the MTA Annual Award in 1975, focused specifically on the lead-lag relationship between Utilities and the market. In his 1974 writing, Gould referred to the Dow Jones Utilities Average as one of the best early indicators of the stock market.

By noting how Utilities price action moved, Gould was able to make several accurate broad market forecasts. He postulated that the Utilities reflect to a greater extent than the Industrials the investment demand for stock and argued that "Utilities are money sensitive. Their steady growth requires huge and insistent capital investment so that their position and outlook is more dependent on interest and capital rate changes than are Industrial shares."6
The observations of Murphy, Pring, and Gould relating to the Utilities sector
provided a roadmap for us to quantitatively test if Utilities lead broad stock averages
over multiple market cycles. In this paper, we illustrate the results of that test,
documenting the persistent and exploitable industry momentum in the Utilities
sector relative to the broad market. We find that a strategy which positions either
into the Utilities sector or the broad stock market based on leadership significantly
outperforms a buy and hold strategy of both. In addition, we note that strength in
the Utilities sector increases the probability of experiencing near-term tail events
and higher overall stock market volatility. We also explore seasonality and find
that the "sell in May, go away" strategy may be largely explained through beta rotation during summer and fall months. Finally, we illustrate how to execute the strategy today using Exchange Traded Funds (ETFs) as the vehicle of choice.
Our findings are consistent with other studies which reference sector momentum
and the gradual diffusion of information across and within markets, a major
component of intermarket analysis. However, to our knowledge, no study has yet shown quantifiably how to outperform the stock market through Utilities rotation over time, nor explored the signaling power of low beta leadership as
a leading indicator of heightened volatility.

LITERATURE
The idea that one can generate excess returns through defensive beta rotation is
not new. The concept is appealing in that it is intuitive to position into lower beta,
non-cyclical sectors during corrections, recessions, and bear markets, and rotate
to higher beta and more cyclically-sensitive sectors in favorable economic and
market environments.
However, some studies have called this approach's feasibility into question. Davis and Philips (2007) argued that implementing a defensive investment strategy based on the leading signals of bear markets and recessions (e.g., forward price/earnings ratios, momentum indicators, and the shape of the U.S. Treasury yield curve) would not have produced better results than following a buy-and-hold strategy. However, the strategy assumptions made in that study are entirely different from our suggested approach. Davis and Philips used macro cyclical indicators (such
as the yield curve), valuation (such as forward P/E), and an arbitrary definition
of momentum (a 5% or 10% drop in the market over the trailing 12-months) as
their risk triggers. These assumptions are quite different than our approach, which
purely focuses on the relative price momentum of the Utilities sector and over a
much shorter time frame.
Momentum is a well-documented characteristic of markets, through both
individual stock movement over longer time periods and in the persistence of
sector strength in short-term time periods. Moskowitz and Grinblatt (1999)
note that unlike individual stock momentum, industry momentum is strongest
in the short-term (at the one-month horizon), and then, like individual stock
momentum, tends to dissipate after 12 months, eventually reversing at long
horizons. The specific time-frame of momentum drift at the one-month horizon
may be due to large-cap stocks leading small-cap stocks within a sector, and
because weekly portfolio returns are strongly positively autocorrelated, as documented by Lo and MacKinlay (1990). As information from market leaders
gradually diffuses down to smaller competitors, investors act with a lag in trading
such companies, causing the aggregate to continue in its prior direction.

3 The Market Technicians Association (MTA), founded in 1973, is a not-for-profit professional regulatory organization servicing over 4,500 market analysis professionals in over 85 countries around the globe. See www.mta.org.
4 See Wilkinson (1997).
5 See Pring (2002).
6 See Gould (1974).


Combining one-month momentum with defensive signaling through relative outperformance also has important implications for seasonal findings. A well-known strategy is "sell in May, go away," also known as the Halloween Effect. This strategy focuses on the stock market's relatively poor performance during May through October as compared to the November through April period.
May through October as compared to the November through April period.
Jacobsen and Visaltanachoti (2006) find a substantial difference between
summer and winter returns in different sectors and industries over the period
of 1926-2006. The effect is almost absent in sectors and industries related to
consumer consumption, but is strong in production sectors and industries. The
Utilities sector in their work exhibits the highest probability of all sectors in
terms of summer and winter returns being indifferent. Their findings confirm
that throughout multiple cycles, Utilities exhibit very different behavior than
many other sectors of the stock market, and are unaffected by the calendar.
This allows for more consistent, exploitable lead-lag characteristics in Utilities.
Persistent strength in sectors, however, is about far more than simple trend
following of leaders and laggards. The information contained in sector
movement can be important from the standpoint of asset allocation and
risk positioning. To the extent that sector movement can be indicative of
future inflation, credit risk, and monetary policy, overall market averages
might act with a lag to coming macro changes. Hong, Torous, and Valkanov
(2005) argue that an industry will lead the market if it has information about
market fundamentals. They also find that stock markets react with a delay to
information contained in industry returns about their fundamentals and that
information diffuses only gradually across markets.
Utilities are unique in this sense due to their behavior as a risk-averse, low beta
sector, and their connection to interest rates as a driver of demand and earnings
due to historically high debt/equity ratios. During periods of economic fragility
and volatility in financial markets, the Utilities sector tends to outperform
broader cyclical trades. The less cyclical nature of the sector is largely due to
prior regulation which limited pricing power, much of which began with the
Public Utility Holding Company Act of 1935.7 This Act regulated the parent
or holding companies of Utilities by limiting rate increases and preventing
speculation in riskier businesses with ratepayers' money. In preventing this
speculation, Utilities became more insulated from idiosyncratic increases in
their cost of borrowing money.
Thus, after the Act's passage, the earnings of the Utilities sector became more and more driven by the cost of capital rather than by revenue growth prospects. When expectations for falling interest rates increased, Utilities tended to outperform the market, reflecting the less robust growth period for the overall economy anticipated by investors in the sector. Conversely, when expectations for rising interest rates increased, Utilities tended to underperform. Therefore, the direction of interest rates became a major driver of earnings growth and beta sentiment, which caused investors focused on the sector to heavily consider the expected term structure of interest rates. The yield curve in and of itself is considered a leading indicator of the economy. By extension, Utilities might be considered a leading indicator of the stock market, inflation, interest rates, and volatility.

THE STRATEGY
Edson Gould largely focused on the Dow Utilities relative to the Dow Industrials
because that was the most readily available dataset for him to make his forecasts.
However, because Dow indices are price-weighted, it stands to reason that a
more comprehensive data set should be used to not only include more stocks,
but also to more appropriately weight companies based on capitalization.
In addition, the Dow Utilities and Dow Industrial averages are not total return
indices. Dividend yield information for the Dow averages is limited, but dividends clearly have a significant impact on investor wealth. Clarke and Statman (2000) estimated
that if a total return calculation were done using dividend approximations, the
Dow Jones Industrial Average in 1998 would have been 652,230 versus 9,181
for capital appreciation alone. Since dividends have such a large impact over
time through compounding, and because the Utilities sector tends to have a
higher dividend yield than the market average itself, a true strategy must
incorporate total return8 data.
Using data provided by Fama-French resolves both of these issues by being
market weighted and total return. Using the Fama-French price data going
back to July 1926, we developed a simple trading strategy:
When a price ratio (or the relative strength) of the Utilities sector to the
broad market is positive over the prior 4-week period, position into Utilities
for the following week. When a price ratio (or the relative strength) of the
Utilities sector to the broad market is negative over the prior 4-week period,
position into the broad market for the following week.
The basis for using the 4-week rate-of-change interval is the research illustrating monthly momentum among industry groups.9 In order to achieve
a more tactical strategy that is better able to adapt to intra-month volatility,
we converted the monthly time frame into a weekly signal. We have named
the approach the Beta Rotation Strategy (BRS) as it attempts to rotate into Utilities when the investing environment is more favorable towards lower-beta equities and into the market when the investing environment is more favorable towards higher-beta equities. Such a rotation translates the classic intermarket relationship of Utilities relative strength as a leading indicator of market cycles into an actual trading strategy.

7 Source: http://www.citizen.org/cmep/energy_enviro_nuclear/electricity/deregulation/puhca/
8 Source: http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html
9 See Moskowitz and Grinblatt (1999).
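To make the rotation rule concrete, the following is a minimal sketch in Python rather than the authors' code. It assumes two aligned weekly total-return index series (the names `utilities`, `market`, and `beta_rotation_returns` are hypothetical) and, consistent with the no-cost assumption noted in the footnotes, it ignores slippage and commissions.

```python
import pandas as pd


def beta_rotation_returns(utilities: pd.Series, market: pd.Series,
                          lookback_weeks: int = 4) -> pd.Series:
    """Weekly returns of a simple beta-rotation rule.

    `utilities` and `market` are weekly total-return index levels on the
    same dates. If the Utilities/market ratio rose over the prior
    `lookback_weeks`, hold Utilities for the following week; otherwise
    hold the broad market.
    """
    ratio = utilities / market                        # relative strength line
    in_utilities = ratio.pct_change(lookback_weeks) > 0

    util_ret = utilities.pct_change()
    mkt_ret = market.pct_change()

    # Last week's signal drives this week's position (no look-ahead).
    signal = in_utilities.shift(1).fillna(False).astype(bool)
    return util_ret.where(signal, mkt_ret).dropna()
```

A long/short variant of the same signal can be sketched by returning `(util_ret - mkt_ret).where(signal, mkt_ret - util_ret)` instead, and a per-trade cost haircut can be applied whenever `signal` changes value.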
Using the weekly signal from July 1926 through July 2013, the BRS shows
significant outperformance versus a buy-and-hold portfolio of both the
market and the Utilities sector. As illustrated in Chart 1 below, a $10,000 initial
investment in the strategy in July 1926 grows to $877 million in July 2013
versus $34 million for the market and $17 million for the Utilities sector.10

CHART 1: Growth of $10,000: July 1926 - July 2013

This translates into a substantial 4% outperformance per year, with a 13.9% annualized return for the BRS compared to a 9.8% return for the market and a 9.0% return for the Utilities sector. But long-term outperformance by itself is not the only measure of the effectiveness of a strategy. If the predictive power of the Utilities sector is as strong as our research suggests, the outperformance of the BRS should be persistent over various time periods and also perform well on a risk-adjusted basis. Thus, we test the robustness of the BRS in a number of ways.

First, in Table 1, we break down the performance into various time periods around significant legislative events for Utilities. This is important as one could argue that the behavior of Utilities has changed over time, making the signal more or less powerful. What we find is that the outperformance of the BRS is observable in all time periods. While the performance of the BRS did improve following the Public Utility Holding Company Act of 1935, the remaining time periods showed consistent outperformance of over 4% per year.11

Second, in Table 2, we break down the performance of the BRS by decade. Here too we observe outperformance in all decades, with some normal variability. Importantly, we see the greatest outperformance during periods of market turmoil (1926-1929, 1970-1979, and 2000-2009). We discuss this finding in further detail in the volatility signal section below.

Third, we evaluate the rolling 3-year outperformance of the BRS to test the consistency of the outperformance over a shorter time frame that is more in line with how many institutional investors judge investment performance. Chart 2 illustrates consistent outperformance during the overwhelming majority of 3-year time periods. Overall, the BRS outperforms the market in 82% of rolling 3-year periods.

CHART 2: Rolling 3-Year Outperformance


10 The assumptions used in this section are no slippage or commission (more on this later).
11 Legislative events: 1) Public Utility Holding Company Act of 1935; 2) by 1962, 2,419 electric and gas distribution Utilities came under jurisdiction of the SEC (CRS Report for Congress on Electricity Restructuring Background, Amy Abel, 1999); 3) Public Utility Regulatory Policies Act of 1978; 4) Energy Policy Act of 1992.


Fourth, we tested the strength of the strategy on a risk-adjusted basis by taking the annualized returns of the BRS and dividing those returns by the annualized volatility of the BRS. We then compared this ratio to the same ratio for the market. What we found in Table 3 is that the strategy indeed shows superior risk-adjusted returns in all time periods.
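As a rough illustration of this comparison, not the authors' exact computation, the ratio can be built from a series of weekly strategy returns as follows (the helper name `return_to_volatility_ratio` is hypothetical, and 52 periods per year is assumed):

```python
import numpy as np
import pandas as pd


def return_to_volatility_ratio(weekly_returns: pd.Series,
                               periods_per_year: int = 52) -> float:
    """Annualized return divided by annualized volatility (no risk-free rate)."""
    growth = (1 + weekly_returns).prod()
    ann_return = growth ** (periods_per_year / len(weekly_returns)) - 1
    ann_vol = weekly_returns.std() * np.sqrt(periods_per_year)
    return ann_return / ann_vol
```

Computing the same ratio for the market's weekly returns allows a side-by-side comparison of the kind reported in Table 3.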

Lastly, we tested a long/short version of the BRS. When the BRS calls for a rotation into Utilities, the long/short strategy goes long the Utilities sector and short the market. Conversely, when the BRS calls for a rotation into the market, the strategy goes long the market and short the Utilities sector. Table 4 shows that the long/short strategy produced consistently positive annualized returns over time.

THE VOLATILITY SIGNAL

It would have been highly challenging and unfeasible for an investor to carry out the BRS in the past. Before the advent of ETFs, gaining exposure to the entire Utilities sector or the broad market would have been extremely cumbersome, to say the least. Additionally, a strategy based on weekly positioning would have been prohibitively expensive due to slippage and commission costs, particularly in the pre-May Day 1975 era.12 However, just because it may not have been possible to follow through on such a strategy in the past does not mean one should disregard the relative strength of Utilities and its predictive power. The lead-lag behavior of the Utilities sector can be a critical warning sign of higher average volatility to come in the market, and can be an early tell of whether the odds of an extreme tail event are rising.

In proving this thesis, we first examine the volatility of the market when the BRS is in Utilities (Utilities are leading) and compare that to the volatility of the market when the BRS is in the market (Utilities are lagging). If Utilities relative strength is predictive of higher volatility, then we should see higher volatility for the market when Utilities are leading and lower volatility for the market when Utilities are lagging.

This is indeed what we witness in Table 5, with overall market volatility of 18.1% when the BRS is in Utilities vs. market volatility of 16.6% when the BRS is out of Utilities. The spread becomes more significant after the passage of the Public Utility Holding Company Act of 1935 and the changing dynamics of the Utilities sector from then to 1962. Prior to this Act's passage, Utilities exhibited higher volatility than the market, whereas from 1963 until today the sector has exhibited consistently lower volatility. When viewed over this time period, the spread widens to 4.4%, with a market volatility of 18.0% when Utilities are leading versus 13.6% when Utilities are lagging.

To test whether Utilities leadership was predictive of higher volatility over shorter time periods, we then measured the percentage of time the BRS was in Utilities during periods of market stress.

12 The assumptions used in this section are no slippage or commission (more on this later).


First, in Table 6, we looked at the worst weekly declines for the market since 1926.
We found that during those weeks, the BRS was in Utilities for a significantly
higher percentage of time than overall, suggesting that Utilities strength is a
leading indicator of market volatility. In the worst 2% of weeks (declines > 5.5%)
in history, the BRS was in Utilities 58.9% of the time versus 49.7% overall.
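A rough sketch of this conditional-frequency check follows; it is an illustration with hypothetical names (`utilities_share_in_worst_weeks`), where `in_utilities` is the weekly boolean signal from the earlier sketch aligned with weekly market returns:

```python
import pandas as pd


def utilities_share_in_worst_weeks(market_ret: pd.Series, in_utilities: pd.Series,
                                   quantile: float = 0.02) -> tuple[float, float]:
    """Fraction of time the signal was in Utilities during the worst weeks
    for the market, compared with the unconditional fraction."""
    threshold = market_ret.quantile(quantile)   # e.g., worst 2% of weekly returns
    worst_weeks = market_ret <= threshold
    conditional = in_utilities[worst_weeks].mean()
    unconditional = in_utilities.mean()
    return conditional, unconditional
```

A conditional fraction well above the unconditional one is the pattern the table describes.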

Next, in Table 7, we looked at both the highest levels of VIX values and greatest
spikes in history. While the data set for the VIX only dates back to 1990, we find
similar results to Table 6. The BRS strategy was positioned in Utilities at a much
higher rate during periods of market stress than overall.
THE SEASONALITY SIGNAL: SELL IN MAY AND ROTATE AWAY?
"Sell in May and go away" is a finding that instructs investors to sell their stock
holdings in May before the worst 6-month period for stocks and re-enter those
positions in November before the best 6-month period for stocks. A number of
studies have shown the persistence of this strategy over time in various global
markets. Reasons postulated for this phenomenon include, among others,
vacations during summer months and changes in risk aversion due to Seasonal
Affective Disorder (SAD).13
Our analysis, illustrated in Table 8, confirms that performance is significantly
higher during the November through April period than the May through
October period. This is true for the Utilities sector, the market, and the BRS. We
also observed that the percentage of time the BRS is in Utilities is significantly
higher in the summer months (51.5%) than in the winter months (47.8%).

13 See Kamstra, Kramer and Levi (2003).

This is important as it may provide an additional clue as to why "sell in May" has persisted over the years. If, as argued by Jacobsen and Visaltanachoti (2006), Utilities show the least differentiation among sectors between summer and winter months, then it stands to reason that deviations in their relative strength would have more predictive power than other sectors. Given that Utilities are the most bond-like sector of the market, changes in interest rates are likely a driving force behind these deviations.


We indeed see this as interest rates have tended to fall during the May through
October period and rise during the November through April period (see Table 9).
This confirms the linkage between seasonal strength in bonds (falling yields)
and more time spent in Utilities during the worst six months, consistent with
the aforementioned impact of rates on sector movement.
Finally, we need to address how the BRS compares to a simpler rotation into
Utilities in May and into the market in November. As Table 10 illustrates, the
BRS shows 3.0% outperformance vs. this strategy overall and outperformance
during both summer (1.3%) and winter (1.5%). These results indicate that the
power of Utilities to detect periods of market stress in all time periods, including
the seasonally strong winter months, outweighs a strategy of simply avoiding
stress during the summer months.

EXCHANGE-TRADED FUND STRATEGY

With the advent of Exchange Traded Funds (ETFs) and lower trading costs in recent years, we can now test the viability of the BRS using actual trading instruments. The ETFs with the longest price history that best approximate the Fama-French dataset are the Utilities Select Sector SPDR Fund (XLU) and the Vanguard Total Stock Market ETF (VTI).

Using the same 4-week rate-of-change methodology outlined earlier and total return data from July 2001, the ETF Beta Rotation Strategy (ETF Strategy) outperforms a buy and hold portfolio of both XLU and VTI. Table 11 shows that from July 2001 through July 2013 the ETF Strategy achieved total returns of 154% versus 94% for XLU and 96% for VTI. If we make an assumption of 0.1% per trade for commission and slippage, the ETF Strategy still shows outperformance vs. XLU and VTI. More importantly, on a risk-adjusted basis it is also superior, with lower annualized volatility than both XLU and VTI over time.

The outperformance is due in large part to the volatility signaling power of Utilities relative strength, which is consistent with the results of the BRS. The volatility of the market (VTI) when the ETF Strategy was in XLU was 21.3% versus a volatility of 16.4% when the ETF Strategy was in VTI. Another way to illustrate this is to look at the largest market declines and highest VIX levels during the time period. What we found was that, similar to the BRS, the ETF Strategy was in XLU 63.6% of the time during the worst 5% of declines for the market and 62.5% of the time during the top 5% of VIX values. This is significantly higher than the percentage of time the ETF Strategy was in XLU overall, at 48.5%.

CONCLUSION

We find that the signaling power of the Utilities sector is a market anomaly that has persisted over time. The Utilities sector has less economic sensitivity and is more dependent upon the cost of capital than other sectors. Therefore, fluctuations in its relative price movement can have broad implications for macroeconomic factors. Yet, contrary to the Efficient Market Hypothesis, the information that lead-lag dynamics may have about the near-term future may not be fully priced in immediately by broad market averages. This lagged reaction is precisely what makes their strength and weakness exploitable.

The implications from both a strategy and signaling standpoint are meaningful. We find that by using a Beta Rotation Strategy based on the principles of intermarket analysis and momentum, one could have consistently outperformed a static buy and hold strategy over many market cycles.

Outperformance is achieved by timing exposure to beta using a rolling 4-week relative strength signal of the Utilities sector to the market. The strategy rotates into Utilities when the investing environment is more favorable towards lower-beta equities and into the market when the investing environment is more favorable towards higher-beta equities. Importantly, because the Beta Rotation Strategy spends roughly half of its time in Utilities, it is also able to benefit from the compounding effect of higher dividends. We observe consistent outperformance in the vast majority of periods, and find that the rotation signal may offer further insights into explaining seasonal patterns such as "sell in May and go away."


We also find that the strength in Utilities often serves as a warning sign of
increased volatility and extreme market movement in the short-term, allowing
active traders to better manage risk during potential periods of heightened
market stress. This important finding can add to one's trading arsenal in seeking
higher risk-adjusted returns, and in reducing the possibility of tail events by simply
respecting intermarket relationships and price history.

FURTHER RESEARCH
Although beyond the scope of this paper, there are a number of broader
implications that our findings may have on the investing and trading landscape,
particularly as it relates to volatility. Among these are: (1) implementing option
overlay strategies, (2) hedging, (3) timing of gross exposure or leverage, and (4)
tactical asset allocation.

REFERENCES

Clarke, Roger G., and Meir Statman, 2000, The DJIA Crossed 652,230 in 1998, The Journal of Portfolio Management.

Davis, Joseph H., and Christopher B. Philips, 2007, Defensive Equity Investing: Appealing Theory, Disappointing Reality, Vanguard Investment Counseling & Research.

Day, Wang, and Xu, 2001, Investigating Underperformance by Mutual Fund Portfolios, School of Management, The University of Texas at Dallas.

Gould, Edson, 1974, The Utility Barometer: An Early Stock Market Indicator, Edson Gould's Findings & Forecasts Newsletter.

Hong, Torous, and Valkanov, 2005, Do Industries Lead Stock Markets? Journal of Financial Economics.

Jacobsen, Ben, and Nuttawat Visaltanachoti, 2006, The Halloween Effect in US Sectors, The Financial Review.

Kamstra, Kramer, and Levi, 2003, Winter Blues: A SAD Stock Market Cycle, Federal Reserve Bank of Atlanta Working Paper No. 2002-13a.

Lo, Andrew W., and A. Craig MacKinlay, 1990, When Are Contrarian Profits Due to Stock Market Overreaction? The Review of Financial Studies.

Moskowitz, Tobias J., and Grinblatt, Mark, 1999, Do Industries Explain Momentum? The Journal of Finance.

Murphy, John J., 2004, Intermarket Analysis, John Wiley & Sons, Inc.

Murphy, John J., 1999, Technical Analysis of the Financial Markets: A Comprehensive Guide to Trading Methods and Applications, New York Institute of Finance.

Pring, Martin J., 2004, Technical Analysis Explained, McGraw-Hill.

Russell, Philip S., and Violet M. Torbey, 2002, The Efficient Market Hypothesis: A Survey, Business Quest Journal.

Wilkinson, Chris, 1997, Technically Speaking: Tips and Strategies from 16 Top Analysts, Traders Press, Inc.


THE REPEATING STORY OF ON BALANCE VOLUME
GEORGE A. SCHADE, CMT

BIOGRAPHY
George A. Schade, CMT
George A. Schade, Jr., CMT has researched the origins and background of technical indicators and their creators. He has written on the
stochastic, advance-decline, and on balance volume (OBV) indicators. He received the 2013 Charles H. Dow Award for his paper on
the development of OBV. He contributed an extensive biography of Ralph Nelson Elliott compiled over several years. He coordinated
two of the largest collections of historical materials given to the Market Technicians Association. His objective is to expand and clarify the
historical record of technical analysis. Since becoming a member of the MTA in 1987, he has served on numerous association committees.

CHARLES H. DOW AWARD 2013 SUBMISSION WINNER


On February 14, 1876, Alexander Graham Bell and Elisha Gray filed competing
claims for a patent on a telephone. Because Bell's application was officially recorded a few hours before Gray's caveat, the United States Patent Office
awarded Bell the first patent for a telephone. Ensuing litigation showed that Bell
and Gray, working independently, had invented the workings of a telephone.
The invention of the On Balance Volume (OBV) indicator likewise is not unique
to one person. In April 1932, Paul Clay, a prominent Wall Street investment
counselor, described a psycho-technical index that included OBV as the first
of five components. In 1951, Frank Vignola and his wife Maude Vignola Woods,
who published an investment advisory in San Francisco, California, designed
a trading system that used cumulative OBV. In 1948, Edward B. Gotthelf, a
commodities trader who later held mercantile seats in New York and Chicago,
was using a tabulation method he called OBV. Joseph E. Granville, with whom
OBV is popularly associated, claims its realization came to him in August 1961.

THE EXPRESSION

The term "on balance" means the net effect or result after considering or offsetting all relevant factors. The expression has a long history in describing the state of trade.

On May 24, 1893, The New York Times reported that loans due had been paid and, in recent days, the Bank of England has received 466,000 of foreign gold on balance.1 On October 18, 1908, The New York Times reported that another liquidating movement in American securities had forced London to sell on balance not less than 25,000 shares.2 The term is also used in commodities trading. In June 1922, it was reported that several houses with Southern connections sold [cotton] on balance.3

Technicians use the term OBV to describe the cumulative result, over a chosen time period, of adding all daily trading volume on days when a security's price or market index rises and subtracting all the volume on days when they decline.

THE CALCULATION

Daily OBV is calculated by adding the day's volume to a cumulative total when the security's price closes up, and subtracting the day's volume when the security's price closes down.4 All the volume is assigned to correspond to the direction of the day's close. If price does not change from the prior day's close, yesterday's total OBV is recorded. The cumulative count can begin at a baseline index in order to avoid negative numbers.
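A minimal illustration of this calculation in Python, not taken from the original article, assuming daily closing prices and volumes as aligned series (the function name `on_balance_volume` is hypothetical):

```python
import numpy as np
import pandas as pd


def on_balance_volume(close: pd.Series, volume: pd.Series,
                      baseline: float = 0.0) -> pd.Series:
    """Cumulative OBV as described above: add the day's volume on up closes,
    subtract it on down closes, and carry the total forward when the close
    is unchanged. `baseline` can be set high enough to avoid negative totals."""
    direction = np.sign(close.diff()).fillna(0)   # +1 up, -1 down, 0 unchanged/first day
    return baseline + (direction * volume).cumsum()
```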

THE THEORY

Every price change is the result of a sale or purchase of shares of stock. The
number of shares involved in a transaction constitutes volume.

"It is not price action, but volume - the amount of money, the supply and the demand - which best tells the story." - Humphrey B. Neill (1931)5

OBV is a momentum indicator that relates volume to the direction of price. Its basic assumption is that volume precedes price, or as Granville posited in 1963, volume "often has a distinct tendency to precede price."6 The assumption is grounded in the dynamic shifts of supply and demand for a security.

Writing in 1935, Harold M. Gartley, arguably the era's finest market technician, suggested why we study volume: "Theoretically, the reason we study volume is because it is believed that it is a measure of supply of and demand for shares." (italics in original)7 A shift in the supply of or demand for a stock will change the stock's volume. In 1984, Granville wrote that:

"Price will rise only after the volume equation is thrown out of balance by quietly increased demand. Conversely, when heavier, silent selling occurs, supply will overcome demand, and only then will price fall. In either case the alteration in supply and demand must take place before the move in price."8
Frank Vignola and his wife Maude Vignola Woods, who in 1951 asserted that buying and selling volume are best analyzed when measured cumulatively, explained their Continuous Volume Curve, as they called OBV, as follows: "The CONTINUOUS VOLUME CURVE is . . . [e]xtremely sensitive to price movement, and will indicate the relative balance between buying and selling at the peaks and valleys of market trends."9

The assumption that volume often precedes price was expressed in 1934. The Wetsel Market Bureau, Inc. (Wetsel) was a subscription market advisory service (Humphrey B. Neill was its Vice-President in 1931). In 1934, Wetsel published a 26-lesson trading course. The discussion of the relationship between volume and prices in Lesson 20 stated that "volume tends to lead the price movement and it is in this respect that volume may constitute a forecaster."10

Neill (1895 - 1977) wrote in 1931 that volume "will give you indications of pending moves, often when nothing else will."11 Thirty-two years later, Granville wrote that on-balance volume "can be a particularly effective early warning of future price movements."12

According to Wetsel, "We have seen individual occasions when price would hardly tell you anything about probable future price movement and where volume alone would contribute possibly as much as 85% or 90% to the importance of price and volume combined as indicators."13 Long ago, technicians recognized that "volume often precedes price" is a valid working assumption.


FIGURE 1 - HUMPHREY BANCROFT NEILL14

PAUL CLAY - 1932

On Tuesday evening, April 26, 1932, the American Statistical Association (ASA) held a dinner meeting in the Hotel Governor Clinton in Manhattan. An organization of statisticians, the ASA is the second oldest professional association in the United States, having been founded in 1839.

The topic for discussion was "Forecasting Methods Successfully Used Since 1928." Four speakers were invited.

The second speaker was Paul Clay, whom the minutes recorded as being an investment counselor. The final presenter was James F. Hughes, who had worked alongside Leonard P. Ayres (ASA's President in 1926) to create the Advance-Decline Line.

Addressing the audience of 233 guests:
Mr. Clay stated that he now felt that, in the past, he had underestimated the
importance of the New York Stock Market itself in the industrial and financial
affairs of the United States, and even of the world. The movements of the
stock market represent the net result of the industry of the United States and
a considerable proportion of the rest of the civilized world. Because of this
conclusion, Mr. Clay had been led to construct a new index similar, in general, to
the Dow theory, but not based upon the Dow methods. This index number he
calls a psycho-technical index. It contains five principal elements:


1. A volume index number made by giving the sign of the price movement to
the daily volumes, and accumulating the plus and minus movements

The psycho-technical index built out of these five elements looks much like a price chart with the false movements eliminated. It has the very distinct merit of often moving contrary to the course of the market itself. This index is not used independently, but rather in conjunction with the economic indexes which formerly constituted the chief reliance of Mr. Clay.15
Clay used OBV as one element of an index rather than as an independent
indicator. In spite of intense research, additional information about Clay's index has not been found.
Clay was a prominent economist, statistician, and investment counselor in the
years prior to 1945. From 1912/1913 to 1927, he worked at Moody's Investors Service as an economist and statistician, rising to Vice President and Chief Economist. In 1915, he wrote a 371-page book entitled Sound Investing.16 Moody's published a revised edition in 1920. In 1916, Clay penned an article for Moody's Magazine - "How and When to Buy and Sell" - in which he proffered detailed rules for analyzing accumulation and distribution as "the really important subject of this article is the when." (emphasis in original)17
From August 1919 to May 1922, he was Staff Economist and columnist for Forbes Magazine. In November 1920, Forbes published Clay's column "which it regards as one of the most important articles it has ever published."18

In 1927, Clay was retained as an expert witness on the valuation of stock by a group of founding Ford Motor Company shareholders who were contesting a $30 million income tax assessment on a stock sale. Clay spent February 7, 1927, giving extremely technical testimony under forceful cross-examination.19 The shareholders' group won.

In the Thirties, Clay was an economist for the United States Shares Corporation and Supervised Shares Corporation, investment counsel and trust companies, and in his consultancy, Clay's Economic Service. In August 1931, he was elected
director and economic adviser of the General Shares Corporation. In October
1934, he became Editor of Brookmire Bulletins, published by Brookmire
Economic Service, a national business forecasting firm established in 1910.
Clay was among the first 605 applicants approved by the Securities and
Exchange Commission as registered investment advisers under the Investment
Advisers Act of 1940 in New York, New Jersey, and Connecticut.20 In this group
were renowned technicians Ralph Nelson Elliott, William D. Gann, and Harold
M. Gartley.

On April 17, 1925, Clay spoke at an ASA meeting on a panel announced days earlier:

FIGURE 2 - The New York Times on April 12, 1925


The 1925 and 1932 ASA meetings reveal what motivated Clay's interest in OBV.

Pioneer technician Edson Beers Gould, Jr. (1902 - 1987) wrote a report, in about 1974, entitled A Vital Anatomy. In the first section, titled "My Most Important Discovery," Gould shed light on Clay's work:

My first ten years on Wall Street, during the 1920s, were spent working at Moody's, primarily for Paul Clay, a brilliant economist and market forecaster.
Much as I respected Clay, much as I admired some of his work, especially his
long-term forecasts, it became increasingly evident to me that he was missing
something.
He concentrated primarily on forecasting business and monetary conditions,
and he was good at both, probably the best around. Then he would transmute
his findings into stock market views. His long-term forecasts of stock market
trends were excellent, his intermediate-term forecasts fair, but his short-term
views left much to be desired.
I recognized that economic and monetary forecasts and trends were vital in
projecting stock prices three and four years out, but came to the realization that
they could have little value when trying to forecast stock prices over a period of
weeks, several months, or even as many as two years.
Then, as the roaring twenties passed into history, the pace of the market increased
markedly, lending emphasis to my thoughts. (emphasis in original). 21
Gould recognized that technical indicators worked well in the short and
intermediate terms. By 1932, Clay was moving to that view which favored
technical analysis over studying business statistics.


At the 1932 ASA meeting, Clay conceded that the rules which he believed to be dependable, based on his years of research concerning the cyclical movements of prices, had all broken down between 1928 and 1931.22 He concluded that
this breakdown was caused by the fact that the rules were applicable to some
eras but not to others. 23 Clay realized that his former rules had broken down
following the Great Crash of 1929, but the fermenting vigor of technical studies
was opening new frontiers.
Volume, in the early Thirties, became a fertile field for analysis. The increasing
volume of stock trading up through 1929 made volume worthy of study.
Clay, an ardent economist and business statistician, saw value in technical concepts such as OBV. By 1932, he had changed some of the strong views expressed at the meeting held in April 1925, where he had remarked that he rejected using an automatic barometer or "combining certain barometrical returns, [to] obtain a barometer or index number which is supposed to move ahead of the stock market and indicate its course."24

Clay's analytical work, grounded in business statistics, was shaped by the 1929 Crash as well as by the post-Crash Emergent Age of technical analysis. The psycho-technical index showed his interest in technical analysis and crowd psychology. Clay was thinking technically.
Although little is known about Clay's index, his formulation of a cumulative volume indicator impresses when one considers the then-prevailing demands of studying volume. Harold M. Gartley (1899 - 1972) described them in 1932.
Between September 19, 1932, and December 5, 1932, Gartley wrote twelve articles published in Barron's National Financial Weekly headlined "Analyzing the Stock Market." The next year Gartley compiled the articles into a course entitled Stock Market Studies. The course led to Gartley's seminal book Profits in the Stock Market: The Gartley Course of Stock Market Instruction, published in 1935.

Barron's announced that, from source material supplied by "one of the best-equipped laboratories in Wall Street," the articles would present "the most modern work on the interpretation of security-price movements."25 In November 1932, Barron's featured Gartley's landmark article "The Significance of Volume of Trading."
Gartley wrote that the accuracy and completeness of volume data were
problematic. Analysts faced the inability to obtain essential data without an
almost prohibitive amount of abstraction from the official sheets which list
all of the transactions on the Stock Exchange, and compilation, as well as
that in the past few years, notably from 1928 to date, the number of shares

connected with every trade has not appeared on the tape.26 The NYSE data neither included nor accurately reported all the volume.

FIGURE 3 - HAROLD MCKINLEY GARTLEY
He noted the serious mechanical problem in the study of total volume
because the New York Stock Exchange (NYSE) publishes volume at unequal
intervals: 10:30 a.m., [12:10] p.m., 1:30 p.m., 2:10 [p.m.], and 3:00 p.m.27
Traders had to track five daily tallies of volume. The laboriousness of the task
was highlighted by the fact that in 1932 there were 820 companies listed on
the NYSE.28 Moreover, analysts had to compensate for the shorter two-hour
trading sessions on Saturdays.
Analysts were debating the fine question as to whether or not volume should
be plotted on arithmetic scale or logarithmic scale, although the former was
used in the majority of studies. 29
Overlaying these challenges was the corrosive force of manipulation that
caused unusually large fluctuations in volume which tended to confuse the
analysis. The Securities and Exchange Commission had not yet been created.
And, of course, technicians were studying increasing sets of statistics with the
limited automation of Burroughs adding machines and punched cards.


FIGURE 4 - NYSE TRADING FLOOR 1933 30

THE WETSEL MARKET BUREAU COURSE (ARNOLD W. WETSEL, 1888 - 1957)

Market historian James Edgar Alphier (1947 - 1990) wrote that he had a copy of a 1934 course which "use[s] the concept of OBV."31 He did not further describe the course or amplify the comment.

Alphier was likely referencing Wetsel's 1934 trading course. Alphier lived within driving distance of the bookstore in Los Angeles, California, owned by the late Donald Mack, who sold and republished technical analysis books, including many out of print. Mack republished Wetsel's course. Alphier likely obtained a copy of Wetsel's course from Mack.

Wetsel's course does not contain a calculation similar to that of OBV, but it explains how volume tends to lead price, OBV's basic assumption. Alphier was likely referring to the explanation of this conceptual assumption rather than to an explicit OBV calculation that is not evident in Wetsel's course.

FRANK VIGNOLA & MAUDE VIGNOLA WOODS - 1951

Larry Williams has suggested that the idea [of OBV] "was originally called cumulative volume and was presented in a course written by Woods and Vignolia [sic] in San Francisco in 1946." (emphasis in original)32 Until now the evidence was lacking.

They were Francis (Frank) Vignola (1892 - 1961) and his wife Maud (Maude) Vignola Woods (1895 - 1983), who published an advisory letter in San Francisco. Frank's career was in the photo engraving industry in management and sales. Later, he became an investment counselor. Maude managed their advisory business. Over two decades, under the name M. V. WOODS MARKET ANALYSIS AND RESEARCH, they produced an exceptional body of work that included OBV. They described OBV at least ten years before Granville.

In the mid-Thirties, they began studying the stock market. In June 1940, using the pseudonym M. V. Woods, Frank copyrighted their earliest advisory letter, named the Price Curve Plan of Stock Market Trading.

In 1946, M. V. Woods advertised a report entitled The Price Curve Plan and Stock Selector with Volume Control and provided a PROFIT and LOSS SUMMARY from 1938 to date. This report is unavailable, so it is unknown if it described what Williams calls cumulative volume.

FIGURE 5 - 1946 ADVERTISEMENT 33

But in 1951, M. V. WOODS MARKET ANALYSIS AND RESEARCH published a report (authored and copyrighted by Frank Vignola) entitled The Price-Curve Plan of Stock Market Trading with Countertrend Signal Analysis. Vignola wrote that:

Volume is the pressure gauge for measuring the balance between supply and demand, and for determining the quality of buying and selling in a stock or market Average. VOLUME CAN BE ANALYZED TO BETTER ADVANTAGE when data is arranged in a time series or on a cumulative basis. (emphasis in original)34

Vignola used three series to analyze volume, which he integrated with a 10-day moving aggregate of daily price changes in stocks or market indices called the Price-Curve. The first series was a 10-day moving total of aggregate volume called the Aggregate Volume Curve. Saturday's volume was doubled to account for the
short session. The second was a 30-day moving total of aggregate volume named the Major Volume Curve. These are time-based series, but Vignola's third series differentiated between buying and selling volume.

The third series was the Continuous Volume Curve, which is made by adding the total daily market volume of trading to a base index figure each day the market advances, and by subtracting the volume on days when the market declines.35 Saturday's volume was not doubled in this series. Vignola suggested a base number of at least 50 or 100 million. He did not use the term cumulative volume.
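As a rough modern reconstruction of these three series, not Vignola's own worksheets, the following sketch shows one way to compute them; the function name and inputs are hypothetical, and the Saturday-doubling adjustment for the two moving totals is omitted. The last column is the Continuous Volume Curve discussed next.

```python
import numpy as np
import pandas as pd


def vignola_volume_curves(volume: pd.Series, net_advances: pd.Series,
                          base: float = 50_000_000) -> pd.DataFrame:
    """Sketch of the three volume series described above.

    `volume` is total daily market volume; `net_advances` is advancing minus
    declining issues, Vignola's preferred gauge of an up or down day (the
    DJIA's daily change can substitute when breadth data are unavailable).
    """
    direction = np.sign(net_advances)  # +1 on an up day, -1 on a down day
    return pd.DataFrame({
        "aggregate_volume_10d": volume.rolling(10).sum(),           # Aggregate Volume Curve
        "major_volume_30d": volume.rolling(30).sum(),               # Major Volume Curve
        "continuous_volume": base + (direction * volume).cumsum(),  # Continuous Volume Curve
    })
```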

According to Vignola, this curve:


Is an auxiliary timing device used in connection with other technical condition
indices. It is extremely sensitive to price movement, and will indicate the relative
balance between buying and selling at the peaks and valleys of market trends.
MINOR FLUCTUATIONS OF THE CONTINUOUS VOLUME CURVE follow the daily trend of
the Industrial Average, and it is often difficult to distinguish the difference between
them. This does not hold true with intermediate and major trends. The main price
trend will often precede or lag volume action. The Continuous Volume Curve is a
key to the supply and demand equation. Interpretation of this curve is based on a
knowledge of divergence, and the breaking of established trend-lines and previously
established points of trend reversal.(emphasis in original) 36

CHART 1 - CONTINUOUS VOLUME CURVE, APRIL - AUGUST 194933

Vignola determined an up or down day by the number of issues advancing or


declining each day, which he believed to be preferable because they represent
the action of the entire market, not the price trend of a few stocks. However, he
recommended that if the number of issues traded [is] not available, use the closing
price of the Dow-Jones Industrial Average.37 Vignola's daily OBV was based on the
direction of price or the DJIA, but he maintained a weekly Continuous Volume Curve
based on weekly advances and declines.
Chart 1 shows the Dow Jones Industrials, a Continuous Volume Curve, and several
trend lines. For simplicity, he omitted the last three digits of volume.
According to Vignola, the Continuous Volume Curve confirms an indication of strength
(weakness) in a Price-Curve and an Aggregate Volume Curve. The Continuous Volume
Curve gives a buy (sell) signal when it breaks above (below) one of its intermediate
or major trend lines confirming the strength or weakness shown in the other two
curves. All three curves have to trend in unison for a buy or sell signal to be given.
Although each of the three volume curves can be employed separately as trend
indicators, Vignola favored their collective use because all furnish valuable
advance information regarding the future course of intermediate and major
price trends. 39


CHART 2 - VIGNOLAS PRICE & VOLUME SERIES, APRIL - JULY 194940


M. V. Woods published another course in 1955, but it did not further amplify
how to use continuous volume curves.

EDWARD B. GOTTHELF - 1948


Edward B. Gotthelf (1908 - 1985) was an upstairs commodities trader from
1935 through 1945. In 1950, he became a member of the New York Mercantile
Exchange and Chicago Mercantile Exchange. Later he acquired the basic
COMMODEX system, one of the first futures trading systems when it was
inaugurated in the 1950s.41 Both father and son refined COMMODEX.
According to Philip, his father developed a relatively simple method of
measuring accumulation, which Edward Gotthelf called the On-Balance
Volume and Open Interest Method.42 Edward Gotthelf's notes reveal the
development of this term as far back as 1948.43
Gotthelf assigned a + to the price for the day when price closed above
the previous day's level. If volume increased on the same day, the volume
component received a +. A rise in open interest from the previous day was
assigned a +. However, if price moved up, and volume moved down, the day's
volume was assigned a -. When price and volume moved down, volume was
assigned a +.
Figure 6 recreates a worksheet.

FIGURE 6 - GOTTHELF'S ON-BALANCE VOLUME & OPEN INTEREST METHOD44
(worksheet columns: Date, Value, Commodity Contract, Price, Volume, Open Interest; + = up, - = down)

Over time, the net plus and minus days would be counted. If on balance net
pluses outnumbered net minuses, Gotthelf would be a buyer. If minuses
exceeded pluses, he would sell. If net pluses and minuses were about even, he
would stay neutral. Trading tactics were developed based on observations of
the on balance plus and minus series.
Gotthelf believed his OBV method detected accumulation (open interest and
volume rise) and distribution (the reverse) which led to overbought or oversold
markets. Overextended markets, notably those following a long period of
accumulation, could experience dramatic corrections which in turn gave buy
and sell signals.
Gotthelf's method is more akin to a tabular score than to a cumulative count,
but it shows that the term OBV was used in futures trading more than fifteen
years before Granville wrote his OBV book in 1963.

CONCLUSION
Several people working in different decades and living thousands of miles from
each other conceived OBV - a remarkable and not uncommon story of creativity.
Recognition for OBV's invention must be shared.
Alphier opined, "I'm confident Granville came upon OBV independently, but I
have two courses in my library, one written in 1955 [Vignola] and one in 1934
[likely Wetsel], which use the concept. I also can be confident that Granville's
two predecessors didn't have any contact with each other."45 Until proven
otherwise, the same must be said of Clay, the Vignolas, and Gotthelf.
We cannot shadow their brilliance, but these conclusions should not surprise.
The indicator's simplicity and its sensible logic explain why several people (and
these are the ones we know) developed OBV from their own studies.
While Granville popularized OBV, Paul Clay and, clearly, Frank Vignola and Maude
Vignola Woods had earlier originated OBV, while Edward B. Gotthelf used the
term. Volume was uniquely important to the analytical work of Granville's
predecessors. They believed volume could presage the direction of price, and
a cumulative count was a valid way to analyze buying and selling volume. All
were intelligent observers of volume in stock and futures markets who merit
recognition.

ENDNOTES
N. Y. TIMES, May 24, 1893, On the London Exchange (all NYT citations are to
the digital archives).

Gould, Jr., Edson B., 1974(est.), A VITAL ANATOMY (Anametrics, Inc., New
York, NY), 1.
21

King, 1932, 316 - 17.

22

N. Y. TIMES, Oct. 18, 1908, Financial Markets.

23

N. Y. TIMES, June 30, 1922, Cotton Men Await Condition Estimate.

Minutes, 1925, Forecasting Security Prices, JOURNAL OF THE AMERICAN


STATISTICAL ASSOCIATION, vol. 20, 150: 245.

2
3

Achelis, Steven B., 1995, TECHNICAL ANALYSIS FROM A TO Z (McGraw-Hill,


New York, NY), 209.

Neill, Humphrey B., 1931, TAPE READING AND MARKET TACTICS: THE THREE
STEPS TO SUCCESSFUL STOCK TRADING (B. C. Forbes Pub. Co., New York, NY),
repub. 1959, 1970 (Fraser Publ. Co., Burlington, VT), 41.

Granville, Joseph E., 1963, GRANVILLE'S NEW KEY TO STOCK MARKET PROFITS
(Prentice-Hall, Inc., Englewood Cliffs, NJ), 19. By 1984, Granville was firmer,
asserting, "Volume precedes price." Granville, Joseph E., 1984, THE BOOK OF
GRANVILLE (St. Martin's Press, NY), 91.

Gartley, Harold M., 1935, repub. 1981, PROFITS IN THE STOCK MARKET: THE
GARTLEY COURSE OF STOCK MARKET INSTRUCTION (Lambert-Gann Pub. Co.,
Pomeroy, WA), 299.

Id., 317.

24

The Week's Features, Sept. 5, 1932, BARRON'S NATL. FNCL. WKLY., 2.

25

Gartley, Harold M., Nov. 7, 1932, Analyzing the Stock Market: The
Significance of Volume of Trading, BARRON'S NATL. FNCL. WKLY., 22.
26

Id.

27

NYSE archives (http://tinyurl.com/84pmyfm).

28

Gartley, 1932, 22.

29

Library of Congress.

30

Alphier, James E., 1988, The tragic neglect of the old masters, TECHNICAL
ANALYSIS OF STOCKS AND COMMODITIES, vol. 6:10, 396.
31

Granville, 1984, 91.

Williams, Larry, Letters to Editor, 2004, TECHNICAL ANALYSIS OF STOCKS AND


COMMODITIES, vol. 22:3 (10-14). Correspondence with author, Oct. 18, 2010.

Vignola, Frank, THE PRICE-CURVE PLAN OF STOCK MARKET TRADING WITH


COUNTERTREND SIGNAL ANALYSIS, 1951, (M. V. Woods Market Analysis and
Research, San Francisco, CA), Study No. 3-B, 1.

THE MAGAZINE OF WALL STREET AND BUSINESS ANALYST, Mar. 30, 1946, vol.
77: 797.

8
9

Wetsel Market Bureau, Inc., 1934 U.S.A., repub. 1998, A COURSE IN TRADING
(Donald Mack ed., Financial Times Prof. Ltd., London, UK), 164.
10

Neill, 1931, 169.

11

Granville, 1963, 55.

12

Wetsel, 1934, 97.

13

Figures 1 and 3 are from Google Images.

14

King, Willford I., 1932, Forecasting Methods Successfully Used Since 1928,
JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, vol. 27, 179: 317. The
other elements were the price movements of the twenty stocks having the
largest daily volume, adding 1 to the index on up days and subtracting 1 on
down days, resistance ratios to measure liquidation and short sales, and
relative strength.
15

Clay, Paul, 1915, SOUND INVESTING (Moody's Magazine and Book Co., New
York, NY).
16

MOODY'S MAGAZINE: THE INVESTOR'S MONTHLY, Jan. 1916, vol. 19: 1, 21


(http://tinyurl.com/7b6z8pj).
17

Clay, Paul, Nov. 27, 1920, Confidently Predicts Boom in 1921, FORBES, 117.

18

N. Y. TIMES, Feb. 8, 1927, Battle on Figures in Ford Tax Case.

19

N. Y. TIMES, Nov. 2, 1940, SEC Lists Advisers In Investing Field.

20

22

32

33

Vignola, 1951, Study 2-B: 1.

34

Id., Study 3-B:1.

35

Id.

36

Id., Study 3-B:2.

37

Id., 3-B:3.

38

Id., Study 1-C: 1.

39

Id., 1-C:3.

40

Gotthelf, Philip, Dec. 1986, The making of a system B.C. (before computers),
FUTURES, 57.
41

Id.

42

Id.

43

Id.

44

Alphier, 1988, 396.

45

THE IDENTIFICATION AND UTILIZATION OF BALANCED LADDER LEVELS IN THE TECHNICAL ANALYSIS OF STOCK AND MARKET INDEX TIME SERIES

Carl Aspin

BIOGRAPHY
Carl Aspin
Carl Aspin recently retired as President of Aspin Engineering Services in Prescott, AZ. He received a degree in Engineering Physics from the
Colorado School of Mines. This was followed by post graduate engineering training at General Electric in Utica, NY. He was employed
at Motorola for more than 30 years in a wide range of R&D, engineering, and management positions. He is a graduate of the Motorola/

ABSTRACT
Stock and market index sequences are characterized by an original statistical
algorithm that identifies, in real time, prices and price levels that have a
significant and recurrent influence on the magnitude and direction of price
fluctuations in their immediate neighborhood. These prices, called balanced
ladder values, provide a basis for the identification of trends, support and
resistance levels, channels, and bands. A unique off-balance volume sentiment
chart is described. The use of balanced ladder levels as a basis for standard
technical analysis is demonstrated through rule-based graphical analysis of
stock and market index examples.

I. INTRODUCTION
An ongoing tension exists between bullish and bearish psychology, forming
technical support and resistance levels. Price movements are driven by ever
changing market expectations and psychology, and a balance or relaxation
of the tension between the pressures of supply and demand can be detected
by means of probabilities and statistics. Consequently, on the occurrence of
a balance value, it becomes possible to detect the presence of support and
resistance barriers. Trends are observed as a drift in balance values between
bands and channels defined by support and resistance barriers.

II. MARKOV MODEL


The dynamics of the business environment cause fluctuations in stock prices
and market indices. These fluctuations are unpredictable in both magnitude
and direction and are effectively random. The Markov model of stochastic time
series can be used to analyze these movements.

A Markov process requires random values occurring in successive short time


intervals to be independent and normally distributed (Hull, 2000). Let N{μ,
σ} denote a normal distribution with mean μ and standard deviation σ. Over
a sufficiently short period of time, such as from one day to the next, there is
no discernible trend in the price data. At most we can only say that tomorrow's
price will be equal to today's price plus a fluctuation selected at random from
N{0, σ}. Similarly, without a trend in evidence, we can only predict that t days in
the future the price will be equal to today's price plus a fluctuation selected at
random from N{0, σt} where σt is the standard deviation at time t.
Before examining actual fluctuation sequences in detail it is important to
point out that the first differences of the natural logarithms of stock prices
are approximately normally distributed, rather than the day-to-day price
changes per se (Murphy, 1994). Symbolically, if P(t) is today's price and P(t-1) is
yesterday's price, then for a sufficiently large sample of differences, k,

EQUATION 1: X(t) = ln[P(t)] - ln[P(t-1)], t = 1, 2, , k


the sample will be approximately normally distributed. The difference in the
logarithms of two variables is equal to the logarithm of the ratio of the variables:

EQUATION 2: ln[P(t)] - ln[P(t-1)] = ln[P(t)/P(t-1)].


Therefore the set of natural logarithms of the ratios of today's price to yesterday's
price is approximately normally distributed. The random variable X(t) is assumed
to satisfy the requirements of a stochastic process.
Denote the distribution of fluctuations, X(t), measured at unit intervals, by N{0,
σ1}. The change after two unit intervals is the sum of two independent normal
distributions. The resulting distribution has a mean equal to the sum of the
means and a variance that is the sum of the variances of the two distributions.
Let V(T) denote the variance after T intervals. Then V(T) = V(1)*T. The increase
in the variance is directly proportional to the time. As the standard deviation is
equal to the square root of the variance it follows that σT = σ1*T^(1/2). Therefore the
distribution of the sum of two distributions, each with mean zero and standard
deviation σ1, is N{0, σ1*2^(1/2)} and, in general, the sum of T distributions, each
with mean zero and standard deviation σ1, is N{0, σ1*T^(1/2)} (Hull, 2000). Let:

EQUATION 3: X1 = lnP(1) - lnP(0),

EQUATION 4: X2 = lnP(2) - lnP(1),

EQUATION 5: X3 = lnP(3) - lnP(2),

and more generally,

EQUATION 6: XT = lnP(T) - lnP(T-1).

Each of these values, Xt, is the difference in the natural logarithms of prices
separated by one unit interval. If T successive fluctuations are added, then a
record of the logarithms of the prices is generated from P(0) to P(T). Note that

EQUATION 7: X1 + X2 = lnP(2) - lnP(0)


EQUATION 8: X1 + X2 + X3 = lnP(3) - lnP(0),

and in general:


EQUATION 9: X1 + X2 + X3 + + XT = lnP(T) - lnP(0) = ln[P(T)/P(0)].


In other words, the sum of T fluctuations is equal to the natural logarithm of
the ratio of the last price in the sequence to the first price in the sequence.

REAL DATA VS. MARKOV MODEL

VARIOGRAMS
Since the applicability of the Markov model is a basic premise, at this point
we need to test real data against Markov requirements. This is done using
a variogram. In constructing a variogram we calculate the variance of sets of
differences or ratios taken T units apart where T = 1, 2, , T. Thus,

EQUATION 10: V(T) = VAR{ln[P(t+T)] - ln[P(t)]},   t = 1, ..., k.

Here VAR specifies a statistical function that calculates the variance of a


set of data. To form the variogram we graph the ratio of V(T)/V(1) vs. T. If the
Markov model is an adequate representation of real data then the variogram
will reasonably approximate a straight line with unit slope. Figure 1 shows the
natural logarithms of the closing values of the Dow Jones Industrial Average
between April 1, 2008 and December 20, 2011. In order to test the Markov
assumption against actual data the differences in the logarithms of the DJIA
values are compared to what would be expected given a Markov process.

FIGURE 1
Figure 2 is the variogram based on the raw data of Figure 1. Differences for time
intervals of increasing length are shown out to an interval of T = 36 days. Eight
hundred seventy-six unit intervals are included. A least squares line, fit to the
variogram data, is compared to the variogram of a theoretical Markov process,
represented by the red dashed line. Although the correlation coefficient,
R-squared, of the DJIA variogram is excellent, there is significant divergence
from the Markov variogram.

FIGURE 2
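The variogram test is simple to reproduce. The Python sketch below computes the ratio V(T)/V(1) for T = 1 to 36 from a list of closing prices, following Equation 10; the function name is illustrative and no particular data source is assumed.

```python
import math
import statistics

def variogram(prices, max_lag=36):
    """Ratio V(T)/V(1) for T = 1..max_lag, where V(T) is the variance of
    log-price differences taken T days apart (Equation 10). For a Markov
    process the plot against T is approximately a line of unit slope."""
    logs = [math.log(p) for p in prices]

    def v(lag):
        diffs = [logs[t + lag] - logs[t] for t in range(len(logs) - lag)]
        return statistics.pvariance(diffs)

    v1 = v(1)
    return [v(lag) / v1 for lag in range(1, max_lag + 1)]
```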
A reason for this divergence is shown in Figure 3. The market index fluctuations
do not satisfy the requirements for the addition of independent and identically
distributed normal distributions. We observe anomalous fluctuations that fall
well outside 3-sigma statistical limits, the red boundaries. These limits should
enclose 99.74% of all fluctuation data. A large number of these anomalous
fluctuations occurred during the recent market crises. The probability that
such fluctuations would ever occur if sampling was from a normal distribution
is vanishingly small. Anomalous fluctuations such as these produce what are
known as fat tails.


FIGURE 3
In an ideal bell-shaped distribution the probability that Z = ln[P(t+1)/P(t)]/σ
exceeds +/-3 approaches zero much more quickly than is the case when
anomalous fluctuations are present. In the presence of anomalous fluctuations the
areas to the left and right of -Z* and Z*, respectively, Figure 4, are larger than the
corresponding areas in a standard normal distribution, hence the term fat tails.

FIGURE 4

MANAGING ANOMALOUS MARKET DATA
In order to eliminate some of the effects of unusual fluctuations we need a
procedure for dealing with anomalous market data. In industrial manufacturing
processes statistical process control techniques are used. These techniques
flag anomalous events that fall outside of sigma-based control limits. In
industry, the cause of such anomalous data can be located. The process can
be adjusted or fixed. Anomalous industrial data is not included in subsequent
reference distributions as its inclusion increases the distribution variance
and thus desensitizes the controls. An analogous approach is used here with
market-based data. Although we may be able to infer the underlying causes
for anomalous market fluctuations we cannot eliminate the causative factors
in general. However, knowing that anomalous data is not part of the normal
historical reference distribution we can logically make adjustments so that the
subsequent reference distribution is not unduly perturbed. Accordingly, the
fluctuations that are outside of the 99.74% probability boundaries, Figure 3,
are iteratively scaled back in size until they fall within recalculated +/- 3 sigma
boundaries. The adjusted variogram of Figure 5 is the result. The approximation
to the Markov model is greatly improved. Hence, prior to the analysis of any
fluctuation sequences, anomalous fluctuations are rescaled as described. Note,
however, the original values in the price data sequence are not changed so
there is no change in the original price chart. It is recognized that although this
procedure brings the variogram into close agreement with the results expected
from a Markov process it does not mean that the revised set of fluctuations is
ideally normal or Gaussian.
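One plausible reading of this rescaling step, sketched below in Python, is to clip log-return outliers back to recomputed +/-3 sigma limits and repeat until none remain outside. The exact scaling rule used by the author is not specified, so treat this as an approximation under that assumption.

```python
import statistics

def rescale_anomalous(returns, z_limit=3.0, max_iter=20):
    """Iteratively pull log-return outliers back inside recomputed
    +/- z_limit * sigma boundaries. Only the fluctuation series used for the
    reference distribution is changed; the prices themselves are untouched."""
    x = list(returns)
    for _ in range(max_iter):
        mu = statistics.mean(x)
        sigma = statistics.pstdev(x)
        lo, hi = mu - z_limit * sigma, mu + z_limit * sigma
        clipped = [min(max(v, lo), hi) for v in x]
        if clipped == x:          # nothing left outside the boundaries
            break
        x = clipped
    return x
```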

FIGURE 5

III. BALANCED LADDER VALUES

Support and Resistance Levels, Ladder Values, and Balance Points


News, or new data, enters the public domain in a nearly continuous fashion.
This news is digested, interpreted, and acted upon by traders and investors
according to their view of the future. If analysts were unbiased and equally
proficient then one would expect an efficient market to value equities such
that the prices quoted would be an immediate best estimate of the intrinsic value
of the equity. In fact this does not happen because analysts are both biased and
unequally adept in judging the value of the news and estimating its effect on
the future business environment. Those of a more optimistic frame of mind, the
bulls, take long positions and buy the future. On the other side, those of a more
pessimistic view, the bears, sell the future short. The bulls and bears bet against
each other, pushing and pulling, causing prices to wander about what arguably
could be a better estimate of an underlying intrinsic value (Fox, 2009).


According to Kahn (2006), support is that price level at which the aggressive
selling of the bears has waned sufficiently to be offset by the rising aggressiveness
of buying by the bulls. Conversely, resistance is the level at which the aggressive
buying of the bulls has waned sufficiently to be offset by the rising aggressiveness
of selling by the bears. The support level acts as a lower price barrier in a region
previously dominated by bearish activity while a resistance level acts as an upper
price barrier in a region previously dominated by bullish activity. Ladder values
are affected by the presence of support and resistance levels as these levels are
approached. Where the bulls and the bears offset one another we expect to find
a transient point of equilibrium or balance. From a technical analyst's perspective
it is useful to be able to objectively determine when and where price barriers and
their associated balance points occur.

REQUIREMENTS FOR IDENTIFYING BALANCE POINTS
As price fluctuations follow a random walk they generate ladder values where,
as noted earlier, ladder values are the highest and lowest values attained after
some elapsed period of time, given a starting point or origin. An algorithmic
process is required to evaluate the ongoing sequence of day-to-day price
fluctuations and to generate, from any arbitrary starting point, a series of
ascending (bullish) and descending (bearish) ladder values and, based on
these values, locate transient points of balance. In order to do this it is first
necessary to define bullish and bearish probability zones. Additionally, logical
constraints are required to maintain a distinction or separation between ladder
values until statistical conditions dictate otherwise, that is, when bullish
and bearish ladder values coincide at the time of balance. Upon balance
the origin of a new segment of the random walk is identified and a new
ladder generation process begins. The statistical constraints that maintain a
distinction between the bullish and bearish ladders while ladder values vary
are defined by probability-weighted gain and loss functions.

PROBABILITY-WEIGHTED GAINS AND LOSSES: MAXIMUM EXPECTED UTILITY
Utility is defined as something that is wanted or desired and can be represented
by a numerical value. In our case utility is the benefit we receive as a result of our
position in the market, i.e., profit. Both positive and negative price changes can
result in profit. Positive fluctuations in prices have bullish utility while negative
fluctuations in prices have bearish utility. Bullish and bearish utility functions
are conditioned by the probabilities of the specific fluctuations considered.
A utility value multiplied by the probability of its occurrence yields the
expected utility.
At time t = 1, from Equations 1 and 2, X1 = ln(P1/P0). P1 represents the closing
price on any given day while P0 is the closing price a day earlier. Let U denote


the percent utility:

EQUATION 11: U = (P1/P0 -1)*100.


If P1 > P0 then there is a percent gain, G. If P1 < P0 then there is a percent loss,
L. Thus,

EQUATION 12: G = (P1/P0 -1)*100, P1 > P0,


EQUATION 13: L = (P1/P0 - 1)*100, P1< P0.
If P1 => P0 let X(p)1 = ln(P1/P0); if P1 < P0 let X(n)1 = ln(P1/P0). These
definitions are introduced to maintain a distinction between positive, p, and
negative, n, fluctuations. Note that exp(X1) = P1/P0. Therefore

EQUATION 14: G = {exp[X(p)1] -1}*100


EQUATION 15: L = {exp[X(n)1] - 1}*100.
X(p)1 and X(n)1 are random variables appearing in the positive half and
negative half, respectively, of the normal distribution N{0, σ}. The fluctuations
X(p)1 and X(n)1 can be expressed in terms of multiples of the standard deviation
σ. Thus X(p)1 = Z(p)1*σ and X(n)1 = Z(n)1*σ. Hence,

EQUATION 16: G = {exp[Z(p)1*σ] - 1}*100,

EQUATION 17: L = {exp[Z(n)1*σ] - 1}*100.
Having the percent gain and loss functions in this format is helpful when
calculating probability weighted utility values. The probability of a gain or loss of
any specific amount is determined by the Z factors in the exponential functions
of Equations 16 and 17. Given N{0, σ} there is a 99.74% probability that random
fluctuations will occur in the range between Z(n)1 = -3 and Z(p)1 = +3.
Consider the normal distribution, Figure 6. The probability that a random
fluctuation will occur in the range between Za and Zb is equal to the area under the
normal curve bounded by Za and Zb (Dixon, 1957). Similarly, the probability that
a random fluctuation will occur in the range between Zc and Zd is equal to the area
under the normal curve bounded by Zc and Zd. Clearly, the probability that random
fluctuations will occur more often near the mean of the distribution is much
greater than the probability of occurrences in the tail areas. On the other hand,
Equations 16 and 17, the utility is significantly improved as the magnitude of Z
increases. The bulls and the bears want their respective utility values to be large
as possible, i.e., large values of Z. The probabilities, however, determine what
they (the bulls and bears) will get on average. The product of the utility and the
corresponding probability determines the expected value.

FIGURE 6
Let Pr[Z(p)] represent the probability that a random fluctuation in price yields
a value Z(p) in a narrow zone of width ΔZ(p) centered at Z(p). Similarly, let
Pr[Z(n)] represent the probability that a random fluctuation in price yields a
value Z(n) in a narrow zone of width ΔZ(n) centered at Z(n). The expected values
are thus

EQUATION 18: E(p) = Pr[Z(p)]*{exp[Z(p)*σ] - 1}*100,

EQUATION 19: E(n) = Pr[Z(n)]*{exp[Z(n)*σ] - 1}*100.
Holding sigma constant and letting Z(p) vary incrementally from the mean to
+3 sigma and calculating the average expected utility for each increment, we
obtain the curve shown in Figure 7. A similar curve is obtained by letting Z(n)
vary incrementally from the mean to -3 sigma. Clearly, there are maximum
expected bullish and bearish utility values. Evaluating the Z values at the
maxima we find that Z(p) = 1.0 and Z(n) = -1.0. Therefore, at Z(p) = 1.0, the
maximum expected gain, Gx, given σ1, is

EQUATION 20: Gx = [exp(σ1) - 1]*100,

while, at Z(n) = -1.0, the maximum expected loss, Lx, is

EQUATION 21: Lx = [exp(-σ1) - 1]*100.

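The location of these maxima can be checked numerically. The Python sketch below approximates Equation 18 by the standard normal density times a narrow zone width and scans Z from 0 to 3 for an assumed sigma of 0.03; the grid search peaks at roughly Z = 1.01, essentially the Z = +1 value quoted above.

```python
import math

def expected_gain(z, sigma=0.03, dz=0.01):
    """Probability-weighted percent gain for a narrow zone of width dz centred
    at z (Equation 18 with Pr approximated by the standard normal density * dz)."""
    density = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return density * dz * (math.exp(z * sigma) - 1.0) * 100.0

zs = [i / 100.0 for i in range(1, 301)]   # Z = 0.01 ... 3.00
best = max(zs, key=expected_gain)
print(round(best, 2))                     # ~1.01: the maximum sits essentially at Z = +1
```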

The standard deviation, σ1, depends upon the volatility of the equity or index
considered and the number of data points entering into its calculation (Hull,
2000). The choice of the number of data points is somewhat arbitrary. In this
paper, a minimum of 252 data points, equivalent to one market year, is used to
calculate sigma. If we examine the volatility of the fluctuations, i.e.,
X1 = ln(P1/P0), of typical stocks and market indices, we find the standard
deviation is generally in a range between 0.01 and 0.04. As σ1 varies between
0.01 and 0.04 the values of Z(p)1 and Z(n)1, to a first order, remain equal to
+/- 1.0. Unless otherwise stated these Z values are used in the remainder of
this paper.

FIGURE 7

BULLISH AND BEARISH LADDER ZONES

The balanced ladder algorithm is described below. However, the algorithm


in its abstract conditional form may appear somewhat difficult to apprehend
at first. Therefore, in this section we will describe how the algorithm works with an
example. This should provide the reader with a better understanding of the
algorithm as it performs the analysis that produces balanced ladder values.
The maximum expected utility functions are unique reference loci that
demarcate bullish and bearish probability zones stemming from ladder
values. They define boundaries between which, by definition, balanced values
are found. Earlier we showed that for the Markov distribution the standard
deviation, σt, is proportional to the square root of time: σt = σ1*t^(1/2). Thus the
maximum expected utility loci, which depend upon the value of sigma, are
functions of the square root of time.
Consider Figure 8. The graph shows the two maximum expected utility loci
originating at a closing price P0 = 10. The black dashed curve represents the
bullish boundary while the red dashed curve represents the corresponding
bearish boundary. At the origin there are no separate ladder values. The
equations for the bullish and bearish maximum utility boundaries are,
respectively,

EQUATION 22: UGx(t) = exp(lnP0 + σ1*t^(1/2)) = P0*exp(σ1*t^(1/2)),

EQUATION 23: ULx(t) = exp(lnP0 - σ1*t^(1/2)) = P0*exp(-σ1*t^(1/2)).
Fluctuations resulting in prices above UGx(t) are considered bullish while prices
below ULx(t) are considered bearish. Prices between UGx(t) and ULx(t) vary
from somewhat bullish to indeterminate to somewhat bearish.
Each daily close is the potential starting point for a new Markov process as
ladder values are defined by closing values. Typically, between points of
balance, only one ladder value is active at a time. The active ladder is adjusted
to the current closing value and thus becomes the origin of the Markov process.


The other ladder value is necessarily inactive or passive. Adjustments to ladder


values continue on a daily basis until a balance value is detected. At this point
both ladder values are active and each is set equal to the current close.

FIGURE 8

According to the example of Figure 8, given σ1 = 0.03, the maximum expected
price for the next day is 10.30; the minimum expected price is 9.70. However,
suppose a loss of 5% occurs such that the next day's close is at 9.50. This
represents a significant bearish event. Figure 9 shows the status at the close of
day one. The first fluctuation after the origin has produced two ladder values,
an upper ladder value at 10 and a lower ladder value at 9.50. Since the most
recently changed ladder value is the location of the ongoing Markov process the
active ladder is 9.50 while the origin at 10 is passive. At the active descending
ladder level there are new bullish and bearish reference loci, i.e., maximum
expected utility functions. Since the ladder at 10 is passive the original bullish
and bearish reference loci are unchanged. The bullish locus stemming from the
bullish ladder and the bearish locus stemming from the bearish ladder are not
needed in any subsequent discussion in this paper. Therefore they are not shown
in the graph as their continued representation causes unnecessary graphical
clutter. Hence, only two reference loci are displayed: the bearish maximum
utility function, ULx(t), originating from the ascending ladder, and the bullish
maximum utility function, UGx(t), originating from the descending ladder.

FIGURE 9

Next refer to Figure 10. After 4 bearish days the last close is now 8.75. The
descending ladder has continued to be active. During this interval the ascending
ladder has remained inactive at 10. Note that a gap has opened up between
the two maximum expected utility loci, i.e., between 9.02 and 9.42. With a
gap between the utility functions the next closing price must fall into one of
three probability zones: the upper bullish zone, the lower bearish zone, or the
intermediate zone where balance occurs. If the closing value is in the bullish
zone then the ascending ladder is active and adjusted. Similarly, if the closing
value is in the bearish zone then the descending ladder is active and adjusted.
If, however, the closing value falls into the intermediate gap then both ladder
values are active and a balance value results.

FIGURE 10

Referring again to Figure 10, if the close is less than 9.02, then prices remain
in the bearish zone. It is readily calculated that the probability of this event is
greater than 84%. If the close is greater than 9.42 then conditions have become
bullish. The probability of this happening at random is less than 1% and unlikely.
However, in case of this event the bullish ladder becomes active and is set equal
to the closing price. At the same time the bearish ladder becomes passive. If the
close does not fall in either the bullish or bearish zone then it falls in the gap
between 9.02 and 9.42. The probability of this happening at random is 14.9%.
In this case the two ladder values are logically equivalent, i.e., in balance, and
define a new origin. At this point the ladder generation process begins again
with the next fluctuation. It is obvious from symmetry considerations that if the
first fluctuation is positive or bullish the process will evolve in a similar fashion
to the one just described.
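The boundary values and probabilities quoted in this example can be reproduced directly. The Python sketch below assumes the ascending ladder has been unchanged for four days (m = 4) and the descending ladder is active (n = 1), as described for Figure 10, and recovers the 9.02 and 9.42 boundaries along with zone probabilities close to the 84%, 1%, and 14.9% figures.

```python
import math
from statistics import NormalDist

sigma = 0.03
asc, dsc = 10.0, 8.75      # passive ascending ladder, active descending ladder
m, n = 4, 1                # days each ladder has gone unchanged (from the example)

upper = asc * math.exp(-sigma * math.sqrt(m))   # bearish boundary from the ascending ladder
lower = dsc * math.exp(+sigma * math.sqrt(n))   # bullish boundary from the descending ladder
print(round(lower, 2), round(upper, 2))         # 9.02 9.42

nd = NormalDist()
z_low = math.log(lower / dsc) / sigma           # = 1.0
z_high = math.log(upper / dsc) / sigma          # ~ 2.45
print(round(nd.cdf(z_low), 3))                  # ~0.841: close stays in the bearish zone
print(round(1 - nd.cdf(z_high), 3))             # ~0.007: close jumps into the bullish zone
print(round(nd.cdf(z_high) - nd.cdf(z_low), 3)) # ~0.152: close lands in the gap (balance)
```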
The maximum expected utility reference loci that separate bullish and bearish
zones are unique statistical boundaries that are analogous to control chart
probability zones used in industry to statistically detect unnatural patterns.
(Small, 1957). In particular cases the bullish and bearish zone boundaries may
be adjusted by the analyst in order to investigate the effects of the boundary
locations on the signal generation process.

BALANCED LADDER ALGORITHM

Let St = ln(Pt). Define the origin of the ladder generating process by S0 and
subsequent ascending and descending ladder values by SAi and SDj, respectively.
The indices i and j are required to keep track of where changes to ladder values
occur. At the close of each trading day ladder values are compared to the closing
price and adjusted according to the rules described below.
The first two rules are immediately obvious. If todays close is equal to or greater
than the current ascending ladder value then change the ascending ladder
value to equal todays close. Similarly, if todays close is equal to or less than the
current descending ladder value then change the descending ladder value to
equal todays close. These rules, in conditional form, are as follows:

R1: If St => SA(t-1) then SAt = St.


R2: If St <= SD(t-1) then SDt = St.
Rule 1 states that if the natural logarithm of today's closing price is equal to or
greater than the natural logarithm of yesterday's ascending ladder value then
record the natural logarithm of today's closing price as today's ascending ladder
value. Rule 2 is the image of Rule 1. Rule 2 states that if the natural logarithm of
today's closing price is less than or equal to the natural logarithm of yesterday's
descending ladder value then record the natural logarithm of today's closing
price as today's descending ladder value.
Between points of balance it is necessary to maintain a distinction between
ascending and descending ladder values. The maximum expected-utility
values are used to do this. As described in the example in the prior section,
the maximum expected-loss function, originating with the ascending ladder
value, is the bearish boundary for the bullish probability zone. Closing prices
on or above this boundary are assumed to be in the bullish zone. Similarly, the
maximum expected-gain function, originating with the descending ladder
value, is the bullish boundary for the bearish probability zone. Closing prices on
or below this boundary are assumed to be in the bearish zone. In logarithmic
format the maximum expected utility boundaries are,

EQUATION 24: SA(i+m) = SAi - σ1*m^(1/2),

EQUATION 25: SD(j+n) = SDj + σ1*n^(1/2).

The factors m and n are square root of time functions. If there is no change in the
value of the ascending ladder, i.e., if the ladder is inactive, then the value of m is
increased by one unit for each inactive day. Similarly, if the descending ladder is
inactive, then the value of n is increased by one unit for each inactive day. In all
instances (i + m) = (j + n) = t where t is the current time coordinate. As long as
a ladder level is active its m or n factor remains equal to unity.
As described in the example, at the end of each business day the closing price
falls into one of the three zones: the upper bullish, the lower bearish, or in
between. The fluctuations of interest are those that result in prices falling in

the gap that forms between the two maximum expected utility boundaries. It
is assumed that in order for a fluctuation to result in a closing price falling in the
gap between the ladder probability zones then either bullish or bearish price
pressures have eased, causing prices to move in the direction of equilibrium. The
values that appear in the intermediate probability zone between the maximum
expected loci are the balance points. The algorithmic procedure described next
uniquely identifies balance values.
Suppose the logarithm of today's closing price, St, is less than the logarithm
of yesterday's ascending ladder value, SA(t-1). Then St is located in one of the
three probability zones. By definition, the ascending ladder value must remain
above the bearish probability zone, i.e., the zone bounded by the bullish
expected value locus with the descending ladder value as origin. Thus, in order
to change the ascending ladder value St must be greater than SD(j+n) = SD(j)
+ σ1*n^(1/2). Otherwise there is no change in the ascending ladder value. An equal
but opposite argument applies to the descending ladder value. If the logarithm
of today's closing price is greater than the logarithm of yesterday's descending
ladder value, SD(t-1), then St is located in one of the three probability zones.
Again, by definition, the descending ladder value must remain below the bullish
zone boundary, i.e., the zone defined by the bearish expected value locus with
the ascending ladder value as origin. Thus, in order to change the descending
ladder value, St must be less than SA(i+m) = SA(i) - σ1*m^(1/2). Otherwise there is
no change in the descending ladder value. In conditional format these rules are:
R3. If St < SA(t-1) and St > SD(t-1) + σ1*n^(1/2) then SAt = St;
otherwise SAt = SA(t-1),
R4. If St > SD(t-1) and St < SA(t-1) - σ1*m^(1/2) then SDt = St;
otherwise SDt = SD(t-1).
These rules are sufficient to identify the occurrence of values that fall in the
gap between the upper bullish zone and the lower bearish zone. Rule 5 follows
immediately from the above:
R5. If SAt = SDt then St = S0.
With the occurrence of S0 the bullish ladder value is logically equivalent to the
bearish ladder value. This equivalence defines a transient balance between
bullish and bearish sentiment. S0 marks the origin of a new segment of the
Markov process. The terms balance values and signals are used interchangeably
in the following.
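A compact Python sketch of Rules R1 through R5 follows. It works in log space, keeps the m and n day counts per the (i + m) = (j + n) = t bookkeeping described above, and reports balance values. The handling of ties and of the rare indeterminate case discussed below is simplified, and the Z factor of 1.0 reflects the default discussed earlier; treat the whole block as an illustrative sketch rather than the author's exact implementation.

```python
import math

def balanced_ladder_signals(closes, sigma, z=1.0):
    """Track log ascending (sa) and descending (sd) ladder values and return
    balance values as (day index, closing price) pairs (Rules R1-R5)."""
    s = [math.log(p) for p in closes]
    sa = sd = s[0]          # at the origin both ladders sit on the close
    m = n = 1
    signals = []
    for t in range(1, len(s)):
        st, sa_prev, sd_prev = s[t], sa, sd
        asc_changed = dsc_changed = False
        # R1 / R3: ascending ladder
        if st >= sa_prev:
            sa, asc_changed = st, True
        elif st > sd_prev + z * sigma * math.sqrt(n):
            sa, asc_changed = st, True
        # R2 / R4: descending ladder
        if st <= sd_prev:
            sd, dsc_changed = st, True
        elif st < sa_prev - z * sigma * math.sqrt(m):
            sd, dsc_changed = st, True
        m = 1 if asc_changed else m + 1
        n = 1 if dsc_changed else n + 1
        # R5: both ladders on the same value -> balance, new origin
        if sa == sd:
            signals.append((t, closes[t]))
            sa = sd = st
            m = n = 1
    return signals
```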
Price levels where a balance occurs are observed to be both anticipated and
recurrent. Such prices are considered important by a majority of market
participants as trading activity and fluctuations are influenced when these
levels are approached. Ladder values are affected by the perceived value of
the stock or index. Perceived value includes psychological factors; hence,
ladder values are to some degree a reflection of market psychology. If market
psychology is bullish, new ascending ladder values are more likely to occur; on
the other hand, if market psychology is bearish new descending ladder values
are more likely. When balance occurs it is assumed, based on the probabilities,
that the bulls and bears have a similar perception of market value.
It is statistically possible for a sequence of fluctuations to occasionally close
the gap between the utility functions such that the bullish locus stemming
from the descending ladder is above the bearish locus stemming from the
ascending ladder. If this happens, depending on where the next close falls, both
ladder levels may become passive. Until the gap reopens market sentiment is
indeterminate. This condition is observed to be infrequent and, when it occurs,
short-lived.
Two types of errors are associated with balance value signals. The first type is
caused by random noise fluctuations instead of the assumed balance between
bullish and bearish sentiment. The second type of error happens if there is a
balance in fact that goes undetected. Reducing the risk associated with spurious
noise signals will be dealt with in a later section. Addressing the second type of
error is beyond the scope of this paper.

BALANCED LADDER VALUE SUPPORT AND RESISTANCE BARRIER LEVELS
Support and resistance barrier levels are located through balance values.
Support exists whenever enough buying strength enters the market to
generate a balance value. Equivalently, resistance exists whenever enough
selling strength enters the market to generate a balance value. Therefore the
following rules are used to define support and resistance price barriers:
R6. A support barrier is defined by the low that occurs the day before a signal
when the signal value is greater than the day earlier close.
R7. A resistance barrier is defined by the high that occurs the day before a
signal when the value of the signal is less than the day earlier close.
It will be shown that levels defined by prior support and resistance barrier levels
often reappear as balance values and conversely.

DEMONSTRATION: THE DJIA 10,000 LEVEL


As Brown (2003) points out, technical analysts recognize that certain aspects
of their profession amount to an art and depend upon experience. The same is
true when analyzing a data sequence using the balanced ladder algorithm. The
bullish and bearish zone boundaries, UGx(t) and ULx(t), are probability based
reference values. In specific cases these may need to be adjusted to account for
the buying and selling habits of the bulls and bears. Some take action before
the maximum expected utility boundary, others later, blurring the region
around the maxima. As analysts specialize in interpreting price action of various

30

stocks or indices, each stock or index must be viewed as possibly unique with
its own Z values. In practice, we start with Z factors set to +/-1. Adjustment of
the Z values, if required, aligns balanced ladder values to support and resistance
levels believed by the analyst to be valid. The way this is done is straightforward:
Z values are simply reduced in magnitude in decrements of approximately 0.05
until signals appear on or near a selected barrier point or level. By decreasing the
Z values one increases the probability of a signal and decreases the probability
that the active level will become passive and conversely. The analyst must
review the results to make sure they satisfy common sense.
Consider the DJIA, an algebraic function of a collection of price sequences, each
of which is fluctuating in an unpredictable but generally correlated manner. It
is known that there are circumstances in which a prior price, or an anticipated
price, has an influence on market activity. Such circumstances have at least
one thing in common: a large number of active traders and investors attach
meaning to these values (Cassidy, 1997). Their collective buy and sell decisions
are influenced by, and have an effect on, fluctuations in the neighborhood of
these levels. Some of these levels are known to have psychological significance.
When the Dow Jones Industrial Average was first approaching the 10,000 level
in March of 1999, a great deal of publicity concerning this event occurred in
both print and broadcast media. Obviously, the 10,000 level had psychological
and technical significance to both the investment and trading communities.
A similar significance was also attributed to the 11,000 level (Lucchetti, 2005).
Such levels are often considered technical resistance and support price barriers.
The results of the application of the balanced ladder value algorithm to more
than 4 years of daily DJIA data, from May 1, 1997 through December 31, 2001,
are shown in Figure 11. The 10,000 and 11,000 levels are shown in red. Balance
values are indicated by yellow diamonds. Resistance and support barrier values
associated with signals are shown by blue dashes. Z values were reduced in
magnitude in order to produce the double signal shown in the first ellipse. Of
the 1174 daily closing values in this graph, 67 are balance values, i.e., 5.7%
of the total number of closing values tested. Between March 18, 1999 and
November 28, 2001 eight signals, shown within the six ellipses of Figure 11,
occurred in the immediate vicinity of the 10,000 level. The mean value of the
eight signals is 10,000.3. Similar results are found at 11,000 along with levels
intermediate to 10,000 and 11,000. These results suggest that it is plausible
to argue that the signals generated by the application of the balanced ladder
algorithm are indicative of values that have special significance, psychological
or otherwise, to a plurality of active market participants.


FIGURE 11

SIGNAL AND BARRIER VALUE VALIDATION RULES
A basic distinction between a purely random time series and a sequence
produced by stock prices or market index data is that trends due to underlying
causes can occur in the market while in a random time series, by definition,
trends with underlying causes do not exist. However, the application of the
algorithm to series known to be random will generate signals if a sequence of
random fluctuations satisfies the statistical requirements for a balance value.
Some of these signals may suggest trends. Therefore one cannot be sure if any
particular signal occurring in a stock price or market index sequence is valid
(error of the first type). Additional screening criteria are required to reduce the
risk of error. Therefore the following rules are used to validate balance values,
barriers, and associated trends:
R8. Two or more balance values are assumed to be valid if each value is within
plus or minus one-half percent of the average of the balance values.
R9. Barrier values associated with valid balance values are assumed to be valid.
According to Kahn (2006), when defining trend lines, particularly for longer
periods of time, the best fit to the data is obtained using logarithmic scaling.
Hence,
R10. If the natural logarithms of at least four signals are collinear and the linear
regression correlation coefficient, R2, is equal to or greater than 99%, then the
signals are valid and define a trend.
R11. Trend lines drawn through two or more balance and barrier values and
parallel to a valid trend are also valid along with the individual balance and
barrier values.
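Rule 10 can be screened mechanically. The Python sketch below fits a least-squares line to (day, ln price) pairs for a set of candidate signals and accepts them as defining a trend only if there are at least four of them and the R-squared is at or above 99%; the function names are illustrative only.

```python
import math

def r_squared(times, log_values):
    """Coefficient of determination for a least-squares line through
    (time, ln(value)) pairs, used to screen candidate trend lines (Rule 10)."""
    k = len(times)
    mx = sum(times) / k
    my = sum(log_values) / k
    sxx = sum((x - mx) ** 2 for x in times)
    syy = sum((y - my) ** 2 for y in log_values)
    sxy = sum((x - mx) * (y - my) for x, y in zip(times, log_values))
    if sxx == 0 or syy == 0:      # degenerate input: no spread in x or y
        return 0.0
    return (sxy * sxy) / (sxx * syy)

def defines_trend(signal_days, signal_prices, threshold=0.99):
    """Rule 10 screen: at least four signals and R-squared >= 99%."""
    if len(signal_days) < 4:
        return False
    logs = [math.log(p) for p in signal_prices]
    return r_squared(signal_days, logs) >= threshold
```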

VI. BALANCED LADDER VALUE TECHNICAL ANALYSIS

TRENDS
According to Dow Theory, market trends entail three different components


operating simultaneously on different time scales. These consist of primary
trends that are in place from three to six years, secondary trends that exist for
weeks to months, and short term day-to-day trends. Dow Theory emphasizes
the identification of the primary trend. Balanced ladder value technical analysis
is no different in this regard. In standard technical analysis, trend lines
are drawn to encompass extreme highs and lows (Schwager, 1999, p 42).
However, these are not always representative of the principal trend and an
internal trend line may be more effective. Accordingly, An internal trend line
is drawn to best approximate the majority of relative highs or relative lows
(Schwager, 1999, p 44). In the case of balanced value technical analysis
the identification of primary, secondary, and short term trends is based on
Rule 10: the natural logarithms of at least four signals must be collinear with a
correlation coefficient R2 equal to or greater than 99%. Internal trend lines are
used to identify the primary trend.

CHANNELS AND BANDS


A channel is simply a pair of parallel trend lines along the top and bottom of
a price series. The channel is a measure of the trending behavior. As long as
prices remain within the channel the trend is considered to be intact. In the case
of balanced value technical analysis the principal signal defined trend line is
translated parallel to itself and through the highest and lowest validated barrier
values. The barrier value extremes define the limits of the trend channel. The
slope of a channel can be positive or negative. If over some period of time a
collection of signals and barrier values falls within a range bounded by parallel
horizontal lines then prices are said to be fluctuating within a band.

APPLICATION EXAMPLE
Figure 12 is an example of the application of balanced ladder rules to some of
the data of Figure 11. The solid red trend line satisfies the requirements of Rule
10. The trend line correlation coefficient, R2, is greater than 0.999, practically a
perfect fit. Three other trend lines, shown in black, are drawn parallel to the
red trend line. The dashed red trend line is a projection along the slope of the
principal trend. Arrows point out a few examples of valid signals and barrier
values. Note that prior balance values often define subsequent resistance and
support values. For example, consider balance and barrier values at the 9.3
level. The converse is also observed to be the case.


FIGURE 12

VII. OFF-BALANCE VOLUME SENTIMENT ANALYSIS
Volume is one of the key factors in technical analysis. The more shares being
traded at any time, the more significant the resulting fluctuations. Not only are
changes in total volume important, so too are changes in the proportion of the
volume that can be attributed to supply or demand. Hence, let H, L, C denote,
respectively, the day's high, low, and close and define the off-balance ratio by

EQUATION 26: R = [2*C - (H+L)]/(H-L).


It follows that -1 <= R <= +1. If C = H, R = 1; if C = L, R = -1. If C = (H+L)/2,
R = 0. On a daily basis the scaled volume is multiplied by R and added to the
prior day's total. The cumulative total represents the off-balance volume. On
the occurrence of a balanced ladder value the off-balance volume is set to zero.
Figure 13 shows an Off-Balance Volume chart for the DJIA. The ellipse encloses
the two signals that occurred at the 10,000 level in March 1999. The red arrows
locate the balance values. After the 10,000 level resistance barrier was broken,
following the second signal, the off-balance volume statistics indicated very
bullish market sentiment.

FIGURE 13
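A minimal sketch of this cumulative series is given below. It applies Equation 26 to raw daily volume (no particular volume scaling is assumed, since the scaling step is not specified here) and resets the running total on the supplied balance-value days; the function name and the reset timing are assumptions.

```python
def off_balance_volume(highs, lows, closes, volumes, balance_days=()):
    """Cumulative off-balance volume: each day's volume is weighted by
    R = [2*C - (H+L)] / (H-L) (Equation 26) and added to a running total,
    which is zeroed whenever a balanced ladder value occurs."""
    reset = set(balance_days)
    total, series = 0.0, []
    for t, (h, l, c, v) in enumerate(zip(highs, lows, closes, volumes)):
        r = 0.0 if h == l else (2.0 * c - (h + l)) / (h - l)
        total += r * v
        if t in reset:            # reset after recording the balance day (assumed)
            total = 0.0
        series.append(total)
    return series
```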

VIII. EXAMPLES

A. DJIA Prior Signal Levels: Fluctuation Effects
Figure 14 shows some of the effects that prior signal levels have on subsequent
price fluctuations. Levels A, B, C are extensions of earlier validated signals. Levels
A and B bracket fluctuations in the range between 8826 and 8691. Note that
the high the day before the second signal shown on the graph was 8850, just
above level A. At this point selling more than offset buying and as a result the
price was driven down sharply to 8612. A barrier at 8608 is associated with the
signal on level C. This graph provides further support for the assertion that prior
signals and barrier values influence subsequent buying and selling decisions
and thereby fluctuation statistics.

FIGURE 14

B. Illumina ILMN: Triple Top Formation


Figure 15 provides an example of a triple top formation. Parallel trend channels
are also shown. The triple top formation begins up against the upper channel
resistance barrier. The data is presented in more detail in Figure 16. The two
red dashed lines are located one-half percent, respectively, above and below
the mean of the three signals. Thus, according to Rule 8, the three signals are
valid. The upper blue dashed line connects two barrier values. The lower blue
dashed line is a valid support barrier. The black dashed line, S-1, extends from
the balance value that occurred just prior to the triple top formation.

FIGURE 15

FIGURE 16
Figure 17 shows the off-balance volume sentiment chart. The red vertical
arrows locate the points of balance. After the second arrow significant selling
pressure becomes evident. After the third arrow, as the daily volume increased
the off-balance volume was negative until the prices encountered the support
level shown in Figure 16.

FIGURE 17

C. CBOE Oil Index OIX: Head and Shoulders; Double Top
Figure 18 is a graphical analysis of the Chicago Board Options Exchange oil index. To the
left of the chart are valid head and shoulders and double top formations. These
formations are the basis for the resistance level A and support level B. These levels
bracket the signals that appear to the right on the chart. Once level B failed to hold
the index dropped precipitously to the major support level at approximately 6.5.
The blue arrow with negative slope represents a resistance boundary. After prices
found support and moved beyond the resistance boundary prices rose to the next
resistance level represented by the horizontal blue dashed line.

FIGURE 18

D. Amtech Systems ASYS
Figure 19 shows trends, levels and some tentative structural details for Amtech
Systems. Note that when resistance level B failed there was a sharp drop in the
price. Support was found at the level of two prior signals that are part of a head
and shoulders formation near the center of the chart. The resulting balance
value is validated by the signal to its left on the same level.

FIGURE 19
Figure 20 shows three anomalous spikes in daily volume. Associated with these
spikes are negative off-balance volumes. The last selling spike caused the failure
of support at level B shown on the previous chart. We might guess that some
large institution or fund decided that Amtech Systems had run its course and it
was time to exit.

FIGURE 20

IX. CONCLUSION
The basic premise underlying this work is that there is an ongoing tension that
exists between bullish and bearish psychology, that ladder values associated
with a random walk are driven by ever changing market expectations and
psychology, and that a balance or relaxation of the tension between the
pressures of supply and demand can be detected by means of probabilities and
statistics. One of the innovative concepts in this paper is the identification of a
bullish locus with the bearish ladder and a corresponding bearish locus with
the bullish ladder and the classification of values that occur in the gap between
these boundaries as transient points of balance. The importance of preexisting
support and resistance levels in this process is recognized. These levels are
assumed to influence fluctuations in their vicinity in such a way that balance
values can be located using statistical analysis. Consequently, on the occurrence
of a balance value, it becomes possible to detect the presence of support and
resistance barriers. Trends are observed as a drift in balance values between
bands and channels defined by support and resistance barriers.

REFERENCES
Box, George, Luceno, Alberto, 1997, Statistical Control by Monitoring and Feedback Adjustment (John Wiley & Sons, Inc., NY)
Brown, Constance M., 2003, All About Technical Analysis (McGraw-Hill, NY)
Cassidy, Donald L., 1997, It's When You Sell That Counts (McGraw-Hill, NY)
Dixon, Wilfrid J., Massey, Frank J., Jr., 1957, Introduction to Statistical Analysis (McGraw-Hill Inc., NY)
Feller, W., 1971, An Introduction to Probability Theory and Its Applications, Vol. II (John Wiley & Sons, Inc., NY)
Fox, Justin, 2009, The Myth of the Rational Market (Harper Collins, NY)
Hull, J.C., 2000, Options, Futures, & Other Derivatives (Prentice Hall, NJ)
Kahn, M. N., 2006, Technical Analysis Plain and Simple (Prentice Hall, NJ)
Lucchetti, A., 12/05/2005, Go Figure: Zeroing In on Market Milestones, The Wall Street Journal
Murphy, Jr., Joseph E., 1994, Stock Market Probability (Probus Publishing, Chicago, IL)
Rockefeller, Barbara, 2004, Technical Analysis for Dummies (Wiley Publishing, Inc., NJ)
Schwager, J. D., 1999, Getting Started in Technical Analysis (John Wiley & Sons, Inc., NY)
Small, Bonnie B., et al., 1956, Statistical Quality Control Handbook (Western Electric Company, Inc., Indianapolis, IN)

A COMPARISON OF SUPPORT AND RESISTANCE LEVELS DEFINED THROUGH BALANCED LADDER VALUES AND BY GOLDEN RATIOS

Carl Aspin

BIOGRAPHY
Carl Aspin
Carl Aspin recently retired as President of Aspin Engineering Services in Prescott, AZ. He received a degree in Engineering Physics from the
Colorado School of Mines. This was followed by post graduate engineering training at General Electric in Utica, NY. He was employed
at Motorola for more than 30 years in a wide range of R&D, engineering, and management positions. He is a graduate of the Motorola/

ABSTRACT
Technical analysts utilize the golden mean or Fibonacci ratios to locate price
sequence support and resistance levels. Balanced ladder values have been
shown to identify such levels as well. The two methods, which are independent,
are applied to both stock and market index sequences and the results compared.
It is shown through examples that there is excellent agreement between the
support and resistance levels identified by balanced ladder values and golden
ratios. The two techniques are seen to be complementary.

INTRODUCTION
The paper The Identification and Utilization of Balanced Ladder Levels in the
Technical Analysis of Stock and Market Index Time Series appearing earlier in
this issue suggests that balanced ladder values correspond to prices and price
barriers that form technical support and resistance levels. Price analysts often
use Fibonacci ratios or the golden mean, φ (phi), to determine where stock
and market index values should find support and resistance (Brown, 2003).
The two methods for identifying possible support and resistance levels are
distinct and independent. If both methods are equally valid then the results of
each should be in reasonable agreement. Thus, the purpose of this paper is to
apply the balanced ladder algorithm to stock and market index sequences and
compare the support and resistance barrier values obtained to those associated
with golden means.

I. Golden Mean
The golden mean or golden ratio, φ = 1.618..., and its association with Fibonacci numbers is well known to technical analysts. Perhaps not so well known is the fact that the convergence of Fibonacci ratios to φ is not unique. The Fibonacci series is a particular case of the recurrence relation:

EQUATION 1: xi = xi-1 + xi-2, i > 2.

As i values increase in size, the ratio of xi+1 to xi approaches φ as a limit. The initial values of the Fibonacci series, x1 and x2, are both equal to unity. However, regardless of the starting values, given the recurrence relation, Equation 1, as i values become large the ratio of xi+1 to xi always approaches φ as a limit (Livio, 2002). Consequently, rather than including a discussion of Fibonacci numbers, their ratios, and their possible application to price analysis, we will simply start with a basic assumption regarding the relationship between technical support and resistance (S/R) levels and obtain the golden mean directly.
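As a quick numerical check of this convergence property (an illustrative Python sketch, not part of the original derivation; the starting values 3.0 and 7.0 are arbitrary), the following fragment iterates Equation 1 and prints the successive ratios, which settle near 1.618 within a couple of dozen terms:

def recurrence_ratios(x1, x2, n_terms=25):
    # Iterate Equation 1, xi = xi-1 + xi-2, and collect the ratios of
    # successive terms; the ratios approach the golden mean regardless
    # of the (positive) starting values.
    ratios = []
    prev, curr = x1, x2
    for _ in range(n_terms):
        prev, curr = curr, curr + prev
        ratios.append(curr / prev)
    return ratios

phi = (1 + 5 ** 0.5) / 2  # 1.6180339...
for ratio in recurrence_ratios(3.0, 7.0):
    print(round(ratio, 7), "difference from phi:", round(ratio - phi, 7))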

II. Support and Resistance Levels: Basic Assumption
Let Y1 denote the price where the expected utility of buying and owning a
share of stock has decreased to the degree that the potential supply exceeds
actual demand. The result is that prices encounter buying resistance and reach
a relative high. At this point, according to Aspin (2014), a balanced ladder value
is expected to occur. Similarly, let Y0 denote the price where the expected utility
of selling and relinquishing ownership of a share of stock has decreased to the
degree that potential demand exceeds actual supply. The result is that prices
encounter selling resistance and achieve a relative low. Again, a balanced ladder
value is expected. Consider Figure 1, which shows Y1 as a resistance level and Y0 as a corresponding support level. Assume that the ratio of the relative high, Y1, to the relative low, Y0, is equal to the ratio of Y0 to the magnitude of the trading range between the high and low values. Then:

EQUATION 2: Y1/Y0 = Y0/(Y1-Y0).


Hence, EQUATION 2: Y1² - Y0Y1 - Y0² = 0.
Therefore, EQUATION 4: Y1 = φY0, where φ = (1/2)[1 + SQRT(5)].
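To see this, divide Equation 3 by Y0² and write r = Y1/Y0; Equation 3 becomes r² - r - 1 = 0, whose positive root is r = (1/2)[1 + SQRT(5)] = 1.618..., the golden mean φ.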


As Y0 represents a support level, 1.618Y0 denotes the corresponding resistance level. It follows that in general

EQUATION 5: Yn+1 = φYn, n ≥ 0.

The levels at n and n+1 are defined as the primary phi-based support and resistance levels.

FIGURE 2
Figure 3 shows the natural logarithm of the closing prices for Intel from June 3,
2009 through January 14, 2013. The closing values lie in a range between 12
and 30. Superimposed on the price data are primary, secondary, and tertiary
phi levels. For this example n = 6. Examining Figure 3, we see that 18 and 29
are approximate support and resistance levels, respectively. The secondary and
tertiary phi levels also show possible regions of support and resistance.

FIGURE 1
It follows from Equation 5 that the primary trading range, Yn+1 - Yn, is equal to 0.618Yn, i.e., 61.8% of the primary support level. This range is too wide to be generally useful for day-to-day or week-to-week analysis. Subordinate levels are required. Secondary and tertiary trading ranges are obtained by again assuming that between any two adjacent support and resistance levels the ratio of the resistance level to the support level is equal to the ratio of the support level to the trading range. Thus, the secondary levels, between Yn and Yn+1, are 1.236Yn and 1.382Yn. Tertiary levels between Yn and 1.236Yn and between 1.382Yn and Yn+1 are, respectively, 1.090Yn, 1.146Yn, and 1.472Yn, 1.528Yn. The half distance between two adjacent tertiary levels is 2.8% of Y0. Consequently, further subdivisions are of little practical value. According to standard practice a centerline at 1.309Yn is also included as a level. Once an initial value, Yn, is selected, all primary, secondary, and tertiary phi-based S/R levels are fixed. An example is shown in Figure 2. Y(0) is the baseline support level. Note that the phi levels depend only on n and are independent of any particular stock or index. The baseline level, Y(0), is arbitrary and must be determined by means of independent criteria. The criteria will be described next.
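A small Python sketch of this fixed template may help (the multipliers follow the text; the function name and the example baseline of 100 are illustrative choices, not the author's):

# The nine fixed multipliers between the primary support level Y(n) and the
# primary resistance level Y(n+1) = 1.618*Y(n), as listed in the text:
# primary (1.000, 1.618), secondary (1.236, 1.382), tertiary (1.090, 1.146,
# 1.472, 1.528), and the centerline (1.309).
PHI_MULTIPLIERS = (1.000, 1.090, 1.146, 1.236, 1.309, 1.382, 1.472, 1.528, 1.618)

def phi_levels(baseline):
    # Once the baseline Y(n) is chosen, every phi-based S/R level is fixed.
    return [k * baseline for k in PHI_MULTIPLIERS]

# Example with an arbitrary baseline support level of 100:
print([round(level, 2) for level in phi_levels(100.0)])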


FIGURE 3
Consider Figure 4. In this graph balanced value signals and corresponding barrier values are superimposed upon the sequence of daily closing prices. The signals and barrier values locate support and resistance levels (Aspin, 2014). By visual inspection it is seen that a support/resistance level occurs between 19 and 20. The horizontal blue lines define a price range between 19.23 and 19.86. The location of these range boundaries is a matter of judgment. In this example 12 signal/barrier data points fall within the selected range. The average of this data is the best estimate of the location of the S/R level. Referring to Figure 5, <Y> = 19.54. The red parallel lines are the plus and minus one-sigma levels, i.e., 19.75 and 19.32. Thus, we have a possible S/R level at 19.54 +/- 0.215.
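A minimal Python sketch of this averaging step, assuming the signal and barrier values are already available as a simple list of prices (the function name and the sample points are illustrative only):

from statistics import mean, pstdev

def estimate_sr_level(points, lower, upper):
    # Keep only the signal/barrier values inside the visually chosen range,
    # then report their mean as the S/R estimate and a +/- one-sigma band
    # (population standard deviation is used here).
    selected = [p for p in points if lower <= p <= upper]
    if not selected:
        return None
    level = mean(selected)
    sigma = pstdev(selected)
    return level, level - sigma, level + sigma

# Hypothetical points inside the 19.23-19.86 range discussed above:
sample = [19.30, 19.38, 19.45, 19.50, 19.52, 19.55, 19.58, 19.61, 19.65, 19.70, 19.76, 19.82]
print(estimate_sr_level(sample, 19.23, 19.86))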


FIGURE 4
FIGURE 6

FIGURE 5

FIGURE 7

In order to compare the S/R locations obtained by the two analysis techniques we must first specify a procedure for obtaining signal and barrier value averages. Thus, let Y(0) = <Y> define a baseline and denote <Yk> as the average that is to be compared to K*Y(0), where K = 1, 1.090, ..., 1.618. In order to obtain the average, <Yk>, a preselected range centered on K*Y(0) needs to be defined. The width of this range is somewhat arbitrary. We want to include as many data points as is reasonable without causing adjacent ranges to overlap. A (1 +/- 0.025)*Y(0) range is used for primary and secondary levels. Because intermediate K*Y(0) levels are not evenly spaced we do not have a one-size-fits-all data range. In order to prevent the overlap of adjacent tertiary ranges a narrower (1 +/- 0.020)*Y(0) range is required. An example of the nine levels and ranges is shown in Figure 6. Referring to Figure 7, Y(0) = 19.54 has been selected as the baseline. After baseline selection a specified range is centered on one of the K*Y(0) levels and the mean value of the signals and barrier values between the range boundaries is calculated.
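The comparison step can be sketched in Python as follows. The code is illustrative only: it reads the (1 +/- 0.025)*Y(0) and (1 +/- 0.020)*Y(0) ranges as bands centered on each K*Y(0) with a half-width of 2.5% (or 2.0% for tertiary levels) of Y(0), which is one plausible interpretation of the text:

from statistics import mean

MULTIPLIERS = (1.000, 1.090, 1.146, 1.236, 1.309, 1.382, 1.472, 1.528, 1.618)
TERTIARY = {1.090, 1.146, 1.472, 1.528}

def level_averages(points, y0):
    # For each multiplier K, average the signal/barrier values that fall inside
    # the band centered on K*Y(0).  Returns {K: (<Yk>, K*Y(0), count)}.
    results = {}
    for k in MULTIPLIERS:
        center = k * y0
        half_width = (0.020 if k in TERTIARY else 0.025) * y0
        inside = [p for p in points if abs(p - center) <= half_width]
        if inside:
            results[k] = (mean(inside), center, len(inside))
    return results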
The first result is shown in Figure 8. The mean < Yk > = 28.76; the corresponding
K*Y(0) value is 1.472Y(0) = 28.77. This process is repeated for the next level
with the results shown in Figure 9. Here < Yk > = 26.97 and 1.382Y(0) = 27.00.


FIGURE 8

FIGURE 10

FIGURE 9

FIGURE 11

This process is continued for the five remaining K*Y(0) levels. The results are
shown in Figure 10 and summarized in Table I. The weighted mean difference
between the K*Y(0) levels and the signal and barrier value averages is -0.02%.
Based upon this first example it appears plausible to argue that the balanced
value signals and associated barrier values are locating support and resistance
levels that are equivalent to phi levels, assuming that an appropriate baseline
value has been selected.
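The weighted mean difference quoted in the tables can be computed along the following lines. The weighting scheme is not spelled out in the text, so the sketch below assumes each level is weighted by the number of signal and barrier points in its band:

def weighted_mean_difference(level_stats):
    # level_stats maps K -> (<Yk>, K*Y(0), count), as returned by level_averages().
    # The result is a percentage difference between the averages and the template.
    numerator = sum(count * (avg - target) / target
                    for avg, target, count in level_stats.values())
    denominator = sum(count for _, _, count in level_stats.values())
    return 100.0 * numerator / denominator if denominator else 0.0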


III. Examples:
Four examples follow: Wells Fargo, McDonald's, the S&P 500, and the DJIA. These stock prices and index values cover four orders of magnitude. In each case it is shown that <Yk> and K*Y(0) levels can be aligned to exhibit close agreement.

A. Wells Fargo:
Figure 11 shows the closing prices, signals, and barrier values for Wells Fargo between June 3, 2009 and January 14, 2013. The baseline S/R level is 27.35. Figure 12 summarizes the results. In this example the centerline value of 1.309Y(0) is aligned with the baseline. The centerline is used in order to get a better overlay of the data by the phi template. The weighted mean difference between the K*Y(0) levels and the signal and barrier value averages, Table II, is 0.09%.


B. McDonald's:
Figure 13 shows the data and baseline for McDonald's. The baseline resistance level is 89.37. The results are summarized in Figure 14. In this example, as the baseline is set at the primary resistance level, the primary support level is fixed at 55.23. The weighted mean difference between the K*Y(0) levels and the signal and barrier value averages, Table III, is -0.02%.

C. S&P 500:
Figure 15 shows the data, baseline, and a summary of the results for the S&P 500. The primary support level is 914. The primary resistance level is 1479. The weighted mean difference between the K*Y(0) levels and the signal and barrier value averages, Table IV, is 0.13%.

FIGURE 15
FIGURE 12

FIGURE 16
FIGURE 13

D. Dow Jones Industrial Average:
Figure 16 shows the data, baseline, and a summary of the results for the DJIA. The baseline is 10202. This yields a primary support level of 7794 and a primary resistance level of 12524. The weighted mean difference between the K*Y(0) levels and the signal and barrier value averages, Table V, is -0.05%.

IV. Concluding Remarks:
The precision with which the <Yk> and K*Y(0) levels correspond depends upon the selection of the baseline data. According to the procedure used in this paper the selection of the baseline data is based upon the relative number of signals and barrier values found within a range at a particular level. Experience and judgment play a role in this selection process.

FIGURE 14

It is plausible to argue that phi analysis and the balanced ladder algorithm
identify the same support and resistance levels. Insofar as this is the case the
correspondence further validates the results of the balanced ladder algorithm.


And, assuming the correspondence holds in general, the balanced ladder levels can be used to more accurately position the primary phi levels. Once a satisfactory alignment has been found it is then possible to use phi levels to anticipate the location of future balance values. In this regard the two techniques are complementary.

References:
Aspin, Carl H., "The Identification and Utilization of Balanced Ladder Levels in the Technical Analysis of Stock and Market Index Time Series," Journal of Technical Analysis, 2014
Brown, Constance M., 2003, All About Technical Analysis, (McGraw-Hill, NY)
Livio, Mario, 2002, The Golden Ratio, (Broadway Books, NY)

TABLE III

TABLE I
TABLE IV

TABLE II
TABLE V



Keynes and the Psychology of Markets

Stella Osoba, CMT

Biography
Stella Osoba, CMT

Abstract
Keynes' insights into the behavior of market participants and the irrationalities which shape their behavior and inform their choices in a capitalistic society continue to be as relevant today as they were in the days when he first wrote about them. This article is an analysis of John Maynard Keynes' contributions to the fields of behavioral finance and behavioral economics, through a study of his writings and an overview of his own speculating and investing behavior.

Introduction
It was in Japan in July of 2007, as worldwide fears mounted over the growing problems in the US subprime mortgage market, loosening credit standards, and increasing signs of a housing bubble, that Chuck Prince, then CEO of Citigroup, made a comment widely believed to be the most infamous of the financial crisis of 2008-2009. He said, When the music stops, in terms of liquidity, things will be complicated. But as long as the music is playing, you've got to get up and dance. We're still dancing. (Financial Times, July 2007)
Over 70 years earlier, Keynes wrote, This battle of wits to anticipate the basis
of conventional valuation a few months hence, rather than the prospective yield
of an investment over a long term of years can be played by professionals
amongst themselves. For it is, so to speak, a game of Snap, of Old Maid, of Musical
Chairs - a pastime in which he is victor who says Snap neither too soon nor too late,
who passes the Old Maid to his neighbor before the game is over, who secures a
chair for himself when the music stops. These games can be played with zest and
enjoyment, though all players know that it is the Old Maid which is circulating,
or that when the music stops some of the players will find themselves unseated.
(Keynes, 1936)

Perhaps there can be no clearer illustration of the validity, in the real world, of Keynes' enduring description of the psychology of market participants, and proof of the validity of the behavioral finance which underpins technical analysis, than Chuck Prince's statement.

Background
Much has been written about John Maynard Keynes. He was a towering intellect of the twentieth century. Born in 1883, his father was a philosopher and an economist at Cambridge University, and his mother, Florence Ada Keynes, was the first female councilor of Cambridge Borough Council and became mayor of Cambridge in 1932. Keynes studied mathematics at Cambridge, and it was there that one of his tutors, Alfred Marshall, who became his mentor, encouraged him to turn his attention to economics. As an economist, Keynes' contributions to the formation of British economic thought were impressive. He worked as a clerk in the India Office, and wrote his first book, Indian Currency and Finance, which was published in 1913. Keynes left the India Office and joined the Treasury Department. He was the senior member of the delegation to Paris in 1919 that helped negotiate what became the Treaty of Versailles. Keynes resigned and left Paris when his warnings were not heeded about the dangers the harsh reparations demanded of Germany by the Allies could cause. He subsequently wrote The Economic Consequences of the Peace (1919), which turned out to be prescient. Keynes wrote The General Theory of Employment, Interest and Money (1936), considered his most important book. It spawned a revolution in economic thinking, introducing the field of macroeconomics and Keynesianism, a distinct school of economic thought. Keynes was also part of the British delegation to the United Nations Monetary and Financial Conference, also known as the Bretton Woods Conference, in New Hampshire in 1944, which was responsible for the establishment of the International Monetary Fund and the World Bank.

Keynes, J.M., (1972) The Collected Writings of John Maynard Keynes: Essays in Biography, My Early Beliefs, Volume 10, Pg 447-50, London, Macmillan.

Less well known is Keynes' contribution to the field of behavioral finance, the study of the influence of psychology on market participants. Even less is known about the fact that Keynes was, in his day, a very successful money manager and investor, operating an investment vehicle called The Syndicate which, were it to be around today, would appropriately be described as a hedge fund. Perhaps, were Keynes' achievements in economics less monumental, more attention would have been paid to his work on the psychology of markets and to his record as an investor. Keynes highlighted the role of psychology in economics long before behavioral economics and finance were formed as a distinct field of study. (Shefrin, 2011) Prior to Keynes' writings on the subject, a few significant works on psychology and market participants had been published, among them Charles Mackay's Extraordinary Popular Delusions and the Madness of Crowds (1841), about crowd behavior and financial swindles; Gustave Le Bon's influential book The Crowd: A Study of the Popular Mind (1895), about the social psychology of crowds; and The Psychology of the Stock Market (1912) by George Charles Selden, on the mental attitudes of market participants and their influence on price movements, especially in the short term. But the dominant school of economic thought was the classical school, which gave us the rational man. That human beings were rational, fully optimizing economic agents was, and continues to be, the accepted view among mainstream economists. But Keynes disagreed. In My Early Beliefs he states that, as time went by, the falsity of the view that human nature was reasonable became clearer to him. He states further that this view of what human nature is like, both other people's and our own, was disastrously mistaken. We are, he says eloquently, as water-spiders, gracefully skimming, as light and reasonable as air, the surface of the stream without any contact at all with the eddies and currents underneath. (Keynes, 1972) This paper is in two parts. The first part is a discussion of Keynes' main ideas on the irrationality underlying much of human behavior and how that influences investment decisions. The second part focuses on some of Keynes' speculating and investing behavior.

1. Keynes and the Behavior of Market Participants
In forming our expectations about the future, we often will not attach great weight to matters about which we are uncertain, even if those matters are decisively relevant to the issue about which we are forming our expectations. Our need to predict what might happen in the future, even though the future is uncertain, is, says Keynes, a result of our need to escape the crippling anxiety which uncertainty, like the sense of our mortality, produces within us. To escape this anxiety, the uncertainty is denied (Keynes, 1936). By uncertainty, Keynes says, he does not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty; nor is the prospect of a Victory bond being drawn. Or, again, the expectation of life is only slightly uncertain. Even the weather is only moderately uncertain. (Keynes, 1973) The sense in which Keynes uses the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention; about these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know. (Keynes, 1973)

And yet, driven by an overriding inability to tolerate uncertainty, we find ourselves overlooking this practical impossibility and acting as if the future were predictable. We are compelled by a certain unacknowledged crippling anxiety to pretend that the future is more certain than it is. Never acknowledging what we do not and cannot know, we predict the future.
Peace and comfort of mind require that we should hide from ourselves how little
we foresee. Yet we must be guided by some hypothesis. We tend, therefore to
substitute for the knowledge which is unattainable certain conventions, the chief
of which is to assume, contrary to all likelihood, that the future will resemble the
past. That is how we act in practice. (Keynes, 1973)
The assumption that the future is likely to be more similar to the past than an
examination of the past would show it to be is one of the techniques we have
devised to help us with our need to predict. We also assume that current prices
are correct and rationally derived unless something new shows us that it is not;
and, when we are unable to form individual opinions on the future, we behave
as if the rest of the world is better informed and we accept the judgement of
the majority as a true predictor of the future. (Keynes, 1973) Today, this latter
convention is more commonly known as herd mentality. These, according to
Keynes are the main techniques by which we hide from ourselves our inability
to predict the future.
The psychology of a society of individuals each of whom is endeavoring to copy the others leads to what we may strictly term a conventional judgement. (Keynes, 1973) Conventional judgement can compound market errors, because the basis for the judgment itself is unquestioned and accepted as correct even if it is in error. We saw evidence of this most recently in the events leading up to the credit crisis of 2008/2009, when the conventional judgment held by market participants was that house prices never go down. Based on this assumption securities were created, bundled, and sold to investors around the globe, culminating in the credit crunch of 2008 and the Great Recession, which was the worst financial crisis since the Great Depression of the 1930s.
Keynes goes on to say that classical economists who base their economic models on rational economic agents are victims of the same inability to acknowledge and tolerate uncertainty. He says that these economists had not analyzed carefully the psychology of how market participants actually behave in the marketplace in forming their models. Their unwillingness to accept the fact that our knowledge of the future is fluctuating, vague and uncertain (Keynes, 1973) renders their models useless in trying to predict human behavior in the longer term. He says, I daresay that a classical economist has overlooked the precise nature of the difference which his abstraction makes between theory and practice, and the character of the fallacies into which he is likely to be led. Keynes accuses classical economic theory of being itself one of these pretty, polite techniques which tries to deal with the present by abstracting from the fact that we know very little about the future. (Keynes 1973)
Keynes explains that in forming our expectations about those matters of which we are uncertain, we are initially guided by facts about which we feel more or less certain, which he calls the state of confidence. We anchor our decisions on those facts to which we can attach a level of certainty, even when they are irrelevant to the decision about which we are forming opinions. We often will look to those facts as they exist at present, which we know something about, and project them into the future in forming our long term expectations. We only adjust by the extent to which we have reason to believe that we should expect a change in future conditions. This state of confidence, Keynes argues, has not been studied by classical economists, who instead have been content to assume that they are able to use their models to calculate uncertainty in the same way as they can calculate matters which are certain (Keynes, 1973).
Rational economic man would, if he could, as has been assumed by classical economists, optimize all his decisions by calculating probabilities. But human decisions affecting the future, whether personal or political or economic, cannot depend on strict mathematical expectation, since the basis for making such calculations does not exist; and it is our innate urge to activity which makes the wheels go round, our rational selves choosing between the alternatives as best as we are able, calculating where we can, but often falling back for our motive on whim or sentiment or chance. (Keynes, 1936) Our tendency in decision making as human beings is to ignore the laws of probability for the most part when making decisions. In his Treatise on Probability Keynes stressed the necessity of explicitly considering psychology to improve probability theory.2
Keynes observes that the vast majority of those who are concerned with the
buying and selling of securities know almost nothing about what they are doing.
They do not possess even the rudiments of what is required for a valid judgment,

and are the prey of hopes and fears easily aroused by transient events and as easily
dispelled. This is one of the odd characteristics of the capitalist system under which
we live, which, when we are dealing with the real world is not to be overlooked.
(Keynes, 1971) This causes markets to be precarious and subject to capricious
changes in sentiment. According to Keynes the following factors also increase
the precariousness of markets:
1) The majority of people participating in markets are ignorant of the real state
of affairs of companies in which they invest.
2) Noise or day-to-day or smaller fluctuations in stocks which are in themselves
insignificant tend to have an excessive and at times absurd influence on the
market.
3) Because large numbers of ignorant individuals arrive at a conventional valuation of securities based on little more than the maintenance of convention, change is liable to be violent as a result of sudden fluctuations of opinion due to factors which have little or nothing to do with the prospective yield of the company, because there will be no strong roots of conviction to hold it steady. (Keynes, 1983)
With imperfect knowledge, and in the face of uncertainty, human beings have devised conventions which guide the decision-making process. These conventions are short cuts which allow us to pretend to ourselves that we know more than we do and that we have some justifiable basis for our decisions which can be relied upon. Since the convention is based on a somewhat arbitrary selection of facts, about which we have a level of confidence, in arriving at a projection for the future long-term performance of a security, it is subject to sudden and violent changes. New fears and hopes will, without warning, take charge of human conduct [when] the forces of disillusion may suddenly impose a new conventional basis of valuation. (Keynes 1973) Even though, when we attempt to make predictions about the future, we know that the existing state of affairs cannot continue indefinitely, nevertheless, says Keynes, the idea of the future being different from the present is so repugnant to our conventional modes of thought and behavior that we, most of us, offer a great resistance to acting on it in practice. (Keynes 1973)
Animal Spirits is a phrase that Keynes made famous when he used it in The
General Theory to apply to the spontaneous motivation in humans for action.
Keynes wrote that Even apart from the instability due to speculation, there is
the instability due to the characteristic of human nature that a large proportion
of our positive activities depend on spontaneous optimism rather than on a
mathematical expectation, whether moral or hedonistic or economic. Most,
probably, of our decisions to do something positive, the full consequences of which

2 Wesley, Pech. (2009), Behavioral Economics and The Economics of Keynes, Journal of Behavioral and Experimental Economics (Formerly the Journal of Socio-Economics), vol. 38, issue 6, pp. 2.


will be drawn out over many days to come, can only be taken as a result of animal
spirits - of a spontaneous urge to action rather than inaction, and not as the
outcome of a weighted average of quantitative benefits multiplied by quantitative
probabilities. (Keynes, 1936)
According to Akerlof and Shiller, animal spirits also refers to a restless and inconsistent element in the economy, to our peculiar relationship with ambiguity or uncertainty. Sometimes we are paralyzed by it. Yet at other times it refreshes and energizes us, overcoming our fears and indecisions. (Akerlof and Shiller, 2009) And it is on these animal spirits, on the spontaneous optimism of human nature, that enterprise depends. Animal spirits dim the thoughts of ultimate loss when we embark on new endeavors and push us to take risks and build enterprises which, if successful, will benefit the community as a whole. It is animal spirits which propel the pioneer forward and which cause the healthy man to put aside fears of death (Keynes, 1936).
Even though psychological factors underlie how market participants form
opinions upon which they base their decisions for predictions about the
future direction of prices, and even though this causes an acknowledged
precariousness, because it is human nature to become habituated to our
surroundings, Keynes says, we assume some of the most peculiar and temporary
of our late advantages as natural, permanent, and to be depended on, and we
lay our plans accordingly. (Keynes, 1919) Therefore, as long as the balance
of market participants accept the current basis for valuation this will cause
markets to show a degree of continuity and stability.

He who attempts [to be a serious minded investor] must surely lead much more
laborious days and run greater risks than he who tries to guess better than the
crowd how the crowd will behave; and, given equal intelligence, he may make
more disastrous mistakes. 3
Even the wisest and most astute of investors have to contend with the limitations
of the human mind. Their ignorance of the distant future is by far greater than
their knowledge of the present. In attempting to predict the future, they will
also find themselves disproportionately seeking clues in the present which they
will project into the future (Keynes, 1971). But what of the expert professional
investor? Can he not be relied on to correct the vagaries of the ignorant investor
with his greater skill, knowledge and judgment? Keynes explains that the
professional cannot be relied on to correct the irrationalities caused by the
masses, and may often exacerbate them. This is because no one wants to be left
holding a security which they believe the market will value lower three months

Keynes, J.M. (1936) The General Theory of Employment Interest and Money, Chapter 12 (5)


or a year from now. The professional investor is therefore engaged in the game
of trying to ascertain what public opinion will value a security at a few months
hence by looking for impending changes in the news or in the atmosphere, of the
kind by which experience shows that the mass psychology of the market is most
influenced. (Keynes, 1936) Professional investors find themselves therefore for
the most part engaged in a game of trying to beat the gun, to outwit the crowd,
and to pass the bad, or depreciating half-crown to the other fellow. (Keynes,
1936)
Keynes likens professional investment to a newspaper competition where
competitors have to pick out the six prettiest faces from a hundred photographs.
The prize is then awarded to the person who successfully picks out the face
which average opinion agrees is the prettiest. So the puzzle then becomes a
question of picking out of the bunch not the face that the competitor considers
the prettiest, but the face he thinks that average opinion will consider the
prettiest. In Keynes' words, It is not a case of choosing those which, to the best of one's judgment, are really the prettiest, nor even those which average opinion
genuinely thinks the prettiest. We have reached the third degree where we devote
our intelligences to anticipating what average opinion expects average opinion
to be. (Keynes 1936) Professional investing therefore becomes a part of the
same game played by the public. But with their greater insight, the professional
plays the game by trying to beat the crowd. And as Keynes says, because crowd
psychology can be discerned, it is wise and natural that the professional investor
should be influenced by their expectations on the basis of past experience of
the trend of mob psychology. Thus, so long as the crowd can be relied on to act
in a certain way, even if it is misguided, it will be to the advantage of the better-informed professional to act in the same way - a short period ahead. (Keynes, 1971) This makes the professional's actions somewhat rational (Winslow, 2010) while escalating the underlying irrationality in the market.
Chuck Prince came in for a lot of criticism for his dancing comments. But what
if he had chosen to ignore short term expectations and go against the street? If
this had caused Citigroup to underperform its peers, how would he then have
justified his position to his Board? It is likely he would have been viewed even
more harshly by his Board if in the midst of a raging bull market he was forced
to justify underperformance by pointing to the irrationality of others. And it is
possible that were he to be then relieved of his duties, his successor would have
joined in the game of musical chairs and got handsomely rewarded while the
game was being played. For as Keynes stated so presciently, worldly wisdom
teaches that it is better for reputation to fail conventionally than to succeed
unconventionally. (Keynes, 1936)


A good example of someone who sat out the game is famed investor Warren Buffett, widely acknowledged as the world's most successful investor. But he endured many years of ridicule during the dot-com bubble for some of his investment decisions, most notably his decision not to invest in anything he did not understand, therefore standing aside during the market frenzy. As a consequence, in 1999, Berkshire Hathaway underperformed the broad market indexes, with many analysts dismissing Buffett and claiming that he had lost his touch because he had missed the technology boom. When the market crashed in the early 2000s, with the Nasdaq dropping to an intraday low of 1,108.49 in October 2002 from an all-time high of 5,048 on March 10, 2000, Buffett was vindicated. He said at the time, It was a mass hallucination, by far the biggest in my lifetime. 4
Buffett therefore plays to perfection the role Keynes ascribes to the serious-minded investor who, unperturbed by the prevailing pastime, continues to purchase investments on the best genuine long term expectations he can frame. (Keynes, 1936) But not everyone can be a smart investor, or will be rewarded and not penalized for it. Keynes states some of the reasons why smart investment strategies are so much more difficult than those employed by the game-players. To stand apart from the crowd is to open oneself to the possibility of making more disastrous mistakes. It needs more intelligence to defeat the forces of time and our ignorance of the future than to beat the gun. Human nature desires quick results; there is a peculiar zest in making money quickly, and remoter gains are discounted by the average man at a very high rate. The game of professional investment is intolerably boring and over-exacting to anyone who is entirely exempt from the gambling instinct. Also, an investor who decides to ignore near-term market fluctuations needs greater financial resources and cannot afford to operate with large amounts of leverage. (Keynes, 1936)
Finally, Keynes points out that this type of investor is likely to come in for the most criticism, especially if he is managing funds for a committee, board, or bank, since such an investor who goes against the crowd/consensus is likely to appear eccentric, unconventional, and rash in the eyes of average opinion. If he is successful, that will only confirm the general belief in his rashness; and if in the short run he is unsuccessful, he will not receive much mercy. (Keynes, 1936) It is likely, therefore, that he will not remain in business long.

Summary
Our need to predict the future comes out of our inability to tolerate uncertainty, so we create conventions to help us to explain the future. Our predictions are based on facts in which we have a level of confidence, which we project into the future irrespective of whether or not those facts have anything to do with the issue upon which we are attempting to form predictions. When we are unable to form our own predictions, we accept conventional judgment as our guide, leading to herding behavior. When we have arrived at our predictions, our animal spirits compel us to take some action, any action. This behavior can cause precariousness in markets. But precarious markets can trend for extended periods of time and then shift and turn suddenly and unexpectedly, when the basis of conventional wisdom changes, resulting in booms and crashes.

2. Keynes' Behavior as Speculator and Investor
A speculator is one who runs risks of which he is aware and an investor is one who runs risks of which he is unaware. 5
Keynes described speculation as the activity of forecasting the psychology of the
market as opposed to enterprise, which he defined as the activity of forecasting
the prospective yield of an asset over its whole life (Keynes, 1936). As much as has
been written about Keynes the polymath, surprisingly little has been devoted to his
work as a speculator and investor. Fantacci, Marcuzzo and Sanfilippo (2010) focus on Keynes' commodity trading, in particular wheat futures. Chambers, Dimson and Foo (2013) focus on Keynes' management of King's College's endowment fund through his investment in equities. Chambers and Dimson (2012) point out this surprising knowledge gap in the lack of comprehensive studies of Keynes' records relating to his investment activity. Keynes began to speculate in the financial markets in August 1919. One strategy he employed was to sell short in the forward market the currencies of France, Italy, Holland and, after March 1920, Germany. He went long U.S. dollars, Norwegian and Danish kroner, and Indian rupees. (Keynes, 1983) In 1920, Keynes set up the modern equivalent of a hedge fund. Together with a few friends, he formed an investment fund which was called The Syndicate. He raised £30,000 of seed capital and began trading operations on January 21, 1920. Keynes managed the fund, initially speculating in currencies. In one trade, he went short French francs and lire and long rupees. By the end of May of that year, the Syndicate had realized profits of almost £9,000. But not too long afterwards, his operations ran into problems when European currencies strengthened against sterling, sterling rose against the dollar, and the rupee fell against sterling. (Keynes, 1983) That year, Keynes received a margin call. He moved to lessen risk by closing out contracts. In a letter to Sir Ernest Cassel dated May 26, 1920, Keynes said, speculators are being squeezed out everywhere and the prices quoted are absurd. (Keynes, 1983)

Keynes, J.M. (1936) The General Theory of Employment Interest and Money, Chapter 12 (5)
Keynes, J.M. (1983) The Collected Writings of John Maynard Keynes: Economic Articles and Correspondence, Volume 12, 109, London, Macmillan


Keynes rebuilt his fortune the same way he had lost it, by speculating. Records show that Keynes was an extremely confident speculator.6 By the end of 1922, he had paid off all of the debts he had incurred for the Syndicate and in the process became a substantial investor, with records showing his net assets in excess of £21,000.
It is not uncommon to come across superficial studies on Keynes where they
state something to the effect that he started off as a speculator/market timer,
learnt the error of his ways and then became a value investor. This is simply
not true. Apart from the earliest days when Keynes held individual stocks on a
modest scale, his speculating and investing activities overlapped for most of the
time he was active. Sometimes he speculated in the hopes of making a profit,
other times, he speculated to hedge investment positions.
By 1936, Keynes had become a substantial investor, managing money for
several entities, and he was still very active as a speculator. One incident will
illustrate his activities during this period. One day he was found measuring
the cubic capacity of King's College chapel. When people enquired why, he told them that he was about to take delivery of several loads of wheat. In fact, he had speculated in what amounted to a month's supply of wheat for the entire
country. He was measuring the chapel to see if it would hold the wheat when
he took delivery of it and was somewhat annoyed when he found that the
chapel was too small, only able to take about half of the wheat. Keynes then
hatched a plan. He decided that he would take possession of the wheat, but
as each cargo came in, he would object to its quality. He knew that each cargo
was coming in from Argentina and would likely need to be cleaned. He also
knew that the available machinery to clean wheat was limited and could only
handle one cargo at a time. It would also take some days to clean an entire
cargo of wheat. This is exactly what happened. As each cargo load came in,
Keynes would object to its quality and upon inspection, it was indeed found
that every cargo had to be cleaned. It took over a month for Keynes to clear the
position with no loss or profit to him for storing the wheat. 7
In a memorandum that Keynes wrote to the Estates Committee of King's College, Cambridge, dated 8th May 1938, he affirmed an earlier statement he had made, when he stated that it is safer to be a speculator than an investor in

the sense of the definition I once gave the Committee that a speculator is one who
runs risks of which he is aware and an investor is one who runs risks of which he is
unaware. (Keynes, 1983)
Keynes also wrote that, Speculators may do no harm as bubbles on a steady
stream of enterprise. But the position is serious when enterprise becomes the
bubble on a whirlpool of speculation. (Keynes, 1936) When speculators
dominate in markets over those involved in enterprise and markets become
little more than casinos, market instability is a result.
My central principle of investment is to go contrary to general opinion, on the
ground that, if everyone is agreed about its merits, the investment is inevitably too
dear and therefore unattractive. 8
Up until June 1919, when Keynes resigned from the Treasury, records indicate that he operated his trading activities on his own account only, and on a modest scale. At this time, though he did not trade for others, he did provide investment advice to friends and family. Records show that Keynes' first purchase was on July 6th, 1905, for 4 shares in the Marine Insurance Company (Keynes, 1983). Over the next few years, he bought more shares in a few other companies, still on a very modest scale. During this period Keynes' activities seem to have been limited to buying and holding small positions and adding to those positions.
After June 1919, Keynes began to both speculate and invest in financial markets on a much larger scale. Upon his return from the Treasury to take up teaching at King's College, Cambridge, he was made Second Bursar of the College's endowment fund. He immediately set about influencing the College to broaden its investment portfolio out of fixed income securities and real estate. He succeeded in getting the Trustees to authorize the setting up of a separate fund, which was the origin of The Chest.9 Through this fund, Keynes was able to invest in foreign government securities and other securities, including shares, currencies, and commodities. Up until that time equities had been considered a new asset class, and too risky for college endowments to touch.

6 Chambers, D. and Dimson, E. (2012) The Stock Market Investor, (5)
7 Keynes, J.M. (1983) The Collected Writings of John Maynard Keynes: Economic Articles and Correspondence, Volume 12, (10-12), London, Macmillan
8 Keynes, J.M. (1983) The Collected Writings of John Maynard Keynes: Economic Articles and Correspondence, Volume 12, (111), London, Macmillan. This statement was contained in a letter Keynes wrote to Jasper Ridley, a banker and also a Trustee of Eton College's endowment fund. Keynes was attempting to get Eton to buy Australian dollar bonds because of fears for the future of the dollar and Australia's reputation. But he was met with opposition from Ridley, who thought the trade was too risky.
9 At this time Colleges in Oxford and Cambridge were subject to the Universities and College Estates Act of 1925 (the Trustees Acts), which placed onerous restrictions on the types of financial investments Trustees were able to undertake. Funds were supposed to be managed conservatively. Keynes used his influence on the Fellows of King's College to permit a part of the endowment to be placed in a separate fund and excluded from the onerous provisions of the Trustees Acts. These funds became known as the Discretionary Portfolio, of which The Chest was a part and over which Keynes had full discretion from 1924 when he became First Bursar.



In the early days of his investing career, it would appear that Keynes employed a top-down macro investment strategy. Results of his performance are mixed and it would appear that this strategy did not result in consistent success.10 Records show that between 1922 and 1929, Keynes' performance was worse than the Bankers' Magazine index (Keynes, 1983). He was a very confident and extremely active investor in his early years, but also later on, with records showing that in most of the years between 1923 and 1940 the value of securities he sold exceeded the market value of the securities he held at the beginning of the year. An audit performed in 1945 showed that almost 30% of positions were held for 3 months or less, with only 15% being held for longer than 3 years (Keynes, 1983). After 1929, Keynes' investment record improved considerably and he outperformed the index by a wide margin in 21 out of 30 accounting years (Keynes, 1983).
In 1924 Keynes was made First Bursar, and he had complete discretion over the investments in The Chest. Keynes also held several other money management posts, including Chairman of the National Mutual Life Assurance Society (1921-38), a director of the Provincial Insurance Company, and directorships of a number of investment trusts, namely the Independent Investment Company (1923-46), the A.D. Investment Trust (1921-7), and the P.R. Finance Company (1924-36), of which he was also Chairman (1932-6). And, as if he did not have enough on his plate, he ran the Syndicate, his hedge fund.
As a macro manager, Keynes attempted to sell market leaders in a falling market and buy them in a rising one. Keynes appears not to have been able to do this successfully, saying it needs phenomenal skill to make much out of this strategy. He changed his investment strategy to a bottom-up approach, and he said, my alternative policy undoubtedly assumes the ability to pick specialities which have, on the average, prospects of rising enormously more than an index of market leaders. The discovery which I consider that I have made in the course of experience is that it is altogether unexpectedly easy to do this. (Keynes, 1983) It was easier for him to implement, he said, than his credit cycling strategy, and it also enabled him to take advantage of market fluctuations (which was the point of credit cycling), although in a different way. With this new strategy, he could buy when the market fell, because in market crashes only the astute investor is able to see that bargains are to be had, as shares have gone on sale. As Keynes puts it, It is largely the fluctuations which throw up the bargains and the uncertainty due to fluctuations which prevents other people from taking advantage of them. (Keynes, 1983) Keynes acknowledged that even though he had often been too slow to sell his shares after they had finished most of their rise, it was better, in his opinion, to be too slow to sell than too fast to sell in a rising market, because you would lose less (Keynes, 1983).
The evolution of Keynes' investment strategy is evident in a letter he wrote to F.C. Scott dated 15 August 1934: but, in so far as one is prepared to continue
to hold investments in the metal, I have decided for myself and for other accounts
for which I am responsible, to concentrate practically the whole of what I am
prepared to invest in this way in the Union Corporation, and then to hold the shares
obstinately for a period of years for a really large appreciation, - unless, as I have
said, the gold position as a whole shows signs of change. (Keynes, 1983) And
then he wrote, as time goes on I get more and more convinced that the right
method in investment is to put fairly large sums into enterprises which one thinks
one knows something about and in the management of which one thoroughly
believes. It is a mistake to think that one limits ones risk by spreading too much
between enterprises about which one knows little and has no reason for special
confidence. (Keynes, 1983)
In 1938, Keynes wrote a memorandum to the Estates Committee of King's College about the investments which he managed on behalf of the college. He wrote in that report how his investment policy had changed from 20 years earlier, when he first persuaded the college to invest in equities. In this letter it is apparent that Keynes had evolved and matured as an investor, and the three principles which he lays out in the letter are classic value investing strategies which are as relevant to successful investing today as they were to investing in Keynes' time.
According to Keynes, the principles to successful investing are:
(1) a careful selection of a few investments (or a few types of investment) having
regard to their cheapness in relation to their probable actual and potential intrinsic
value over a period of years ahead and in relation to alternative investments at
the time;
(2) a steadfast holding of these in fairly large units through thick and thin, perhaps
for several years, until either they have fulfilled their promise or it is evident that
they were purchased on a mistake;
(3) a balanced investment position, i.e. a variety of risks, in spite of individual
holdings being large, and if possible opposed risks (e.g. a holding of gold shares
amongst other equities, since they are likely to move in opposite directions when
there are general fluctuations). (Keynes, 1983)11 In this way, Keynes believed
that the skilled investor could succeed in achieving the social object of skilled
investment which should be to defeat the dark forces of time and ignorance which
envelope our future. 12

10 There still needs to be more research done into Keynes' trading record for a definitive evaluation of his trading results to be valid.
11 Keynes, J.M. (1983) The Collected Writings of John Maynard Keynes, London, Macmillan
12 Keynes, J.M. (1936) The General Theory of Employment Interest and Money, Chapter 12, Kindle Edn.


Conclusion
There is much for the technician to learn from an analysis of the writings and investment style of John Maynard Keynes. He was one of the earliest of the modern-day institutional and hedge fund managers. He traded a variety of markets and in a variety of styles. Though he did not always succeed, he was willing to take on risk and study the markets to glean what worked and what did not. He was not afraid to change his mind, as we have seen through the evolution of his trading strategies from top down to bottom up and from credit cycling to value investing. He was able to speculate and invest without locking himself into any one particular investment camp. And most important, perhaps, from our perspective as technicians, is his contribution to our understanding of the field of behavioral economics, or the psychology of market participants, which is the basis of technical analysis. As the Market Technicians Association says in its explanation of what technical analysis is, In general, a technician believes that people have predictable mental short cuts [when] reacting to action in the markets (known as heuristics in cognitive psychology). Technicians seek to profit by anticipating the mass psychological biases of buyers and sellers in a broad range of markets. And this is exactly what Keynes sought to do through his trading strategies.

References
Keynes, J.M. (1936), The General Theory of Employment, Interest and Money,
Chapter 12
Keynes, J.M. (1919), The Economic Consequences of the Peace, Chapter 1
Keynes, J.M. (1921) A Treatise on Probability
Akerlof, G.A. and Shiller R.J. (2009) Animal Spirits: How Human Psychology Drives
the Economy, and Why it Matters For Global Capitalism.
Keynes, J.M. (1971) The Collected Writings of John Maynard Keynes: Activities 1906
- 1914: India and Cambridge, London, Macmillan, Volume 5,
Keynes, J.M. (1971) The Collected Writings of John Maynard Keynes: The Treatise on
Money, London, Macmillan, Volume 6, p 323 - 324
Keynes, J.M. (1972) The Collected Writings of John Maynard Keynes: Essays in
Persuasion, London, Macmillan, Volume 9,
Keynes, J.M. (1973) The Collected Writings of John Maynard Keynes: The General
Theory of Employment, Interest and Money, London, Macmillan, Volume 7,
Keynes, J.M. (1972) The Collected Writings of John Maynard Keynes: Essays in
Biography, London, Macmillan, Volume 10, p 447 - 450
Keynes, J.M. (1983) The Collected Writings of John Maynard Keynes: Economic
Articles and Correspondence, London, Macmillan, Volume 12, p 4, 5, 10-12, 57, 82,
90, 109, 100-101, 107, 111

13 Keynes refers to heuristics as conventions.
14 www.mta.org/eweb/dynamicpage.aspx?webcode=what-is-technical-analysis


Keynes, J.M. (1973) The Collected Writings of John Maynard Keynes: The General
Theory and After: Defense and Development, London, Macmillan, Volume 14, p 112
-113, 114 - 115,124 - 125
Mackay, Charles (1980) Extraordinary Popular Delusions and the Madness of Crowds.
New York, NY: Crown Publishing Group
Tversky, A and Kahneman, D. (1974) Judgment under Uncertainty: Heuristics and
Biases. Science, New Series, Vol 185 pp. 1124-1131
Kahneman, D. and Tversky, Amos (1979) Prospect Theory: An Analysis of Decision
under Risk. Econometrica, 47(2), pp. 263-291
Chambers, D. and Dimson, E. (2012) The Stock Market Investor. Journal of Financial
and Quantitative Analysis (Forthcoming)
Shefrin, H. Behavioral Finance in the Financial Crisis: Market Efficiency, Minsky, and
Keynes
Beachy, B, (2012) A Financial Crisis Manual: Causes, Consequences, and Lessons
of the Financial Crisis. The Global Development and Environment Institute (GDAE)
Working Paper No.1206, Tufts University
Fantacci, L., Marcuzzo, M.C., and Sanfilippo, E., (2010) Speculation in Commodities:
Keynes' Practical Acquaintance With Futures Markets. Journal of the History of
Economic Thought, Volume 12, Number 3
Chua, J.H. and Woodward, R.S., (1983) J.M. Keynes Investment Performance: A
Note. The Journal of Finance 38(1): 232-235
Barberis, N., Shleifer, A. and Vishny R., (1998) A Model of Investor Sentiment,
Journal of Financial Economics, 49, pp. 307-343.
Wesley, Pech. (2009), Behavioral Economics and The Economics of Keynes, Journal
of Behavioral and Experimental Economics (Formerly the Journal of Socio-Economics),
vol. 38, issue 6, pp.891-902.
Winslow, Ted. (2010), Keynes on the Relation of the Capitalist Vulgar Passions to
Financial Crises, Studi e Note di Economia, Anno XV, n. 3-2010, pp. 369-388
Ricciardi, V and Simon, H.K. (2000) What is Behavioral Finance? Business,
Education & Technology Journal, Vol. 2, No. 2, pp. 1-9.
Sewell, M., (2010) Behavioural Finance. University of Cambridge.



Looking to Our Own Planet for Market Insights

Tom McClellan

Biography
Tom McClellan
Tom McClellan has done extensive analytical spreadsheet development for the stock and commodities markets, including the synthesizing of the
important market and economic data.
as an Army helicopter

Abstract
Natural phenomena have a much better correlation to stock prices than reason
or chance might allow. The linkage between the financial markets and sunspots,
rainfall, temperature, and other data likely runs through the agricultural sector.
Better or worse crop yields likely affect the financial markets more than is
appreciated by some analysts. Overwhelming evidence of a co-relationship exists,
even if we cannot establish a causal relationship. This information can be used to
aid market forecasting.

Introduction
When some technicians want to get answers ahead of time about what is going
to happen to the stock market, they turn for answers to the movements of the
other planets in our solar system. They think that they can find answers about
when stock prices will turn by looking at when planets other than Earth get into
certain specific orientations or relationships. But looking out into the solar system
can mean missing some really good information about what is happening right
here on our own planet. There are certain pieces of information about natural
phenomena here on Earth which have a much better correlation to stock prices
than reason or chance might allow. Something is clearly going on.
Among the interesting relationships between Mother Earth and the financial
markets are the following:
• Sunspots are associated with cycles of inflation, unemployment, social unrest
• Rainfall is bullish for stocks, or gold, depending upon where it falls
• El Niño is better for corporate profits and the stock market than La Niña
• A tornado was bad for Dorothy and Toto, but they are good for the stock market
• Global warming is a good thing, at least for the stock market
• More arctic ice means less GDP

sunsPoTs
Sunspots are a symptom of greater activity in the sun, and their numbers wax and
wane on a fairly consistent 9-14 year cycle, with an average period of 11 years. As
far back as 1924, Alexander Chizhevsky had correlated sunspot activity to social
phenomena including disease outbreaks and revolutions. (citation). That revelation
HPU$IJ[IFWTLZJOUPUSPVCMF CFDBVTFIJTFYQMBOBUJPOTGPSXIBUMFEUPUIF3VTTJBO
revolutions of 1905 and 1917 were at odds with what Joseph Stalin thought about
the reasons for those events. It resulted in Chizhevsky spending 8 years in a gulag
for espousing politically incorrect theories about the interrelationship between
geophysical and social phenomena.
Figures 1 and 2 show some up-to-date examples of what Chizhevsky was referring
to, with the ascending phase of each sunspot cycle seeing greater occurrences of
protest movements, revolutions, and other social unrest.
In Figure 1 it is not shown here, but is worth noting that sunspots saw a minimum in
 BOEUIFOOVNCFSTTUBSUFESJTJOHJO KVTUBTUIF"NFSJDBO3FWPMVUJPOXBT
getting underway. Similarly, the abolitionist movement reached a climax in the late
1850s, following a sunspot minimum in 1856 with rising sunspot numbers leading
VQUPUIFFMFDUJPOPG"CSBIBN-JODPMOJOMBUF GUQOHEDOPBBHPW
$PVOUJOH
forward from 2011, we can expect then next mass protest movement around 2022.

What jumps out to the naked eye is that the low points in the sunspot cycle
NJOJNB
UFOEUPDPJODJEFSPVHIMZXJUIJNQPSUBOUMPXTGPSTUPDLQSJDFT1FSIBQT
more importantly, the ascending phase of the cycle is associated with price
appreciation for stocks.

FIGURE 1 - Sunspots and protests/revolutions, 1900-1950

FIGURE 2 - Sunspots and protests/revolutions, 1950-2012


What Chizhevsky apparently did not know is that sunspot cycles also correlate pretty
well to the movements of the stock market, and they give a leading indication for
JOBUJPOBOEVOFNQMPZNFOU'JHVSFQSPWJEFTBDPNQBSJTPOCFUXFFOUIFNPOUIMZ
TVOTQPUOVNCFSBOEUIF%+*"

But it is still a really rough correlation, with some significant anomalies. This
means that the correlation by itself is not good enough to form the basis of an
entire trading system, even though over the decades it is evident that there is
something going on. Turning toward other types of financial market data, we find
better correlations to sunspot data, especially when an important adjustment is
made. Figure 4 shows a coincident comparison of the monthly sunspot number
UPUIFJOBUJPOSBUF

FIGURE 4 Notice that the peak in the sunspot cycle tends to coincide with a peak in the
JOBUJPO SBUF 5IFSF JT BMTP TPNF EFHSFF PG DPSSFMBUJPO FWJEFOU CFUXFFO UIF
TUSFOHUI PG UIF TVOTQPU DZDMF BOE UIF TFWFSJUZ PG UIBU JOBUJPO BU UIF QFBL 
although that relationship is a little bit weaker. There have been only 10 sunspot
DZDMFTTJODFUIFDSFBUJPOPGUIF$POTVNFS1SJDF*OEFYCBDLJO BOETPUIBU
is not enough to reliably correlate maximum sunspot number magnitude and
JOBUJPONBHOJUVEF
*UJTBMTPXPSUIOPUJOHUIBUTQJLFTJOUIFJOBUJPOSBUFIBWFPDDVSSFEBUPUIFSUJNFT 
not suggested by the sunspot cycle. These are usually spikes related to actions by
governments, such as wars or commodity bubbles, as well as the notable example
PGUIFRVBTJHPWFSONFOUBMBHFODZ01&$NPVOUJOHBOPJMFNCBSHPJOUIFT
Once those exogenous forces are removed from the marketplace, we can see that
JOBUJPOHFUTCBDLPOUSBDLXJUIUIFTVOTQPUDZDMF

FIGURE 3 -

52

If we make one key adjustment, as discussed below, we can find that the sunspot
cycle will actually lead movements in other data. Figure 5 provides a comparison
of the sunspot number to the U.S. unemployment rate:

The idenTificaTion
and
of balanced
ladder levels
in
lookinG
TouTilizaTion
our oWn PlaneT
for markeT
insiGhTs
The Technical analysis of sTock and markeT index Time series

Normal rainfall for New York City is about 48 per year. Interestingly, that is more
UIBO4FBUUMFBOE5BDPNBHFU JOUIFTVQQPTFEMZSBJOZ1BDJD/PSUIXFTU XIJDIJT
discussed below. It is not the average rainfall that matters for the markets, but
rather the deviation from average.

FIGURE 5 For this chart, the key adjustment is that the sunspot data have been shifted
GPSXBSECZZFBST NPOUIT
UPSFWFBMUIBUUIFVOFNQMPZNFOUSBUFGPMMPXTJO
UIPTFTBNFGPPUTUFQTZFBSTMBUFS BUMFBTUGPSUIFNPTUQBSU
5IJT SFWFMBUJPO JFT JO UIF GBDF PG XIBU TPNF FDPOPNJD IJTUPSJBOT TBZ JT UIF
explanation for protest movements, revolutions, and other mass defiance
phenomena. There is a popular theory that it is unemployment and economic
suffering which lead to such protests and revolutions. But the mass protest
movements appear to be coincident with the rise in the sunspot number, whereas
UIFSJTFJOUIFVOFNQMPZNFOUSBUFUFOETUPPDDVSZFBSTMBUFS
"TXJUIUIFJOBUJPOEBUBSFGFSFODFEBCPWF UIFSFBSFPDDBTJPOBMTQJLFTJOUIF
unemployment rate which are not explained by the sunspot cycle, but each can
be attributed to some type of governmental action. What can be said with greater
statistical reliability is that the rising phase of the sunspot cycle has always been
GPMMPXFECZBSJTFJOUIFVOFNQMPZNFOUSBUFZFBSTMBUFS PSBUMFBTUUIBUIBTCFFO
UIFDBTFGPSBTMPOHBTUIF-BCPS%FQBSUNFOUTVOFNQMPZNFOUSBUFEBUBIBWF
FYJTUFE TJODF
5IJTTUSPOHMZTVHHFTUTUIBUTIPVMETFFBSJTFJO
the unemployment rate, which just recently has been trying to get back on track in
the wake of the Fed-induced housing bubble in the mid-2000s and its subsequent
CVTU*GXFEPOPUTFFSJTJOHVOFNQMPZNFOUJO JUXPVMECFUIFSTUUJNF
JOPWFSZFBSTPGVOFNQMPZNFOUEBUBUIBUBSJTFJOTVOTQPUTXBTOPUGPMMPXFE
years later by a rise in unemployment.

rainfall
*GBOOVBMSBJOGBMMJO/FX:PSL$JUZJTHSFBUFSUIBOOPSNBM UIFOUIBUTHPPEGPSUIF
stock market. That might not seem like a good place to look for a cause and effect
relationship, but getting stuck wondering about causes can prevent us from seeing
what is.

FIGURE 6 - NYC Rainfall vs. DJIA


When New York City experiences wetter than normal years, as determined by
total rainfall over each trailing 12-month period, then that tends to be bullish for
the stock market. Figure 6 shows the deviation from normal, meaning from the
historical average of 48 inches per year. Wetter is obviously better, although the
SFTVMUTHPUJOUPBMJUUMFCJUPGUSPVCMFJOXIFOUIF'FEFSBM3FTFSWFXBT
putting its thumb on the scale with QE1, QE2, etc. Now we are seeing results that
show a dryer than normal period for New York City.
This same phenomenon occurred decades ago, as seen in Figure 7.

FIGURE 7 - NYC Rainfall vs. DJIA, 1910-1936


Notice that the late 1920s saw a wetter than normal period to coincide with the
DPODMVEJOHQIBTFPGUIFTUPDLNBSLFUTVQUSFOE"OEXIFO/FX:PSLTUBSUFE
drying out in 1929, that coincided with the crash and huge market decline, a

2014 . Issue 68

53

EPXOUSFOEXIJDIMBTUFEVOUJMUIFSBJOTUBSUFEDPNJOHCBDLJOMBUF"UUIFMFGU
end of the chart, we see that the market crash at the start of WWI coincided with
the start of a really dry period for New York City.
Why would that matter? Why is rain in New York City a good thing for the
economy, but dry periods are bad? It is not necessary have to answer the why in
order to notice the is. Our human brains like it better when we can understand
the why, but it is not essential. Until Copernicus came along, humans did not
understand the why of the sun rising each day, but they came to accept it given
the considerable evidence that it was so.
0OUIFPUIFSTJEFPGUIFDPVOUSZ SBJOGBMMUPUBMTJOUIF1BDJD/PSUIXFTUNBUUFS CVU
in a different way. Figure 8 compares rainfall deviations from normal in Tacoma,
Washington, to gold prices:

el nino
5IF&M/JPQIFOPNFOPOJOWPMWFTXBSNFSUIBOOPSNBMXBUFSJOUIF1BDJD
Ocean, and it has a big effect on global weather. The name comes originally
GSPN1FSVWJBOTIFSNFOXIPOPUJDFEUIBUBXBSNDVSSFOUTFFNFEUPBQQFBSJO
some years around Christmas. It is strongly correlated to what meteorologists
call the Southern Oscillation Index, which refers to the difference in atmospheric
QSFTTVSF CFUXFFO UIF JTMBOE PG5BIJUJ BOE %BSXJO  "VTUSBMJB 5IF 4PVUIFSO
0TDJMMBUJPOIBTCFFOLOPXOBCPVUTJODFUIFT CVUJUTSFMBUJPOTIJQUP&M/JP
XBTOPUEFTDSJCFEVOUJM XIFO1SPG+BDPC#KFSLOFTPG6$-"XSPUFBCPVUJU
The relationship is now referred to in the weather industry by the acronym ENSO
&M/JP4PVUIFSO0TDJMMBUJPO
 GBDVMUZXBTIJOHUPOFEV

8IFO&M/JPDPOEJUJPOTBSFJOFFDU XFBUIFSPCTFSWFSTSFDPSEQSFEPNJOBOUMZ
IJHIFSBUNPTQIFSJDQSFTTVSFMFWFMTJO5BIJUJUIBOJO%BSXJO*GUIBUDPOEJUJPO
occurs over a long period of time (several months), it tends to be associated
XJUISJTJOHTUPDLQSJDFT5IFPQQPTJUFDPOEJUJPOJTLOPXOBT-B/JB XIJDIJT
generally bearish for stock prices.

FIGURE 8 - Gold vs. Tacoma Rainfall


The relationship is not quite as obvious as it is for NYC rainfall versus the stock
market, but wetter than normal weather in Tacoma is reliably associated with
rising prices for gold bullion. And both data series show a propensity for important
bottoms about every 8 years. If it were 11 years between important bottoms, we
could chalk it up as a sunspot cycle phenomenon. But this 8-year period between
important bottoms has been going on for as long as gold has been freely traded in
the U.S., hinting at a common cause in both sets of data, but a cause which eludes
rational explanation. The next iteration of this cycle calls for a bottom in both gold
QSJDFTBOE1BDJD/PSUIXFTUSBJOGBMMJO

FIGURE 9 - SP500 vs. Inverse of Southern Oscillation Index

Figure 9 shows an indicator calculated as a cumulative running total of Southern


Oscillation Index values (raw data from NOAA1), inverted for display in this chart
to reveal the correlation. Its rising and falling periods are well-correlated with the
NPWFNFOUTPGUIF41'JHVSFQSPWJEFTBDMPTFSMPPLBUUIFNPSFSFDFOUZFBST
PGUIJTSFMBUJPOTIJQ BOEDPNQBSFEUIJTUJNFUPUIF/:4&T"EWBODF%FDMJOF-JOF

GUQGUQDQDODFQOPBBHPWXEEHEBUBJOEJDFTTPJIJTBOEIUUQXXXDQDODFQOPBBHPWEBUBJOEJDFTTPJ

54

The idenTificaTion
and
of balanced
ladder levels
in
lookinG
TouTilizaTion
our oWn PlaneT
for markeT
insiGhTs
The Technical analysis of sTock and markeT index Time series

There are a couple of anomalous periods, but there is generally a strong positive
correlation. The tornado count drop suggested that the 1970 bottom should have
been more significant than the 1974 bottom, but we should recall that an oil
embargo and a presidential resignation in 1974 helped to push stock prices down
more than might otherwise have been the case.
Figure 12 provides a more up-to-date picture of that same relationship:

FIGURE 10 - A-D Line vs. Inverse of Southern Oscillation Index


The correlation during recent years has been really great, except for the times
when the Fed has put its thumb on the scale. The implication is that the drop in
this SOI indicator since 2010 was supposed to have been associated with a stock
market decline, but the Fed intervened and pushed prices higher. Usually such
an anomaly represents a debt which must one day be repaid. One cannot cheat
Mother Nature forever.
The Southern Oscillation Index will be revisited below, in a discussion of global warming.

Tornados
The National Oceanic and Atmospheric Administration (NOAA) publishes monthly
totals for tornado occurrences in the U.S.2 These data reveal a cyclical nature in the
frequency with which tornados appear in the U.S. It turns out that these data give
about a 2-year leading indication for what stock prices are likely to do.
Figure 11 shows what this relationship between tornados and stock prices looked
MJLFGPSUIFQFSJPEGSPNUP

FIGURE 12 We can see that the big rally during the Internet bubble of the late 1990s followed
a similarly big increase in the tornado count (note that the tornado count plot is
shifted forward in this chart). And with the tornado count declining in 2000-2001,
JUJTVOEFSTUBOEBCMFUIBUBESPQJOTUPDLQSJDFTUPXBSEUIFCPUUPNXPVME
unfold to match the drop in tornados. A similar peak and decline also explain the
2007 top and 2008 bear market.
The period since the 2009 bottom is a little bit harder to explain according to this
NPEFM5IF'FETFPSUTUPTNPPUIPWFSBMMSJQQMFTJOUIFMJRVJEJUZTUSFBNJTQBSUPG
the story, as is a big anomaly in the tornado record during April 2011, when two
separate outbreaks hit the U.S. One was April 14-16, 2011, and it caused a string of
tornados stretching from Mississippi to Virginia. The other was April 25-28, 2011,
XIJDILJMMFEQFPQMF4. Since April 2011, the tornado totals have been lower
than normal, which suggests that if stock prices continue to correlate strongly with
tornadic activity, then we are likely to see a corrective period for stock prices.
At this degree of graphical resolution, it is still a lumpy looking correlation. Adding
BEEJUJPOBMTNPPUIJOHUPBNPOUIUPUBM BTTIPXOJO'JHVSF DMFBOTJUVQ
some more:

FIGURE 11 IUUQXXXODEDOPBBHPWTUPSNFWFOUTGUQKTQ
IUUQXFCBSDIJWFPSHXFCIUUQXXXTQDOPBBHPWQSPEVDUTXBUDIXXIUNM
4
IUUQXXXTQDOPBBHPWDMJNPUPSOGBUBMUPSOIUNM

2


2014 . Issue 68

55

raising interest rates, thereby further weakening the economy and the stock
NBSLFU4JODF UIFEFUSFOEFEDPSSFMBUJPOJTCFUUFSBU 
For most of the rest of this period, the advancing and retreating periods for
tornados have been matched 12 months later by corresponding advances
and declines in stock prices. This highlights one of the problems with
SFMZJOHTPMFMZVQPODPSSFMBUJPODPFDJFOUTUPEFUFDUSFMBUJPOTIJQT5IFZBSF
dependent not only upon timing but also upon magnitude of movements,
and sometimes the market does not work that way. Sometimes there is a
different magnitude of response, even though the timing of the direction
change is still there.
FIGURE 13 %PJOH B TJNQMF 1FBSTPOT $PSSFMBUJPO $PFDJFOU GPS UIF FOUJSF QFSJPE
shown brings a misleadingly high +0.88. It is misleading because there
is an upward slope to both series, and that throws off a simple correlation
DPFDJFOUDBMDVMBUJPO
If we detrend both sets of data by using a linear regression, we can calculate
FBDIEBUBQMPUTEFWJBUJPOGSPNUIBUMPOHUFSNUSFOE UIFSFCZUBLJOHBXBZ
UIFFFDUPOUIFDPSSFMBUJPODPFDJFOUCSPVHIUCZFBDIQMPUTVQXBSETMPQF
Figure 14 shows that adjustment:

Global WarminG
The Southern Oscillation Index (SOI) discussed above also plays a role in
global warming, with a far closer correlation to temperature data than
is found with items like atmospheric carbon dioxide. Figure 15 provides a
DPNQBSJTPO CFUXFFO UIBU 40* JOEJDBUPS TIPXO BCPWF  BOE /"4"T (MPCBM
-BOE0DFBO5FNQFSBUVSF*OEFY

FIGURE 15 -

Oscillation Index

FIGURE 14 - SP500 vs. Detrended 3-Year Tornado Count ,

5IFDPSSFMBUJPODPFDJFOUJTPOMZ  UIBOLTNPTUMZUPBCJHBOPNBMZJOUIF


1970s. There should have been an upturn in the 1970s according to the tornado
NPEFM CVUUIF"SBCXPSMETSFTQPOTFUPUIF64IFMQJOH*TSBFMJOUIFXBS
DBVTFEBOPVUTJ[FEEPXOXBSESFTQPOTFJOUIF%+*"5IJTXBTFYBDFSCBUFECZ
'FEFSBM3FTFSWFSFTQPOEJOHUPUIFJOBUJPOBSZFFDUTPGUIBUPJMQSJDFIJLFCZ

56

We can easily see that the biggest part of the rise in global temperatures
has been since the 1970s, a period that has also seen this SOI indicator rising
JOXBZTOPUTFFOCFGPSFUIFT4PIBWJOHNPSF&M/JPZFBSTJTSFMJBCMZ
associated with higher global average temperatures (AKA global warming).
5IBUTBMMSFBMMZJOUFSFTUJOHGPSNFUFPSPMPHJTUT CVU'JHVSFIJHIMJHIUTXIZJUJT
important to those who use technical analysis for stock prices:

The idenTificaTion
and
of balanced
ladder levels
in
lookinG
TouTilizaTion
our oWn PlaneT
for markeT
insiGhTs
The Technical analysis of sTock and markeT index Time series

FIGURE 18 -

FIGURE 16 -

1980-2012

(MPCBM UFNQFSBUVSF EBUB IBWF B QSFUUZ OJDF DPSSFMBUJPO UP UIF BQQSFDJBUJPO JO
stock prices during the 20th century. It is not a perfect correlation, but it is wellcorrelated enough over this long time span to at least be worth knowing about.

5IF1FBSTPOT$PSSFMBUJPO$PFDJFOUGPSUIFZFBSQFSJPETIPXOJT  
showing the strong positive correlation. The big rise in the stock market
during the 1980s and 1990s matched a general rise in global average
UFNQFSBUVSFT BCPVU  ZFBST FBSMJFS  &WFO UIF NBKPS EJQT IBWF CFFO PO
schedule, although the magnitude of the temperature dips has not always
NBUDIFEUIFNBHOJUVEFPGUIFTUPDLNBSLFUTSFTQPOTF5IFCPUUPNPGUIF
 DSBTI XBT  ZFBST BGUFS B CPUUPN JO HMPCBM UFNQFSBUVSFT TIPXO BT
CFJOHBMJHOFEJOUIJTDIBSU EVFUPUIFZFBSPTFUPGUIFQMPU
5IFTUPSZJT
UIFTBNFXJUIUIFCPUUPN5IFTUPDLNBSLFUTEJQJOXBTNVDI
larger than this data implied it should have been, which just goes to show
what a big anomaly that financial crisis actually was.
The correlation since 2007 has not been as good as at other times, which
TVHHFTUTUIBUUIF'FEFSBM3FTFSWFIBTCFFOIBWJOHBOPVUTJ[FEJNQBDUPOUIF
NBSLFUTJODF%S#FSOBOLFUPPLPWFSBTDIBJSNBO*GUIF'FEFWFSSFUVSOTUP
BQPTUVSFPGIBWJOHBMFTTPWFSUJOVFODFVQPOUIFTUPDLNBSLFU XFTIPVME
expect the correlation to improve again as natural market forces dominate
rather than governmentally induced artificial forces.

FIGURE 17 -

#ZTIJGUJOHGPSXBSEUIFHMPCBMUFNQFSBUVSFEBUBCZZFBST BTTIPXOJO'JHVSF
17, we can clean up the slight correlation mismatch between this temperature
EBUBBOEUIFCFIBWJPSPGUIF415PTBZJUNPSFTJNQMZ WBSJBUJPOTJOHMPCBM
average temperatures have been LEADING the movements of stock market
CZBCPVUZFBST BUMFBTUEVSJOHUIFQBTUDFOUVSZQMVTGPSXIJDIEBUBFYJTU
Zooming in more closely, we can see how this relationship has acted over the
QBTUEFDBEFTJO'JHVSF

Now, here is an ironic twist to this relationship. There are many people who
are concerned about the risks to the planet (and humanity) from global
warming, and they are advocating for taking steps which they believe will
halt the warming trend and cool down the planet to some level that they
believe is more optimal. If one supposes that they are right, and if humanity
could somehow find a way to bring about a cooling trend for global average
temperatures over a period of several years, then the implication of the
correlation shown above is that such a change would also bring about a
multi-year bear market for stock prices. So if one is both an investor and
an opponent of global warming, then that involves mentally rooting for two
PVUDPNFTXIJDIBSFTUBUJTUJDBMMZJODPOJDUXJUIFBDIPUIFS

2014 . Issue 68

57

'JHVSF  TVCTUJUVUFT /"4"T UFNQFSBUVSF JOEFY EBUB XJUI UIF 4PVUIFSO
Oscillation Index (SOI) data discussed above. It is not a perfect correlation,
but it is at least good enough to show is that there is something going on
there. The big growth years for stock prices in the 1980s and 1990s matched
BQFSJPEPGQSFEPNJOBOUMZ&M/JPDPOEJUJPOT GBDUPSJOHJOUIBUZFBSMBH5IF
leveling off of the SOI indicator over the past decade has correctly foretold the
leveling off of the stock market over the same time period, again once we
BEKVTUGPSUIFZFBSMBH

FIGURE 20 -

FIGURE 19 - SP500 vs. Inverse Southern Oscillation Index,


The most recent data from the Southern Oscillation Index suggest that a decline
lies ahead for stock prices, and it is modeled as a decline of greater magnitude than
the one which killed the Internet bubble in 2000.
It is not just in the most recent decades that this phenomenon has been working.
Figure 20 provides a picture of that comparison between the Southern Oscillation
Index data (shifted forward) and the behavior of the Cowles Index (precursor to
UIF41
GSPNUP
Notice that the peak in 1929 and the final bottom in 1942 occurred right on
schedule. So did the post-WWI depression bottom in 1921, which led to the boom
years of the 1920s.
And it is not just in stock prices where we see this correlation. It also appears in
corporate profits5 , as shown in Figure 21.
The relationship is not perfect, especially at times when the government has

IUUQXXXCFBHPWJ5BCMFJ5BCMFDGN 3FR*%TUFQ 5BCMF MJOF UIFOEJWJEFECZ(%1


IUUQXXXHQPHPWGETZTQLH$)3(TISHIUNM$)3(TISHIUN
7
GUQTJEBETDPMPSBEPFEV%"5"4&54/0""(
5
6

58

FIGURE 21 BHSFBUFSJOVFODFPOUIFFDPOPNZ5IF'FEFSBM3FTFSWFJOUIFMBUFTXBT
USZJOH UP QSJDL UIF *OUFSOFU CVCCMF  BOE UP HIU JOBUJPO XIJDI XBT OPU ZFU
evident but which was thought to lie over the horizon6 . So corporate profits
BTBQFSDFOUBHFPG(%1TUBSUFEGBMMJOHBIFBEPGUIFQPJOUJOUJNFXIFOUIF
Southern Oscillation Index said such a drop should have started. But the
bottom for corporate profits in 2002 was right on schedule.
In a similar way, corporate profits started falling in 2006, ahead of this
TDIFEVMF BOEBUBUJNFXIFOUIF'FEFSBM3FTFSWFXBTBUUFNQUJOHUPQSJDL
the housing bubble. The Fed succeeded in that effort, and then in 2008
when this Southern Oscillation Index model said profits should fall, the
EFTDFOUXBTNBHOJFE1SPUTIBWFCFFOSFCPVOEJOHTJODFUIFCPUUPNJO

The idenTificaTion
and
of balanced
ladder levels
in
lookinG
TouTilizaTion
our oWn PlaneT
for markeT
insiGhTs
The Technical analysis of sTock and markeT index Time series

2PG5IFDPOUJOVFESFCPVOEJOQSPUTBTBQFSDFOUBHFPG(%1EVSJOH
2011-12 comes at a time when this model says that they should still be under
downward pressure, and that continued rebound is likely a product of the
'FETFBTZNPOFZQPMJDJFT 2&  FUD
DSFBUJOHGVSUIFSJNCBMBODFTJOUIF
financial markets.

arcTic ice
5IF/BUJPOBM4OPXBOE*DF%BUB$FOUFSIBTCFFOQVCMJTIJOHEBUBPOCPUIUIF
area and the extent of arctic sea ice since 19787. Not surprisingly, the ice
waxes and wanes on an annual cycle, with the peak for both ice area and ice
extent usually coming in the month of March, as seen in Figure 22.

FIGURE 23 - March Arctic Ice Area vs. Real GDP


ZFBSGPS(%1"OEUIJTJOGPSNBUJPOXBTBWBJMBCMFBTPGUIFQVCMJDBUJPOPGUIF
March arctic ice extent number all the way back in April. Sometimes one
really can get the answers ahead of time.

conclusions

FIGURE 22 - Total Arctic Ice Area, March Annual Maxima Highlighted


The dots highlight the values observed in March of each year, the month
which usually sees the largest ice area for each winter. By taking just that
March ice area data and ignoring the rest, we find an interesting correlation
UPFDPOPNJDEBUB'JHVSFJOWFSUTUIFTDBMJOHPOUIBU.BSDIJDFBSFBEBUB 
BOEDPNQBSFTJUUPSFBM JOBUJPOBEKVTUFE
(%1HSPXUI
With the scaling for the ice area inverted in this chart, this chart reveals the
DPSSFMBUJPOUP(%1.PSFBSDUJDTFBJDFJTBTTPDJBUFEXJUIMPXFS PSOFHBUJWF

(%1 HSPXUI 5IF ZFBST PG IJHIFS (%1 HSPXUI UFOE UP CF BTTPDJBUFE XJUI
warming years when the ice area is smaller. We do not yet have the final
BOOVBMOVNCFSGPSSFBM(%1HSPXUI CVUHJWFOUIBU.BSDITBXBO
increase in arctic ice extent over 2011 (shown as a drop on the inverted chart
scaling), it is reasonable to conclude that 2012 will turn out to be a subpar

The relationship of the financial world to the geophysical world has a more
significant linkage than many believe, if we are to accept the message
of the data presented above. The linkage between the financial markets
and sunspots, rainfall, temperature, and other data likely runs through
the agricultural sector, with better or worse crop yields likely affecting the
financial markets more than is appreciated by some analysts. There is even
BO BSNBUJWF TDJFOUJD CBTJT GPS UIF SFMBUJPOTIJQ CFUXFFO TVOTQPUT BOE
SBJOGBMM BTSFWFBMFECZUIFSFTFBSDIPG%BOJTIBUNPTQIFSJDSFTFBSDIFS)FOSJL
Svensmark. To overly simplify his findings: More sunspots equals less cosmic
rays hitting the upper atmosphere, which then equals less rain, which affects
crop production, food prices, wealth, liquidity, and ultimately the financial
markets. The bountiful stock market returns of the 1980s and 1990s were
associated with a huge solar and atmospheric anomaly, causing a big upward
deviation from the bigger scheme of how average global temperatures look
PWFSUIFSFBMMZMPOHSVO %JTDPWFS.BHB[JOF

Even if we cannot establish a causal relationship, we can at least accept the
overwhelming evidence of co-relationship, and infer that there must be
some underlying causal relationship. Better still, we can use that information
to aid in market forecasting.

2014 . Issue 68

59

Whenever interesting correlations are found between seemingly unrelated


data sets, someone always pipes up with the old saying about Correlation
does not mean causation. As market technicians, we should not be deterred
from using data where causation cannot be established; what we seek is
reliable indication. If an indicator or data series can provide a reliable enough
indication ahead of time BCPVUXIBUJTDPNJOHGPSTUPDLQSJDFT (%1 PSBOZ
other useful piece of economic data, then it is worth listening to, even if one
cannot establish exactly what the causation may be.
If one is willing to approach atmospheric and climate data with an open mind,
then one can learn to get messages ahead of time about what is coming for stock
QSJDFT  JOBUJPO  VOFNQMPZNFOU  BOE PUIFS OBODJBM EBUB 5IF SFDFOU DPPMJOH
USFOEGPSUIFHMPCBMBWFSBHFUFNQFSBUVSFTVHHFTUTUIBUUIFOFYUZFBSTXJMMCF
a period of correction for the stock market.

references
Bureau of Economic Analysis
http://www.bea.gov/iTable/iTable.cfm?ReqID=9&step=1
Cycles Research Institute
http://cyclesresearchinstitute.org/cycles-history/chizhevsky1.pdf
Discover Magazine
http://discovermagazine.com/2007/jul/the-discover-interview-henriksvensmark#.UN6QbawVXB0
Government Pringing Office
http://www.gpo.gov/fdsys/pkg/CHRG-108shrg96536/html/CHRG108shrg96536.htm
Kessler, Billy
http://faculty.washington.edu/kessler/occasionally-asked-questions.html#q2
National Oceanic and Atmospheric Administration
ftp://ftp.ngdc.noaa.gov/STP/SOLAR_DATA/SUNSPOT_NUMBERS/MONTHLY
National Oceanic and Atmospheric Administration
http://www.ncdc.noaa.gov/stormevents/ftp.jsp
Snow and Ice Data Center, University of Colorado Boulder
ftp://sidads.colorado.edu/DATASETS/NOAA/G02135/

60

The idenTificaTion and uTilizaTion of balanced ladder levels in


The Technical analysis of sTock and markeT index Time series

Parameter-results stability:
A New Test of Trading Strategy Effectiveness
connors research

bioGraPhies
Larry Connors
How
Markets Really Work, Short Term Trading Strategies That Work, High Probability ETF Trading,
Guidebook High Probability Trading with Multiple Up & Down Days.

ETF Trading with Bollinger Bands, Options Trading with ConnorsRSI, Trading Leveraged
ETFs with ConnorsRSI, and ETF Scale-In Trading.

absTracT
This study provides the results of a trading strategy that enters positions
when a stock becomes oversold and exits when the oscillator used to define
the oversold condition returns to a neutral level. A method of analyzing test
results from a practitioners perspective will also be presented. This practical
approach to strategy analysis could be adopted by traders seeking to verify
their strategy has a high probability of being as profitable in the future as it
was when applied to back tested data.
Few studies address short-term trading strategies from a traders perspective.
Because the traders perspective is different from the researchers perspective,
there is a need for new ways to evaluate the effectiveness of trading
strategies. While the Sharpe Ratio, excess returns and other statistical tests
offer important information, parameter-results stability is also important.
Parameter-results stability provides traders with a means to evaluate the
robustness of the general idea underlying the trading strategy. This paper
explains the relationship between parameter selection and strategy results
while demonstrating that the relationship should be predictable.
Parameters are the numeric variables of a rule that is part of a trading strategy.
For example, many strategies rely on moving averages and the number of days
or months used to calculate the moving average is a parameter. It is possible to

test a wide range of parameters for a given strategy to find one that works, a
problem known as data mining. Strategies based on data mining are unlikely
to work in the future since they are finely tuned to the past. A sound strategy
with a high probability of future success should be based on parameters that
are selected with a rational and logical approach. The parameters defining
a sound strategy should also exhibit a degree of stability, meaning a small
change in the parameter will result in only a small change in the test results.
To extend the moving average example, a sound strategy would be one that
showed little variability in performance for a moving average calculated for 10
months, 9 months, 8 months, 11 months and 12 months. The small change in
the value of the calculation should have a minimal impact on profitability. If
the 10-month moving average is profitable but the nearby parameters of 8, 9,
11 and 12 months are not, the strategys success is most likely due to luck rather
than a sound idea. This strategy would be unlikely to be profitable in the future.
A parameter can be considered to be stable when small changes in its value
have only a small impact on the strategys performance. While the topic
of parameter stability should be important to traders, many traders use
optimization to select strategy parameters. Optimization involves finding
the parameter that meets a limited objective. Optimization might be used to
maximize the percentage of winning trades, for example. Optimized strategies

may not enjoy stable results in the future because they are finely tuned to the
price history used in the testing. Rather than optimization, traders should
consider testing that offers insights into how small changes in the strategys
parameters impact key performance metrics. If small changes in parameters
result in relatively small changes in performance, the strategy should be more
likely to be stable in the future since this approach is not finely tuned to the past.

when a stock becomes oversold and exits when the oscillator used to define
the oversold condition returns to a neutral level. A method of analyzing test
results from a practitioners perspective will also be presented. This practical
approach to strategy analysis could be adopted by traders seeking to verify
their strategy has a high probability of being as profitable in the future as it
was when applied to back tested data.

This paper demonstrates an approach to testing for parameter stability that a


trader can incorporate into their strategy development process.

ii. daTa and meThodoloGy

i. liTeraTure review
Traders and researchers have different perspectives and those differences are
apparent in their published work. Short-term trading is largely unexplored in
the academic literature. Conrad and Kaul (1998) ignore short-term strategies
after demonstrating that trend following fails to capture profits with a oneweek holding period. This result is the most statistically significant finding in
their landmark paper but has been overlooked by most researchers. Instead
the focus has been on the papers conclusion that trend following strategies
can deliver statistically significant profits over time periods ranging from three
to twelve months.
The fact that trend following failed to work over one-week time periods leads
to the question of whether contrarian, or mean reversion strategies, could be
profitable in the short term. This question has not been the subject of much
research.
Practitioners have devoted some attention to the question of what works in
short-term trading but it is important to note there are significant differences
between academic research and practitioner-published research. There is
generally no peer review of practitioners work and there is no requirement
to include test results in the work. This leads to well-selected examples being
offered as the only proof of many ideas or test results that fail to cover a
sufficient time period. A final difference is that while academic literature will
include a number of papers explaining ideas that fail to work, practitioners will
generally only publish successful ideas.
Practitioner studies related to mean reversion strategies with a holding period
of approximately one week do not seem to have been widely published.
Among notable exceptions are Connors and Raschke (1996) and Connors and
Alvarez (2008). Although a number of other books have been written about
short-term strategies, they tend to rely on the well-selected example as proof
of success rather than quantified testing with a large sample size.
This study provides the results of a trading strategy that enters positions

62

To determine if mean reversion can be profitably exploited in the short term we


consider stocks that are members of the S&P 500. The data set uses the actual
components of the index at any given time and accounts for changes in the
composition of the index over time, thus making it free of survivorship bias.
Daily data beginning January 1, 2001 and running through March 31, 2014 is
used in the test.
Every day, each stock is examined to determine whether it is oversold or not.
Oversold stocks are simply defined as those that have fallen below a lower
bound on an oscillator. The lower bound will be defined in each test and the
oscillator selected will be the ConnorsRSI, which will be referenced as CRSI in
tables. More detail on this oscillator can be found in the Appendix.
ConnorsRSI was selected because it incorporates information about the
momentum and magnitude of recent price changes along with the duration
of the price trend. Its construction makes it responsive to the market action
while minimizing the likelihood of whipsaw signals which result in numerous
short-term trades that increase trading costs as an indicator moves within a
narrow range above and below the oversold level.
Quantified testing has demonstrated that ConnorsRSI is a reliable short-term
indicator. In this case, reliable means that the indicator has a high probability
of providing a profitable trade signal. This definition is practical rather than
academic, focusing on how useful the indicator would be to traders.
To determine the reliability of ConnorsRSI, Connors et al. (2014) tested
approximately 6,000 highly liquid stocks from January 2, 2001 through
February 28, 2014. Each day, each stock was assigned to one of twenty different
buckets corresponding to its ConnorsRSI value at the close of trading on that
day. Stocks with ConnorsRSI values of less than 5 were assigned to the 0 bucket.
Stocks with ConnorsRSI values greater than or equal to 5 and less than 10 were
assigned to the 5 bucket, etc. Next, for each of the 20 buckets, subsequent fiveday returns of each stock were calculated. The returns were then averaged for
all five-day periods. More than 4.9 million different trading opportunities were
evaluated with this process. The results are summarized in Table I.

Parameter-results stability
A new Test of Trading Strategy Effectiveness

connorsrsi
bucket

5-day return

30

ConnorsRSI and the 2-period Relative Strength Index (RSI). The 2-period RSI, or
RSI(2), is more responsive to price changes than the traditional 14-period RSI.
Figure 2 summarizes the test results and shows that ConnorsRSI identifies
trading opportunities that have a larger average return. This is true for both
oversold and overbought stocks.

Figure 1 displays the frequency of occurrence for each bucket. It can readily
be seen that ConnorsRSI rarely reaches extreme values. ConnorsRSI readings
below 5 occurred just 0.63% of the time in testing. Readings below 15, the first
three buckets shown in Table 1, indicate a stock is oversold and occurred 6.46%
of the time. Readings above 85, the last three buckets in Table 1, indicate a
stock is overbought and occurred 6.39% of the time in the test.
Testing has demonstrated that ConnorsRSI is reliable and testing has also
revealed that trade signals will occur infrequently, an important consideration
for traders who have limited resources. Additional testing is needed to
determine whether or not ConnorsRSI offers any benefit relative to more widely
followed indicators. This test was completed by Radtke (2014) and compared
the five-day average return for buckets of stocks with identical readings of

ConnorsRSI is bounded by 0 on the downside and 100 on the upside. Readings


below 15, 10 and 5 will be used to define the oversold level. These levels were
selected in keeping with the traditional approach to interpreting indicators
that levels near the lower bound are generally considered to be oversold.
Once a stock becomes oversold, it will be considered a buy if the price declines

2014 . Issue 68

63

the next day. Trades will be simulated assuming entries are made 2, 3 and
5% below the previous days close. All trades will be closed when ConnorsRSI
returns to a neutral level. Two levels, values of 50 and 70, will be tested.
By using a variety of parameters we will be able to evaluate whether or not
the underlying idea that oversold stocks can profitably be traded on the long
side is valid. We would expect to see only small changes in the results as the
parameters change if the idea is valid. This test will focus on the percentage
of winning trades. A parameter will be considered stable if small changes in
the parameter have only a small impact on the percentage of winning trades.
In summary, testing will be done on the setup condition, the entry level
and the exit to identify how robust the strategys rules are to changes in the
parameters. Results will be quantified with performance metrics to analyze
whether or not parameter selection plays a predictable role in a well-designed
trading strategy.
Performance metrics will include the total number of trades, the percentage
of winning trades and the average percentage return of all trades. These are
metrics that are of significance to traders.
Risk is also an important metric and that is incorporated, from a traders
perspective, into the average percentage return of all trades. A high percentage
return indicates that the size of the winning trades is cumulatively larger than
the losing trades. The Sharpe ratio and profit factor are also provided. To find the
Sharpe ratio, the average percentage return and standard deviation of returns
is calculated. These two figures are annualized. The risk free rate of return is
subtracted from the annualized average return and this value is then divided
by the annualized standard deviation of returns to obtain the Sharpe ratio. The
profit factor is the dollar amount of the profit from winning trades divided by
the total dollar value of all losing trades. These values were calculated with
AmiBroker, the software used for this test.

iii. resulTs
In this section, we present the results of back testing. We expect to see that
more restrictive strategy rules decrease the frequency of trading activity
but increase the percentage of winning trades. More restrictive exits should
increase both the percentage and average size of winning trades. If the
strategy is based on a sound idea, these changes should occur in a nearly
linear fashion and the results of all trading strategy variations should be fairly
consistent. From a traders perspective, there is no need to develop an objective
definition of nearly linear and fairly consistent. Since the goal of a trader
is to make money, consistent profitability is the only standard the results
must meet. Unstable results (for example, a single parameter set that results

64

in unprofitable performance) would render the entire strategy unusable no


matter what statistical tests indicated.
Our expectation that restrictive rules decrease trading frequency and increase
profitability might seem to be intuitive but there is no documentation in the
literature of the idea that parameter selection affects results in a predictable
way when the strategy is based on a sound idea rather than data mining. For
the most part, the literature contains data for trading strategies based on crosssectional analysis seeking to identify factors responsible for excess returns. This
paper addresses one of the biggest shortcomings in trading and financial
research - intuitive ideas associated with trading in pursuit of profits are not
exhaustively analyzed.
To begin with, we need to establish that the trading strategy works from the
perspective of a trader. The strategy is detailed below:
A buy Setup occurs when the stock is oversold, which is defined with these
rules:
1. The stock must be a member of the S&P 500 index.
2. The value of ConnorsRSI (3,2,100) is less than X where X = 5, 10 or 15.
After a Setup is completed, we Enter a trade that is equal to the full position
size (100% of the equity allocated to a position under this strategy) with a
limit order by:
3. Submitting a limit order to buy the stock at a price Y% below the closing
price on the day the setup is completed, where Y is 2, 3, or 5%.
After we Enter a position, we will Exit using one of the following methods,
selected in advance:
4. The stock closes with a ConnorsRSI value greater than Z where Z is 50
or 70.
An order to close an open trade will be executed at the close of trading on the
day the Exit signal triggers.
To begin testing, we will review results when the value of X in the setup
condition (Rule 2) changes. Table II summarizes those results. In order to
complete the test with differing values of X, we will need to select the other
parameters and hold them constant. We will use 5% as the limit entry price (Y
= 5 in Rule 3) and a ConnorsRSI reading of 70 to trigger a sell (Z = 70 in Rule 4).
Results shown in Table II confirm expectations. A less restrictive setup condition
results in more trades and a lower probability of success for any given trade.

Parameter-results stability
A new Test of Trading Strategy Effectiveness

#
Trades

Avg %
Profit /
Loss

Avg
% of
Trades /
Winners
Year

5,690

1.96%

67.1%

2,789

3.14%

941

4.72%

Sharpe
Ratio

Profit
Factor

CRSI
Buy

#
Trades

Avg %
Profit /
Loss

Avg
% of
Trades /
Winners
Year

429.4

1.18

1.78

15

941

4.72%

72.1%

68.7%

210.5

1.69

2.64

10

947

3.95%

72.1%

71.0

2.73

3.41

1,674

Sharpe
Ratio

Profit
Factor

CRSI
Buy

71.0

2.73

3.41

70

70.6%

71.5

3.80

4.10

50

2.51%

68.0%

126.3

1.66

1.97

70

1,683

2.20%

67.6%

127.0

2.54

2.30

50

2,493

1.64%

66.7%

188.2

1.16

1.51

70

2,508

1.56%

67.3%

189.3

2.09

1.92

50

One factor not shown in the table is that these are short-term trades with an
average holding period of less than five trading days. This general holding
period is applicable to all test results presented in this section.

Sharpe
Ratio

Profit
Factor

CRSI
Buy

Once again, a clear pattern emerges. Restrictive rules reduce the frequency
of trading relative to less restrictive parameters and more restrictive rules
generally increase the probability of success. The least restrictive entry (a limit
price 2% below the previous close or Y = 2 in Rule 3) enjoys slightly more
winning trades than the more restrictive exit for that entry parameter. This fact
does not materially impact the conclusions of the testing since the relationship
between parameter selection and results is still relatively stable. Of all the
parameters tested, the exit parameter has the least impact on trade results.

188.2

1.16

1.51

70

All eighteen possible permutations of the trading strategy are shown in Table V.

68.0%

126.3

1.66

1.97

70

72.1%

71.0

2.73

3.41

70

Table II holds the ConnorsRSI buy level constant at 5 (X = 5 in Rule 2) and retains
the same exit threshold of Z = 70 from the previous test. The percentage of the
limit price used to enter a trade is varied in this test (Y in Rule 3).

#
Trades

Avg %
Profit /
Loss

Avg
% of
Trades /
Winners
Year

2,493

1.64%

66.7%

1,674

2.15%

941

4.72%

Results again confirm expectations. There are more trade signals with a less
restrictive entry parameter but those signals have a lower probability of
success when compared to signals given by more restrictive signals.
In Table IV, a summary of the impact of the exit parameter is presented. This
test holds the ConnorsRSI buy level constant at 5 (X = 5 in Rule 2) but varies
both the percentage of the limit price used to enter a trade (Y in Rule 3) and
the value of the threshold trigger used in the exit rule (Z in Rule 4). Results are
sorted by the value of the entry limit price and then the sell threshold.

Table V: All trade results. This table shows the results of all permutations for all
parameters.
Table V is sorted by the percentage gain of the average trade. Strategies with
the most stringent setup conditions, entry rules and exit criteria consistently
outperform strategies with less strict criteria. All strategies display a
remarkably consistent percentage of winning trades and trading frequency is
directly correlated with the strictness of the strategy parameters.
Using the results from Table V, we can answer the question of whether or not this
strategy is useful to traders. We can conclude that this strategy is robust since
small changes in parameters result in small changes in performance. This was
confirmed by running a series of different tests that vary individual parameter
and observing a high degree of results stability. With high parameter-results
stability, we conclude the strategy is robust and tradable.
It is important to note that actual performance would depend upon trading
costs. Trading costs may vary for each individual but French (2008) found that

2014 . Issue 68

65

#
Trades

Avg %
Profit /
Loss

Avg
% of
Trades /
Winners
Year

941

4.72%

72.1%

947

3.95%

2,789

degree of parameter-results stability is present in all strategies they use.


Sharpe
Ratio

Profit
Factor

CRSI
Buy

71.0

2.73

3.41

70

70.6%

71.5

3.80

4.10

50

3.14%

68.7%

210.5

1.69

2.64

70

1,674

2.51%

68.0%

126.3

1.66

1.97

70

2,828

2.43%

68.1%

213.4

2.29

2.70

50

1,683

2.20%

67.6%

127.0

2.54

2.30

50

5,690

1.96%

67.1%

429.4

1.18

1.78

70

5,760

1.80%

67.9%

434.7

1.19

2.06

70

5,916

1.74%

68.0%

446.5

1.83

1.88

50

2,493

1.64%

66.7%

188.2

1.16

1.51

70

2,508

1.56%

67.3%

189.3

2.09

1.92

50

5,875

1.48%

67.7%

443.4

1.79

2.21

50

12,430

1.31%

67.7%

938.1

0.95

1.60

70

8,919

1.25%

67.8%

673.1

0.97

1.74

70

13,027

1.10%

67.3%

983.2

1.48

1.70

50

9,135

1.07%

67.0%

689.4

1.53

1.93

50

19,582

0.95%

67.6%

1,477.9

0.76

1.51

70

20,730

0.80%

66.7%

1,564.5

1.26

1.57

50

the average cost of active trading is 0.67%. It is reasonable to assume that the
costs would be lower now since commission rates have dropped since that study
was published. However, using the value French reported, each variation of this
trading strategy would be profitable for a trader with the lowest performing
variation of the strategy offering an edge of 0.13% after trading costs under
Frenchs model. The best performing variation would have an historical edge of
4.05% per trade.

aPPendix: ConnorsRSI

ConnorsRSI combines three components. On its own, each element can be


useful to traders with some degree of predictive ability based on profitability:
- Price Momentum: The Relative Strength Index (RSI) is an excellent way
to measure price momentum, i.e. overbought and oversold conditions. By
default, ConnorsRSI applies a 3-period RSI calculation to the daily closing prices
of a security. We will refer to this value as RSI(Close,3).
- Duration of Up/Down Trend: When the closing price of a security is lower
today than it was yesterday, we say that it has closed down. If yesterdays
closing price was lower than the previous days close, then we have a streak
of two down close days. Our research has shown that the longer the duration
of a down streak, the more the stock price is likely to bounce when it reverts
to the mean. Likewise, longer duration up streaks result in larger moves down
when the stock mean reverts. In effect, the streak duration is another type of
overbought/oversold indicator.
The problem is, the number of days in a streak is theoretically unbounded,
though we could probably place some practical limits on it based on past
experience. For example, we might observe that there have been very few
instances of either an up streak or a down streak lasting for more than 20
days, but that still doesnt get us to a typical oscillator-type value that varies
between 0 and 100.
The solution is two-fold. First, when we count the number of days in a streak,
we will use positive numbers for an up streak, and negative numbers for a
down streak. A quick example will help to illustrate this:
DAY

CLOSINGPRICE

$20.00

$20.50

$20.75

$19.75

-1

iv. conclusions

$19.50

-2

This paper demonstrates that a sound trading strategy should provide results
that vary slightly when the strategy parameters are varied by a small amount.
In doing so, this paper adds to the body of knowledge of technical analysis by
providing a framework to test a trading strategy for robustness. This type of
testing would be conducted by practitioners who should ensure that a high

$19.35

-3

$19.35

$19.40

66

STREAK DURATION

Parameter-results stability
A new Test of Trading Strategy Effectiveness

The closing price on Day 2 is higher than on Day 1, so we have a one-day up


streak. On Day 3, the price closes higher again, so we have a two-day up streak,
i.e. the Streak Duration value is 2. On Day 4, the closing price falls, giving us a
one-day down streak. The Streak Duration value is negative (-1) because the
price movement is down, not up. The downward trend continues on Days 5
and 6, which our Streak Duration reflects with values of -2 and -3. On Day 7 the
closing price is unchanged, so the Streak Duration is set to 0 indicating neither
an up close nor a down close. Finally, on Day 8 the closing price rises again,
bringing the Streak Duration value back to 1.

returns, or about 5 months of price history. To reiterate, large positive returns


will have a Percent Rank closer to 100. Large negative returns will have a
Percent Rank closer to 0.

The second aspect of the solution is to apply the RSI calculation to the set of
Streak Duration values. By default, ConnorsRSI uses a 2-period RSI for this
part of the calculation, which we denote as RSI(Streak,2). The result is that
the longer an up streak continues, the closer the RSI(Streak,2) value will be
to 100. Conversely, the longer that a down streak continues, the closer the
RSI(Streak,2) value will be to 0.

The result is a very robust indicator that is more effective than any of the three
components used individually. In fact, ConnorsRSI also offers some advantages
over using all three components together. When we use multiple indicators to
generate an entry or exit signal, we typically set a target value for each one.
The signal will only be considered valid when all the indicators exceed the
target value. However, by using an average of the three component indicators,
ConnorsRSI produces a blending effect that allows a strong value from one
indicator to compensate for a slightly weaker value from another component.
A simple example will help to clarify this.

Thus, we now have two components -- RSI(Close,3) and RSI(Streak,2) -- that


both use the same 0-100 scale to provide a perspective on the overbought/
oversold status of the security were evaluating.
- Relative Magnitude of Price Change: The final component of ConnorsRSI
looks at the size of todays price change in relation to previous price changes.
We do this by using a Percent Rank calculation, which may also be referred to as
a percentile. Basically, the Percent Rank value tells us the percentage of values
in the look-back period that are less than the current value.
For this calculation, we measure price change not in dollars and cents, but as a
percentage of the previous days price. This percentage gain or loss is typically
referred to as the one-day return. So if yesterdays closing price was $80.00,
and todays price is $81.60, the one-day return is ($81.60 - $80.00) / $80.00
= 0.02 = 2.0%.
To determine the Percent Rank, we need to establish a look-back period. The
Percent Rank value is then the number of values in the look-back period that
are less than the current value, divided by the total number of values. For
example, if the look-back period is 20 days, then we would compare todays
2.0% return to the one-day returns from each of the previous 20 days. Lets
assume that three of those values are less than 2.0%. We would calculate
Percent Rank as:
Percent Rank = 3 / 20 = 0.15 = 15%

The final ConnorsRSI calculation simply determines the average of the three
component values. Thus, using the default input parameters would give us the
equation:
ConnorsRSI(3,2,100) = [RSI(Close,3) + RSI(Streak,2) + PercentRank(100)] / 3

Lets assume that Trader A and Trader B have agreed that each of the following
indicator values identify an oversold condition:
t34* $MPTF 

t34* 4USFBL 

t1FSDFOU3BOL 

Trader A decides to take trades only when all three conditions are true. Trader B
decides to use ConnorsRSI to generate her entry signal, and uses a value of (15
+ 10 + 20) / 3 = 15 as the limit. Now assume we have a stock that displays the
following values today:
t34* $MPTF 

t34* 4USFBL 

t1FSDFOU3BOL 

t$POOPST34*   

Trader A will not take the trade, because one of the indicators does not meet
his entry criteria. However, Trader B will take this trade, because the two low
RSI values make up for the slightly high PercentRank value. Since all three
indicators are attempting to measure the same thing (overbought/oversold
condition of the stock) by different mechanisms, it makes intuitive sense to
take this majority rules approach.

The default Percent Rank look-back period used for ConnorsRSI is 100, or
PercentRank(100). We are comparing todays return to the previous 100

2014 . Issue 68

67

references
Connors, Laurence and Alvarez, Cesar, 2008, Short Term Trading Strategies That
Work (TradingMarkets, Jersey City, NJ)
Connors, Laurence and Raschke, Linda Bradford, 1996, Street Smarts: High
Probability Short-Term Trading Strategies (M. Gordon Publishing Group, Jersey
City, NJ)
Connors Research, LLC; Connors, Laurence; Alvarez, Cesar and Radtke, Matt,
2014, An Introduction to ConnorsRSI, 2nd edition (TradingMarkets, Jersey City,
NJ)
Conrad, J., and Kaul, G., An Anatomy of Trading Strategies, Review of Financial
Studies (1998) 11 (3): 489-519.
French, Kenneth R., The Cost of Active Investing (April 9, 2008). Available at
SSRN: http://ssrn.com/abstract=1105775
Jordan, Douglas J., and Diltz, J. David, 2003, The profitability of day traders,
Financial Analysts Journal, 59(6), 8594.
Radtke, Matt. (January 2, 2014). Compare ConnorsRSI to RSI2. InTradingMarkets.
com. Retrieved July 25, 2014, from http://www.tradingmarkets.com/
analytics/how-does-connorsrsi-compare-to-rsi2-1583255.html.

68

The idenTificaTion and uTilizaTion of balanced ladder levels in


The Technical analysis of sTock and markeT index Time series

harnessing volatility for Profit Through leverage:


How I Learned to Stop Worrying About EMH and Love Leveraged ETFs

bioGraPhies
G.L. Biff Robillard III CMT. President & Co-Founder, Bannerstone Capital Management.
Biff Robillard is a Chartered Market Technician, portfolio manager and a principal of Bannerstone Capital Management, LLC.
Bannerstone is a registered investment advisor in Deephaven, Minnesota.

Thomas C. Pears

absTracT

Leveraged exchange-traded funds, or LETFs, are financial products


engineered to generate daily returns that are a multiple (2, 3, 2, or 3)
of an underlying indexs daily returns. While these products are generally
reliable in providing amplified daily gains or losses, their long-term returns
can end up being far from the intended multiple of the underlying indexs
long-term returns. This is the result of the ETFs leverage being rebalanced
nightly so that the leverage factor is the same at the beginning of every
day. The rebalancing of LETFs causes the value of the product to decay over
time when the price of the underlying index fluctuates greatly from dayto-day; thus, a strong negative correlation exists between LETF returns and
the volatility of the underlying index. Because of this tendency for LETFs to
underperform in volatile markets, analysis of an LETF investment strategy is
incomplete without serious consideration of the effect of volatility. While we
agree that LETFs can suffer from significant volatility drag, the purpose of
this paper is not to make a case for or against investment in LETFs. Instead,
we wish to present a set of tools that can help investors understand these
financial products more clearly, account for the effect of volatility while
making investment decisions, and ultimately realize substantial gains in
their portfolio given appropriate index returns and volatility. In this paper,
we analyze the relationship between the realized volatility of an index
and the returns of LETFs that are based on that index, creating a visual
representation of LETF return for a given index return as volatility increases.
We then present a real-time oscillator that measures the potential for a
successful investment in LETFs based on a given indexs realized volatility and
the investors expectations for index performance. Using these tools, an LETF
investor is able to make a more complete assessment of market conditions
and determine whether or not LETFs are appropriate investment vehicles
with which to execute a specific investment strategy.

INTRODUCTION
Leveraged exchange-traded funds (LETFs) were first introduced in 2006
(McCall, 2011). By holding swaps, futures, and other derivatives of a given
index, LETF managers are able to create products that provide daily returns
equal to an intended multiple of that underlying index's daily return. Long, or bull, LETFs generate two or three times the underlying index's daily return, while inverse, or bear, LETFs provide a leverage factor of negative two or negative three. Since these financial products are engineered to deliver amplified daily returns, the multi-day return of an LETF will usually differ from the return of the underlying index times the leverage factor. During periods of high volatility, an LETF's return will often fail to achieve the intended multiple. This is because the effects of volatility drag are amplified by leverage. Volatility drag is the price decay that occurs during volatile periods with equal daily percent gains and losses; for instance, if an investment gains 10% and then loses 10% the next day, the overall return is not 0%, but -1%. If you have ever taken the average of your investment's daily returns
and found that your investment actually performed much worse, you know
exactly how detrimental to return this decay can be.
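To make the arithmetic concrete, here is a minimal check (a Python sketch of our own, using only the illustrative 10% figures above):

# Volatility drag: equal-sized percent gain and loss do not cancel.
start = 100.0
after_gain = start * (1 + 0.10)         # +10% day: 110.0
after_loss = after_gain * (1 - 0.10)    # -10% day: 99.0
total_return = after_loss / start - 1   # -1.0%, not 0%
print(f"Two-day return: {total_return:.2%}")   # Two-day return: -1.00%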
LETF underperformance comes from the unique way in which these
products are levered. In order to achieve the same multiple of daily returns
every day, an LETF's leverage needs to be rebalanced in between trading
sessions. Understanding the difference between this type of leverage and
conventional margin is essential to understanding the inherent decay risk of
LETFs. Levering with 50% margin creates a 2X-levered investment initially,
but once the price changes, the leverage changes as well. In this way, a static,
conventional margin loan creates variable leverage that changes from day to
day. Suppose an unlevered index ETF is priced at $100. We buy one share of
this ETF, using $50 of equity and $50 of debt. If the ETF rises to $125 (a 25% gain), our equity will increase by $25 (a 50% gain). We achieved twice the
return of the index, but now we have $75 of equity invested in our $125 ETF;
our leverage factor is now 1.67.
Conversely, LETF managers rebalance leverage between each trading session. In this way, one could consider LETF leverage, adjusted to maintain a particular leverage factor, to be fixed leverage. To achieve fixed leverage, managers increase their exposure when the LETF rises and decrease their exposure when it declines; this can create a buy-high, sell-low pattern that reduces return during high-volatility periods. For instance, when the index goes up during the day, a long LETF's exposure is increased that night, essentially doubling down the trader's directional bet (the same effect would be achieved by buying the index ETF on margin and adjusting the leverage at the end of every trading session to maintain the same leverage ratio). This style of leverage allows investors to stay leveraged as the underlying index trends upwards, similar to the trading technique of pyramiding. However, if the index then reverses, the increased exposure, achieved by re-leveraging at a higher price, works against the investor, and often causes the leveraged ETF to decay while the index experiences non-directional volatility.
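As an illustration of the difference, the following minimal Python sketch mirrors the $100 margin example above and the nightly reset just described (prices are hypothetical):

# Static margin: borrow once; leverage drifts as the price moves.
debt = 50.0
price = 100.0 * 1.25                # the $100 index ETF rises 25% to $125
equity = price - debt               # equity grows from $50 to $75
margin_leverage = price / equity    # now only 1.67X

# Nightly rebalancing: exposure is reset so exposure / NAV stays at the target.
target, nav = 2.0, 100.0
exposure = target * nav             # $200 of index exposure
nav += exposure * 0.25              # the same 25% index move lifts NAV to $150
exposure = target * nav             # reset to $300, restoring 2.0X leverage
print(round(margin_leverage, 2), round(exposure / nav, 2))   # 1.67 2.0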

DAY   ABC PRICE   ABC %     BULL PRICE   BULL %     BEAR PRICE   BEAR %
0     100                   100.0                   100.0
1     105         5.00%     115.0        15.00%     85.0         -15.00%
2     100         -4.76%    98.6         -14.29%    97.1         14.29%
3     105         5.00%     113.4        15.00%     82.6         -15.00%
4     100         -4.76%    97.2         -14.29%    94.4         14.29%
5     105         5.00%     111.7        15.00%     80.2         -15.00%
6     100         -4.76%    95.8         -14.29%    91.7         14.29%
7     105         5.00%     110.1        15.00%     77.9         -15.00%
8     100         -4.76%    94.4         -14.29%    89.1         14.29%
9     105         5.00%     108.6        15.00%     75.7         -15.00%
10    100         -4.76%    93.1         -14.29%    86.5         14.29%

TABLE I - The closing price and daily percent change of ABC, BULL, and BEAR.
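The table can be reproduced with a few lines of code. The sketch below (Python; the tickers and the alternating +5.00%/-4.76% pattern are the hypothetical ones defined in the text) applies each day's index move, times the leverage factor, to the prior close:

# Reproduce Table I: 3X bull and 3X bear LETF prices under an index
# that alternates +5.00% and -4.76% for 10 days.
index_moves = [0.05, -1 / 21] * 5            # -1/21 = -4.7619%, which exactly offsets +5%
abc, bull, bear = [100.0], [100.0], [100.0]
for move in index_moves:
    abc.append(abc[-1] * (1 + move))
    bull.append(bull[-1] * (1 + 3 * move))   # 3X long
    bear.append(bear[-1] * (1 - 3 * move))   # 3X inverse
print(round(abc[-1], 1), round(bull[-1], 1), round(bear[-1], 1))
# 100.0 93.1 86.5  -> BULL down 6.9%, BEAR down 13.5%, as in Table I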

PART I: THE RELATIONSHIP

How does volatility cause LETF prices to decay over time?
To analyze the effect of volatility on LETF return, let us examine a set of
hypothetical data over a 10-day time frame. Say that an ETF with the ticker
symbol ABC tracks an index, and that LETFs BULL and BEAR are 3X long and
3X inverse LETFs of the same index: BULL produces three times the daily
movement of the underlying index, while BEAR gives negative three times the index's movement. At the start of our 10-day window, all three ETFs are valued
at 100. The index then undergoes a trading period of extreme volatility. On day
one, the index gains 5%, and ABC closes at 105; on day two, the index drops
4.76%, and ABC goes back down to 100. This pattern continues for 10 days, after
which the price of ABC is still 100; however, the market values of BULL and BEAR
are significantly lower. The table and graph below present the closing prices and
daily percent changes of ABC, BULL, and BEAR over the 10-day period.
In Table I and Figure 1, we see that although ABC is unchanged at the end
of the 10-day window, BULL has declined by 6.9%, and BEAR has declined
by 13.5%. This decay, the result of volatility drag, has to do with the
mathematical properties of percent gains and losses. When a price rises and
falls by an equal amount over two days, the percent increase on the up day
is greater than the percent decrease on the down day. In Table I, a 5% gain brought ABC to a value of 105, but it only took a percent decrease of 4.76% to bring it back down to 100.

FIGURE 1 - ABC, BULL & BEAR price over the 10-day period.
The relationship between equivalent percent gains and losses is expressed by the following equation:

EQUATION 1:

U + 1 = 1 / (D + 1)

Where:
U = Percent change on the day the price goes up
D = Percent change on the day the price goes down
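As a quick check of this relationship, using the moves from Table I (an illustrative Python snippet):

# If the index falls 4.76% (D = -1/21), the offsetting up-move is:
D = -1 / 21
U = 1 / (D + 1) - 1
print(f"U = {U:.2%}")   # U = 5.00%, so a +5% day exactly undoes a -4.76% day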


This expression describes when a percent decline exactly offsets a percent increase, resulting in a net change of zero. However, this relationship is distorted when leverage is introduced, causing our equation to no longer hold true: the 2-day return is less than zero. This is evident in Table I, where BULL's price increase of 15% was more than offset by the price decrease of 14.29%, even though the two daily changes were simply three times ABC's offsetting percent changes. Also notice how with BEAR, the signs of the percent changes are the opposite of BULL's: it decreases by 15%, and increases by only 14.29%. Because of the relationship we defined above, U will always be greater than the absolute value of D when index return is zero. When inverse leverage is used to engineer a negative multiple of daily returns, the bear LETF's U is derived from the index's D, and vice versa. This means that when the index has static returns over a given time frame, the bear LETF actually drops on down days by a greater percentage than it increases on up days. Because of this, bear LETFs theoretically always perform worse than their bullish counterparts in volatile, non-directional markets (as seen in Figure 1 and Table I).

Our example demonstrates that decay in the value of leveraged ETFs is directly correlated with index volatility: higher volatility leads to higher decay. Now we will explore the mathematical relationship between index volatility and LETF return. To do this, we build equations for volatility and LETF return that are based on our 10-day hypothetical scenario (Figure 1 and Table I). To simplify, let us reduce our reference period to two days. Since the 10-day simulation was simply a 2-day pattern repeated five times, this will not affect the relationship between volatility and LETF return.

Using an independent variable A to represent the daily index price changes, we construct an equation that gives us LETF return in terms of these daily price movements. Equation 2, below, gives LETF return over n number of days when the index follows our 2-day example with a 2-day price change represented by the parameter R. This formula takes the geometric mean of two factors: the percent increase on the first day, and the percent decrease on the second day. Multiplying one plus part one by one plus part two gives us our 2-day return, plus one. Because of the commutative property of multiplication, the order in which the price movements occur does not affect return. Hence, the product of these two halves, minus one, represents the n-day return of an LETF whose underlying index has a return of R and average daily price movements of A.

Fund expenses must also be considered when investing in LETFs, and generally cost 90 to 110 basis points per year. Expenses reduce the net asset value of the LETF every day after close. Thus, the effect of the expenses is not significantly dependent on the index's price movements, and for our purposes we simply subtract the expense ratio for the period, E, from our return equation. We define E with the formula E = e x n / 252, where e is the annual expense ratio of the LETF. For a full explanation of how we derived this formula, see Appendix A.

EQUATION 2:

LETF Return = [(1 + L x (A / 100)) x (1 + L x ((R - A) / (100 + A)))]^(n/2) - 1 - E

Where:
A = The index's price gain on day one
L = The leverage factor of the LETF
n = The number of trading days in the reference period
E = The expense ratio of the LETF for the period
R = Index price change over the 2-day period, as calculated by the formula:
R = 100 x [(1 + r)^(2/n) - 1]
Where:
r = Index return over the entire reference period

What is the relationship between LETF return and volatility?

To model the relationship between LETF return and volatility, we now express index volatility in terms of A when return is R. For this study, we used realized volatility, as opposed to the more widely-used historical volatility. Historical volatility is more appropriate for directionally trending markets, while realized volatility is more appropriate for volatile, non-directional markets; we concluded that the latter would be more appropriate for a study of volatility drag. Nonetheless, we did not find a significant difference between our hypothetical historical volatility and realized volatility results, and we believe that a practitioner wishing to apply our models to an investment strategy could use the historical volatility tool that is available on most financial software. The realized volatility daily formula, as described by the VolX Group (2013), is displayed below.

EQUATION 3:

V = 100 x sqrt( (252 / n) x Sum over t = 1 to n of [Ln(P_t / P_t-1)]^2 )

Where:
V = Realized volatility
252 = The approximate number of trading days in a year
t = A counter representing each trading day
n = Number of trading days in the reference period
Ln = Natural logarithm
P_t = Index price at close on day t
P_t-1 = Index price at close on the day preceding day t

The equation below gives us realized volatility in terms of daily price change, A, when the index's 2-day price change is R. For a full explanation of how we derived this formula, see Appendix B.

EQUATION 4:

V = 100 x sqrt( (252 / 2) x ( [Ln((100 + A) / 100)]^2 + [Ln((100 + R) / (100 + A))]^2 ) )

Now that we have formulas for both LETF return and V in terms of A, we are able to plot the relationship between realized volatility and LETF return for a given index return. Ideally, we would solve our volatility equation for A and then substitute this solution into the return equation, giving us LETF return in terms of V. However, the complexity of the volatility equation makes solving for A impractical. Instead, we use A to find coordinates that plot LETF return against volatility.
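One way to carry out this coordinate-finding step is sketched below in Python (the function and variable names, and the NumPy dependency, are our own choices; the formulas are those of Equations 2 and 4):

import numpy as np

def letf_return(A, index_ret, L, n, e):
    """Equation 2: n-day LETF return when the index gains A on day one and
    closes each 2-day cycle at (100 + R), less the period expense E."""
    R = 100 * ((1 + index_ret) ** (2 / n) - 1)   # 2-day index price change
    E = e * n / 252                              # expense ratio for the period
    up = 1 + L * (A / 100)
    down = 1 + L * ((R - A) / (100 + A))
    return (up * down) ** (n / 2) - 1 - E

def vol_from_A(A, index_ret, n):
    """Equation 4: realized volatility implied by the same 2-day pattern."""
    R = 100 * ((1 + index_ret) ** (2 / n) - 1)
    r1 = np.log((100 + A) / 100)
    r2 = np.log((100 + R) / (100 + A))
    return 100 * np.sqrt((252 / 2) * (r1 ** 2 + r2 ** 2))

# Sweep A to generate (volatility, return) coordinates, as in Figure 2:
# 2X bull and -2X bear on a 10% index year with 1% annual LETF expenses.
for A in np.arange(0.2, 5.01, 0.2):
    V = vol_from_A(A, 0.10, 252)
    bull = letf_return(A, 0.10, 2, 252, 0.01)
    bear = letf_return(A, 0.10, -2, 252, 0.01)
    print(f"vol {V:5.1f}   2X {bull:8.2%}   -2X {bear:8.2%}")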

What happens to LETF return for a given index return as volatility increases?

FIGURE 2 - 2X LETF returns versus increasing volatility with index return at 10%.
The graph above plots BULL, BEAR, and ABC return over a 252-day holding
period when index return is 10% and LETF leverage is two. Both BULL and
BEAR have annual expenses of 1%. ABC, the ETF for the underlying index,
has annual expenses of 0.5%. If we look at the red BULL Return line, we can
see that when volatility is in the single digits, BULL return is roughly 20%,
twice the underlying index return. As volatility climbs into the 10 to 20 range,
BULL return begins to noticeably decay, and by the time volatility is in the
30s, BULL return is actually lower than ABC return. Thus, if we expect annual
index return to be 10%, it would be wrong to employ a 2X LETF to execute
this strategy unless we also predict volatility to be below 30.
FIGURE 3 - 2X LETF returns versus increasing volatility with index return at 25%.

Figure 3 graphs 2X BULL, BEAR, and ABC return when index return is 25%, using the same time and expense parameters as before. Notice that in this instance, in our sweet spot of single-digit volatility, BULL Return significantly exceeds two times the return of the index. The rebalancing effect that causes LETFs to underperform during volatile periods can also cause LETFs to outperform during periods of low volatility. As the index increases in value, its amplified daily growth rates compound, causing correctly-called LETFs (bull LETFs in bull markets, and vice versa) to achieve returns in excess of their 2X or 3X objective.
Compounding has the opposite effect on incorrectly-called LETFs (bear LETFs
in bull markets and vice versa), causing them to decline by less than their
multiples would suggest. In this case, we expect BEAR to decline by double
the index's return, or 50%. Instead, during periods of low volatility, BEAR
only declines by roughly 40%. Also notice that the volatility level where BULL
Return and ABC Return overlap is much further to the right than in Figure
2, when index return is 10%. Volatility causes decay in LETFs no matter the
return of the index; however, higher index return helps offset the effect of
volatility drag.
We will now see that increased leverage exaggerates the effects of volatility on LETFs.

FIGURE 4 - 3X LETF return versus increasing volatility when index return is 25%.

Above we see 3X BULL, 3X BEAR, and ABC returns when index return is 25%. Notice how higher leverage causes greater compounding of returns during low-volatility periods, but also leads to increased volatility drag when volatility is high. BULL return is above 80% when volatility is under 15, but decays to below zero before volatility hits 50. BULL Return also intersects with ABC Return at a lower volatility level than in Figure 3.

FIGURE 5 - 3X LETF return versus increasing volatility when index return is -20%.

Finally, we see triple-levered BULL, BEAR, and ABC returns when index return is -20% (the negative equivalent of a +25% index return, as demonstrated by Equation 1). Comparing this graph and the previous, it is clear that long and inverse LETFs are not simply mirror images of one another. Under low-volatility conditions, BEAR return for an index decline of 20% is the same as BULL return for an index gain of +25% (Figures 4 and 5). However, BEAR return falls more sharply during periods of high volatility, and in this case BEAR has negative returns before volatility reaches 35. Furthermore, compare BULL in this graph to BEAR in the previous; both incorrectly-called LETFs face the same losses at low volatility levels, but BULL fares much better under higher-volatility conditions, eventually outperforming BEAR when volatility is extremely high (Figure 5).

From our study of these curves, we can draw several conclusions regarding the performance of leveraged ETFs. First, there is a negative correlation between realized volatility and LETF return for a given index return. Thus, when volatility is high enough, the return of correctly-called LETFs can drop below index return, and can even fall below zero despite the LETF being directionally correct. Second, when volatility is low, these correct LETFs provide amplified positive returns that can even exceed the intended multiple of index return. Greater index return increases this effect and augments correct LETF return at every volatility level. Third, incorrect LETFs at low volatility levels decline less than their multiples would suggest. Fourth, higher leverage exaggerates these characteristics; that is, 3X and -3X LETFs are more sensitive to volatility than are 2X and -2X LETFs. Finally, inverse LETFs are more adversely affected by volatility than are long LETFs. With these principles of LETF return in mind, an investor can more intelligently decide which level of leverage is appropriate, and can have a clearer understanding of the factors that influence his potential investment performance.

HOW TO CONSTRUCT THIS MODEL:
To model this relationship in Microsoft Excel, create a column of A values from 0 to 5. The more values you use, the smoother the curves will be; increments of .02 should be sufficient. Next to the A column, create a column to hold volatility values for the given A value, using Equation 4. Equation 5, below, shows one way of writing this as an Excel formula.

EQUATION 5:

Where:
Column B contains the corresponding realized volatility values and B6 is the cell containing this specific formula.
Column A contains the A values, and A6 contains the A value for which this formula is calculating a realized volatility level.
A fixed cell containing the index's annual return is also referenced.

The formula for BULL return should be written in column C, and is based off Equation 2. Equation 6, below, shows one way of writing this as an Excel formula.

EQUATION 6:

Where:
C is the column containing BULL return values and C6 is the cell containing this specific formula.
A fixed cell containing the leverage factor for the LETF and a fixed cell containing the annual expenses of the LETF are also referenced.

In column D, the formula for BEAR return should be the same, but should reference column D instead of column C. The cell containing the leverage factor, in this case, should be negative.
You can also create a column for the non-levered ABC return, which always equals index return minus index expenses. Notice that this formula does not refer to the independent variable A, since ABC return is unaffected by volatility. Table II, below, shows the cells that contain the various formulas above and the referenced user-defined parameters.

TABLE II - Row 6 contains the formulas stated above.

Graphing BULL, BEAR, and ABC return (columns C, D, and E) in terms of realized volatility (column B) will produce graphs like Figures 2-5. You can use these graphs to find the theoretical return for LETFs at different index return and volatility levels. You can also use the LINEST function in Excel to extract the coefficients of a trendline, which will allow you to create a formula for LETF return in terms of index volatility. A fourth-degree polynomial trendline usually has an R-square in excess of 0.999, and can be used to easily and precisely find LETF return for a corresponding volatility level.
Remember that these return levels are theoretical, and tracking error can cause your actual return to differ from the model. Nonetheless, simply studying the relationship between volatility and LETF return for various index return and leverage parameters is extremely valuable for the LETF trader, and allows for a better understanding of the unique behavior of financial products that are re-leveraged on a daily basis.
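For readers working outside Excel, the same trendline step can be sketched in Python, with np.polyfit playing the role of LINEST (a minimal sketch using our own parameter choices and the formulas of Equations 2 and 4):

import numpy as np

def curve_points(index_ret, L, n, e, A_grid):
    """(volatility, LETF return) coordinates from Equations 2 and 4."""
    R = 100 * ((1 + index_ret) ** (2 / n) - 1)
    E = e * n / 252
    vols, rets = [], []
    for A in A_grid:
        v = 100 * np.sqrt(126 * (np.log((100 + A) / 100) ** 2
                                 + np.log((100 + R) / (100 + A)) ** 2))
        ret = ((1 + L * A / 100) * (1 + L * (R - A) / (100 + A))) ** (n / 2) - 1 - E
        vols.append(v)
        rets.append(ret)
    return np.array(vols), np.array(rets)

# Fit a fourth-degree polynomial, mirroring the LINEST suggestion above.
V, Rp = curve_points(0.10, 2, 252, 0.01, np.arange(0.2, 5.0, 0.02))
coeffs = np.polyfit(V, Rp, 4)           # highest power first
estimate = np.polyval(coeffs, 30.0)     # theoretical 2X return at volatility 30
print(coeffs, round(estimate, 4))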

PART II: THE VOLOSCILLATOR

HOW CAN WE APPLY OUR KNOWLEDGE OF LETFs TO EVERYDAY TRADING?
Since volatility and return are the main variables that determine LETF returns, we have created a visual tool incorporating these parameters to aid the LETF investor. Forecasting future underlying index return
and volatility is the essence of successfully using LETFs; as we have seen in
Part I, expressing an opinion only about future index performance is not
sufficient for successful LETF investment.
With our knowledge of volatility's effect on LETF return, we present a real-time oscillator that assesses whether or not current market conditions favor LETF investment. We have established that volatility has a negative effect on LETF return, and that for a given index return, each potential volatility level has a corresponding theoretical LETF return. Therefore, with a given index return forecast, we can find a corresponding volatility level that will allow us to achieve a desired minimum LETF return. For a given investment strategy, our oscillator will subtract the real-life index's realized volatility from this break-even volatility level. By doing this, the Voloscillator generates a positive value when prevailing market conditions suggest that the investor can achieve his investment goals using LETFs, given his predicted index return. An investor can construct this instrument on a spreadsheet by simply downloading daily data for 30-day realized volatility and subtracting these values from the break-even volatility level that is found through Equations 5 and 6.
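A minimal sketch of that construction in Python (the 30-day rolling window follows the realized volatility formula of Equation 3; closes stands for any series of daily index closing prices the reader supplies, and break_even_vol is whatever level the Part I analysis produces):

import numpy as np

def realized_vol_series(closes, window=30):
    """Rolling realized volatility: 100 * sqrt(252/n * sum of squared log returns)."""
    logs = np.diff(np.log(np.asarray(closes, dtype=float)))
    vols = [100 * np.sqrt((252 / window) * np.sum(logs[i - window:i] ** 2))
            for i in range(window, len(logs) + 1)]
    return np.array(vols)

def voloscillator(closes, break_even_vol, window=30):
    """Break-even volatility minus rolling realized volatility; positive
    values suggest conditions favor the planned LETF trade."""
    return break_even_vol - realized_vol_series(closes, window)

# Usage sketch: 'spx_closes' would be a list of daily S&P 500 closes supplied
# by the reader; 34 is the break-even level derived in the example below.
# osc = voloscillator(spx_closes, break_even_vol=34.0)
# favorable = osc[-1] > 0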
To introduce this tool, we will analyze an historical period that featured high growth coupled with high volatility; in this instance, the months following a significant market decline. From July 22nd to August 8th, 2011, the S&P 500 dropped by more than 16%. Volatility rose considerably, breaking 20 for the first time that year (Figure 6).

FIGURE 6 - S&P 500 closing price and 30-day realized volatility for March - December 2011.
Above, we have the price and 30-day volatility of the S&P 500 in 2011. As
we can see, the relative stability of March through June was interrupted by a
major downturn beginning near the end of July. Market selloffs like this can
provide opportunities to invest in LETFs, since markets often rebound, giving
high returns over a short-term period. However, bear markets are often
volatile, and if volatility stays high, this can pose risks to the return of LETFs.
Consider this scenario on August 15, 2011: The S&P 500 appears to be recovering from the selloff, and is up 85 points over the past week (Figure 6). Suppose we believe that the index will continue to go up by at least 5% over the next 30 trading days, and we want to profit from this growth using a 3X-levered ETF. Furthermore, to compensate for the added risk of leverage, we want our theoretical return to be at least 2.2 times the index's expected return. We will use our equations from the last section to find the relationship between volatility and LETF return for an r value of 5% and an n value of 30 (Equations 2, 4, 5, and 6). We will set our expense parameter, e, at a 1% annual fee.

FIGURE 7 - Theoretical 3X bull returns versus volatility, as well as our break-even return line of 11%.


The graph above illustrates the estimated return for a 3X long LETF, given
a 30-day index return of 5%. Notice that if we simply wanted our bull to
outperform the index for this given index return, we would only need volatility
to be less than 54. However, since we desire additional compensation for our
added risk, we will instead find where the return curve intersects with our
break-even line. They cross when volatility is 34; we will use this volatility
level to build our oscillator.
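The crossing point can also be located numerically rather than read off the chart. A minimal Python sketch under the same assumptions (3X leverage, 5% index return over n = 30 days, 1% annual expenses, and an 11% target return, per Equations 2 and 4):

import numpy as np

def theoretical_return(A, index_ret=0.05, L=3, n=30, e=0.01):
    R = 100 * ((1 + index_ret) ** (2 / n) - 1)      # Equation 2
    return ((1 + L * A / 100) * (1 + L * (R - A) / (100 + A))) ** (n / 2) - 1 - e * n / 252

def implied_vol(A, index_ret=0.05, n=30):
    R = 100 * ((1 + index_ret) ** (2 / n) - 1)      # Equation 4
    return 100 * np.sqrt(126 * (np.log((100 + A) / 100) ** 2
                                + np.log((100 + R) / (100 + A)) ** 2))

# Walk A upward until the theoretical 3X return drops below the 11% target;
# the volatility at that point is the break-even level (about 34 here).
target = 0.11
A = 0.01
while theoretical_return(A) > target:
    A += 0.01
print(round(implied_vol(A), 1))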

INTRODUCING THE VOLOSCILLATOR

FIGURE 8 - Our oscillator value from June 1 to August 15, calculated by subtracting 30-day realized volatility from our break-even volatility level of 34.

In Figure 8, we see how our oscillator would have behaved during our scenario in August 2011. The Voloscillator takes the break-even volatility level we previously calculated and subtracts the 30-day realized volatility of the index. Therefore, a value below zero indicates that the prevailing volatility level overwhelms our predicted index return, and it is unlikely that we will achieve our required 2.2X multiple of index returns. As we can see, August's spike in volatility caused the Voloscillator value to plummet (Figures 6 and 8). Weighing our options on August 15th, it seems that the market is too volatile to achieve our investment goals by going long a triple-levered ETF. Instead, we can wait until we anticipate a decline in volatility.

FIGURE 9 - Our oscillator for June - September 2011, along with the S&P 500's price at close.

Figure 9 reveals that volatility was indeed very high for several weeks, causing our oscillator value to stay below zero. If we were not to adjust our previous index return forecast, we could wait until volatility seems to be on a downward trend. By examining the oscillator on a daily basis, we might have determined that by about September 21st, volatility had declined to such a level that use of LETFs to achieve our bullish return objective could then be appropriate. If we suspected an index rally, we could go long a 3X bull ETF after we see our oscillator cross the zero line. In our example, we buy shares of UPRO, a 3X S&P 500 ETF, just before the market closes on September 22nd.
From September 22nd to November 3rd, the S&P 500 went from 1129.56 to 1261.65, an increase of 11.7% over the period of 30 trading days. 30-day realized volatility for the index on November 3rd was 29.8. UPRO, our 3X LETF, had a 30-day gain of 35.4%, just over three times the index's return (our model from Part I predicts a 35.0% LETF return for these index return and volatility parameters).

FIGURE 10 - S&P 500 and UPRO closing prices for our example's holding period.

FIGURE 11 - S&P 500 and UPRO closing prices for August 15th through November 3rd.

Figure 10 plots the prices of UPRO and the S&P 500 over our holding period. For comparison, Figure 11 shows their performance starting on August 15th, when we may have invested if we had not consulted the Voloscillator. UPRO still realizes substantial gains, but the high volatility of August through mid-September causes noticeable decay in the value of the LETF during that period. Table III compares our hypothetical investment's return for the two periods.

Period A: With Voloscillator

DATE      S&P 500 PRICE   UPRO PRICE
9/22/11   1129.56         22.79
11/3/11   1261.15         30.85
Return    11.6%           35.4%

Average Voloscillator Value: 4.427
Achieved LETF Return Multiple: 3.04

Period B: Without Voloscillator

DATE      S&P 500 PRICE   UPRO PRICE
8/15/11   1204.49         28.24
11/3/11   1261.15         30.85
Return    4.7%            9.2%

Average Voloscillator Value: -0.696
Achieved LETF Return Multiple: 1.96

TABLE III - Price and return for S&P 500 and UPRO for the two investment periods. The lower part gives the average Voloscillator value for the two periods, as well as the multiple of index returns that UPRO achieved.
For Period B, the period starting on August 15th, the average Voloscillator value was below zero, and UPRO return was 1.96 times index return, lower than our selected risk margin of 2.2. Period A, on the other hand, had an average oscillator value well above zero, and UPRO's return multiple actually exceeded the 3X daily leverage factor. By applying the Voloscillator to this strategy, we avoided a highly volatile period that would have been detrimental to our investment return, and our investment ended up outperforming our objective.
Note that this example is an instance when the oscillator was effective not as a return prediction tool, but as a mechanism to discern when LETF investment was appropriate. While Period A happened to have a higher S&P 500 return than did Period B, this was a coincidence; our tool does not help the user find periods of high return, but rather periods where volatility is likely to be low relative to return. A positive oscillator value implies that if the index achieves the returns you predicted, an LETF is likely to meet your return objectives. Conversely, a negative oscillator value does not mean that your investment analysis will be incorrect, but that prevailing volatility levels make LETF investment unlikely to be effective.
Furthermore, while our example simply used the Voloscillator to avoid an
unfavorable investment period, this tool can also be used to track volatility
trends using technical analysis methods such as trendlines, support and
resistance levels, and Relative Strength Index. Just like price levels, volatility
can trend, and the Voloscillator allows the user to analyze an inverted
volatility chart that is centered around the trader's predetermined volatility
threshold.
In conclusion, we believe that the Voloscillator and our graphical
representation of LETF returns versus realized volatility are novel tools for
investors considering using leveraged ETFs. We hope that our investigation
will help investors better understand the effects of volatility on their
investment returns, and will improve their ability to use LETFs in a more
effective manner. The relationship between volatility and LETF return is
complex, but with a proper understanding of volatility's long-term effects,
we believe that investors can use LETFs to generate satisfactory returns for
their portfolio.


APPENDIX

A. Multi-day return can be expressed as the product of daily percent changes. If we break down our 10-day example to a 2-day scenario, there are only two days we have to consider: the day that the index price moves up, and the day the index price moves back down. Thus, if we know an LETF's daily percent changes for these two days, we can calculate the return of that product. The equation below shows the relationship between 2-day return and daily percent change.

1 + r = (1 + u) x (1 + d)

This can be simplified to the following:

r = u + d + (u x d)

Where:
r = 2-day return
u = Percent change on the day the price goes up
d = Percent change on the day the price goes down

Since LETF daily percent changes are simply multiples of index daily percent changes, we can write a return equation for an LETF in terms of the index daily percent changes based on the relationship we just defined.

LETF 2-day Return = (1 + L x U) x (1 + L x D) - 1

Where:
U = Percent change of the underlying index on the day the price goes up
D = Percent change of the underlying index on the day the price goes down
L = The leverage factor of the LETF

Now that we have an equation for the return of the LETF in our 2-day scenario, we need to find a way to relate our return equation to a realized volatility equation. We will do this by defining the variables U and D in terms of an independent variable, A.

U = A / 100
D = -A / (100 + A)

Where:
A = The index's price gain on day one

Substituting our new values for U and D, we can get LETF return in terms of our new variable, A.

LETF 2-day Return = (1 + L x (A / 100)) x (1 - L x (A / (100 + A))) - 1

We now have LETF return for when the index goes up by A on day 1 and down by A on day 2. To make the model applicable to scenarios of more than two days, we will raise the 2-day return to the power of n/2.

LETF Return = [(1 + L x (A / 100)) x (1 - L x (A / (100 + A)))]^(n/2) - 1

Where:
A = An independent variable representing the index's price gain on day one
L = The leverage factor of the LETF
n = The number of trading days in the reference period

If we want to adjust the equation to incorporate the possibility of positive or negative index return, we need to change the return equation so that instead of exactly offsetting the first day's gains, it will leave the index price below or above 100. Let us say that day two's closing price will now be (100 + R), where R is determined by the index return that we choose. The return of the index over the reference period is then:

1 + r = ((100 + R) / 100)^(n/2)

Where:
R = Index price change over the hypothetical 2-day period
r = Index return over the entire reference period
n = Number of trading days in the reference period

We rearrange this formula to find R in terms of r:

R = 100 x [(1 + r)^(2/n) - 1]

With this in mind, we update our return formula by incorporating R. If we simplify this, we get:

LETF Return = [(1 + L x (A / 100)) x (1 + L x ((R - A) / (100 + A)))]^(n/2) - 1

Where:
A = An independent variable representing the index's price gain on day one
L = The leverage factor of the LETF
n = The number of trading days in the reference period
R = Index price change over the 2-day period, as calculated by the formula:
R = 100 x [(1 + r)^(2/n) - 1]
Where:
r = Index return over the entire reference period

B. We start with the realized volatility formula:

V = 100 x sqrt( (252 / n) x Sum over t = 1 to n of [Ln(P_t / P_t-1)]^2 )

Where:
V = Realized volatility
252 = A constant representing the approximate number of trading days in a year
t = A counter representing each trading day
n = Number of trading days in the reference period
Ln = Natural logarithm
P_t = Index price at close on day t
P_t-1 = Index price at close on the day immediately preceding day t

Since realized volatility is derived from the sum of squared daily returns, for our 2-day scenario we can eliminate the sigma from the equation and simply add the squared daily returns for the two days.

V = 100 x sqrt( (252 / 2) x ( [Ln(P_1 / P_0)]^2 + [Ln(P_2 / P_1)]^2 ) )

We can use the variable A to express the index price at the end of day 0, 1, and 2. Substituting in A and setting n to 2, we get:

V = 100 x sqrt( (252 / 2) x ( [Ln((100 + A) / 100)]^2 + [Ln(100 / (100 + A))]^2 ) )

To modify the equation to allow the possibility of positive or negative index returns, we then replace the numerator representing the day-2 closing price with 100 + R, since 100 + R represents the price at the end of day 2.

V = 100 x sqrt( (252 / 2) x ( [Ln((100 + A) / 100)]^2 + [Ln((100 + R) / (100 + A))]^2 ) )

Note that we take the realized volatility of the index, not the leveraged ETF, so the L variable is not used.
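As a numeric check of this formula, the ABC series in Table I alternates +5% and -4.76% days; a few lines of Python confirm the implied level of realized volatility:

import math

# Ten daily log returns of ABC from Table I: ln(1.05) and ln(100/105) alternate.
logs = [math.log(1.05), math.log(100 / 105)] * 5
V = 100 * math.sqrt((252 / 10) * sum(r ** 2 for r in logs))
print(round(V, 1))   # roughly 77.5, the extreme volatility behind Table I's decay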

REFERENCES
McCall, Matthew, 2011, "Leveraged ETFs: Are They Right For You?", Investopedia, Web.
The VolX Group Corporation, 2013, "RealVol Daily Formula" (Realized Volatility Formulas), Web.

