
Risk Providers Respond: Six Key Questions Answered by Barra, APT, Northfield, R-Squared, and Axioma

Introduction
One of the most frequent questions we get from our users is: how do the risk model providers differ? In this series, we ask each risk provider that has integrated with FactSet a standard set of questions. By comparing their answers to these questions about their approach to risk, the impact of the credit crisis, and their opinions on model types, we hope that you gain insight into the differences among the providers: APT, Axioma, MSCI Barra, Northfield, and R-Squared.

Part 1a: If we've gone global, how should models adapt?


Our first question, taken in two parts, addresses globalization as it impacts modeling. The first part: given the increasing multinational exposure of companies in today's markets, is it beneficial for portfolio managers to consider the global markets in risk analysis? Do regional and country models still play a role in risk analysis?

Laurence Wormald, Head of Research, SunGard APT


We think that there are a lot of different kinds of requirements among this specializing world of investment. And for certain new kinds of specialists, especially those in, say, Frontier Markets or the Middle East and other really different kinds of markets, it is important to use a dedicated risk model. However, there are a lot of other managers out there who are trying to manage money essentially looking at the global market. And there's no doubt that risks are global in nature. And if you want to truly understand where risk is coming from, I think you do need a model that takes into account global factors. But nevertheless, we provide a range of dedicated models for Frontier Markets, Greater China, and the Middle East and North African regions for those specialists, while we still put a lot of effort into making sure our global risk models really do reflect all of the global risk factors, which themselves affect each region and specialist area, but in slightly different ways.

Sebastián Ceria, Chief Executive Officer, Axioma


Certainly, where we are right now is an environment where volatility is rapidly changing. And that change is caused by a variety of reasons, some local and some global. Obviously, the world is much more interconnected than it used to be. So in an environment of rapidly changing volatility, the first thing you have to worry about as a risk provider is whether your risk models are adaptive to the sudden changes in the volatility regime that you're living in. And we believe that for this particular reason, it is indeed very important to use daily data when building a risk forecast.

So the first point that I'd like to stress is that in a world which is, as you said, interconnected, and in markets which are going through much quicker moves, this idea of daily data is certainly very important. Axioma very much believes in it. It's obviously a lot harder to use daily data because it's harder to clean, and you particularly have to deal with certain issues when you're dealing with global models, which we'll get into in a second. And Axioma is there. Axioma is the only provider that essentially re-estimates all models on a daily basis.
The second part of your question is also very important, and it is obviously connected to the first one: does the global market matter more now than it used to, and are events in one market affecting what's going on in other regional or local markets? And the answer is, of course, yes. But the question is not whether this is the reality. The question is how you deal with that reality. Essentially, what happens is that markets do not trade on a 24-hour clock; markets open and close, and obviously the open and close of those markets affects other markets in other time zones. So when Axioma thinks about global risk, Axioma is thinking about, again, using daily data, but taking into consideration this interaction effect that is happening across markets. So Axioma has included a proprietary trading adjustment into the model, which we call the Returns Timing Adjustment, which takes into consideration the effect that some markets have on other markets as the trading day progresses. So this is, again, a very important characteristic that has to be taken into consideration when building global models. Finally, the last point of your question is whether regional models matter. And our belief is that, indeed, they do, because although there is much more interaction across markets around the world, there is also a separation in the behavior of these markets. Probably the most notable example is how emerging markets are behaving differently than some of the developed markets these days. And so we believe that regional models are essential, because they actually allow you to capture some of the effects that are occurring in those regional markets.
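To make the synchronization problem concrete, here is a minimal Python sketch of one textbook way to correct for asynchronous market closes: regress a market's daily return on both the contemporaneous and the one-day-lagged return of a reference market, then sum the two betas. This only illustrates the issue that a returns-timing adjustment addresses; it is not Axioma's proprietary methodology, and the function name and inputs are hypothetical.

```python
import numpy as np

def synchronized_beta(asia_returns, us_returns):
    """Because Asian markets close before the US opens, part of today's US move
    only shows up in tomorrow's Asian return. Regressing on the contemporaneous
    AND the one-day-lagged US return, then summing the two betas, gives a less
    biased estimate of co-movement from asynchronous daily data."""
    asia = np.asarray(asia_returns, dtype=float)
    us = np.asarray(us_returns, dtype=float)
    us_lag = np.roll(us, 1)                                    # yesterday's US return
    X = np.column_stack([np.ones_like(us), us, us_lag])[1:]    # drop wrapped first row
    y = asia[1:]
    intercept, beta_same_day, beta_lagged = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta_same_day + beta_lagged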

Oleg Ruban, Senior Associate of Applied Research, and Patrick d'Orey, Vice President and Head of Europe Equity Analytics, MSCI Barra
Oleg Ruban: Yes. We believe that, in fact, both types of models play a role because they illustrate essentially different facets of risk and, as such, they are complementary. So, for example, if you think about a global model: in building such a model, it is important to understand the main drivers of correlations between different markets. These drivers of correlations will be appropriate for analyzing risk in a broadly diversified portfolio, for instance a portfolio that is invested in many countries and many different submarkets. However, if you have a portfolio that is concentrated in a single country or a region, then global factors may not be adequate to explain the risk and return of that portfolio, and region- or country-specific factors are necessary in that case to forecast, and then attribute, risk accurately. So there is a lot of value in the additional granularity of regional and single-country models.

And it's also worth noting that there are different ways in which you can build a truly global risk model. One way, for example, is to aggregate single-country models through a set of global factors. In this case, the model can provide the full accuracy and detail of local models within markets, but also give consistency of risk forecasts at the global level.
Patrick d'Orey: Just one point from my end; I think the key thing here is that there is never a one-measure-fits-all. So while, certainly, movements in global markets play a large role in driving returns, different investment strategies have different requirements, and as such, they will need suitable models that fit that purpose.

Dan diBartolomeo, President, Northfield Information Services


Well, I think there are really a couple of things you have to think about. The first is an understanding of where a security is really traded. Even in emerging market countries, frequently, a large number of the investors are from outside the country. So you really have to think about whether you are trading in local shares or in ADRs or GDRs, and which form of the security is the dominant one in terms of the investors. The second thing to think about is that, particularly in emerging markets, trading matters: bid/ask spreads are large, and therefore, when assessing risk, liquidity, or the lack of it, is a very important component of risk and needs to be taken into account in the way observed returns are analyzed. The third thing I would do is think about how the risk model you're using makes the distinction between developed markets and emerging markets in terms of how the model and the security factor exposures are estimated. Typically, when you're dealing with a global model that is looking at the world in a capitalization-weighted way, world capitalization is dominated by large multinational companies, and therefore it is typical to estimate the factors using a capitalization-weighted scheme, or a square-root-of-capitalization-weighted scheme. On the other hand, if your risk interest is primarily in emerging markets, where many more of the influences on companies, and the stronger influences, are local, it's often best to have a model that is estimated on an equal-weighted basis. The same argument applies to small-cap: there are many, many more small-cap firms in any market than very large firms, yet the large firms dominate from a capitalization perspective. So it really depends on what kind of portfolio you're holding which model estimation process is apt to be best for you.
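The weighting choice described above boils down to which assets get the biggest say in the cross-sectional regression that estimates factor returns. The sketch below shows the mechanics with a generic weighted least-squares step; the weighting schemes and data are illustrative assumptions, not any vendor's actual estimation procedure.

```python
import numpy as np

def cross_sectional_factor_returns(exposures, asset_returns, market_caps, scheme="sqrt_cap"):
    """Estimate factor returns from one cross-section of asset returns.
    The regression weights decide which assets dominate the estimate:
    'sqrt_cap' tilts toward large multinationals (typical for cap-weighted
    global models), 'equal' gives every stock the same vote (often preferred
    for emerging-market or small-cap universes)."""
    if scheme == "sqrt_cap":
        w = np.sqrt(np.asarray(market_caps, dtype=float))
    else:
        w = np.ones(len(market_caps))
    W = np.diag(w)
    X = np.asarray(exposures, dtype=float)   # n_assets x n_factors
    y = np.asarray(asset_returns, dtype=float)
    # Weighted least squares: (X' W X)^-1 X' W y
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```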

Jason MacQueen, Founder, R-Squared Risk Management


I think my answer to that is that it depends on the way the manager goes about constructing their portfolio. If, for example, you're a British fund manager managing U.K. securities, everyone knows that most of the constituents of the FTSE 100 Index are multinationals and derive a lot of their earnings from markets outside the U.K. However, it would be quite legitimate to analyze your U.K. equity portfolio using just a U.K. equity risk model. The currency influences will still be there, but they won't be brought out explicitly if you're using a U.K.-only equity risk model. That might be appropriate for someone who is only thinking about U.K. equities. I guess my answer to this question also answers the slightly more general question, which has to do with whether or not there is one correct answer, so to speak, to how you should go about modeling risk.


Our very strong view is that there is not one correct answer. There are any number of equally good, different ways of decomposing risk, and it really depends on how the manager goes about constructing the portfolio, and what their investment process is about, in order to use a model that's appropriate.
So to give a specific answer to the question: you don't need to consider global markets, even if you're investing in a portfolio that you know contains multinational stocks that happen to be based in one particular market. If you do consider global factors, you will, of course, get a slightly different breakdown of the overall risk. Regional risk models clearly do still play a role in risk analysis. If you're running, for example, a European portfolio or an emerging markets portfolio, it probably would be useful to use a European risk model or an emerging markets risk model. But equally well, you could use a global risk model, provided it recognized the existence of those different regions.

Part 1b: How can investors analyze risk in the emerging markets?
Next, we'll hear their responses to part two of our question. The emerging markets are a growing and potentially lucrative part of the market. How can investors best analyze risk in this relatively new area of interest?

Laurence Wormald, Head of Research, SunGard APT


We think that you do need dedicated models, and I've mentioned some of the ones we've created just recently, like Frontier Markets. But if you ask how investors best analyze risk exposure, it's all about flexibility of analysis, I think. And so we continue to encourage our clients not to think in terms of just pre-specified sets of factors or just conventional risk measures, but rather to think about the flexibility that you can achieve in trying to attribute and look at the contributions to risk in different regions, especially these emerging market regions. So with the technology we call RiskScan, which is available in FactSet, you can take user-defined factors and combine them yourselves to create the most flexible view of the risk exposure for your particular region. And we know that there will be factors like liquidity which are exceptionally important in some regions, and less so in others. So that flexibility in attribution of risk, which we have in the APT system, allowing users to choose their own factors which are overlaid on the model factors, is something that we think does make a difference for the specialist areas.

Sebastián Ceria, Chief Executive Officer, Axioma


So the first point is that you need a dedicated emerging markets model. You cannot analyze what's going on in emerging markets with a global model, because a global model has within it many developed countries, and the behavior of those markets these days is very different from the behavior of these emerging markets. So the reality of these emerging markets is not really captured by, or is not really related to, how the developed markets are doing. So you need an emerging markets model. Axioma has a dedicated emerging markets model, both fundamental and statistical, that allows us to attune the model to the behavior of these emerging markets by having an estimation universe that is focused exactly on these markets.

Olivier d'Assier, Managing Director for the EU and Asia Markets, Axioma
To put it in perspective, emerging markets are about 13% of the MSCI All Country World Index. So, if you're using a global model to analyze that 13%, your model includes quite a lot of noise that may not be relevant for that particular asset class. With a dedicated emerging markets model, you can focus on the issues that are pertinent to this asset class and better manage your risk.

Oleg Ruban, Senior Associate of Applied Research, and Patrick d'Orey, Vice President and Head of Europe Equity Analytics, MSCI Barra
Oleg Ruban: So I guess we should start from the very beginning and highlight what is, in my mind, the most important thing in building a risk model for emerging markets, and the most important thing here is sourcing good, clean data for this universe, and actually also in small-cap. This is an important point because data in emerging markets tends to be quite noisy for a number of reasons. One reason is that data providers may not have adequate data. The other is that the underlying process is indeed jumpy, and the returns are quite often illiquid. This is why we, as a provider, place a huge amount of emphasis on data collection and cleaning. And I think we have quite a few decades of expertise in obtaining reliable data in emerging markets, as well as in dealing with things like illiquid returns and jumps in currency. Also, in the single-country models that are built for emerging market countries, we always take care to choose the set of factors that is, if you like, the most appropriate for forecasting risk in that specific universe and in that specific market. So as long as you have good data, and as long as you choose the most appropriate set of factors to explain the returns, then those factors are also going to be the most important for forecasting the risk, and I think you have a good basis for building a model that reflects the situation in emerging markets.

Patrick d'Orey: I'd add one point, and reinforce the point that Oleg made on data. It may seem almost a boring topic, but it is hugely important, and that's why we take a lot of care and invest a lot of resources into sourcing, cleaning, and analyzing our data. The effort that we put into all of this data cleaning really pays off, because you end up with models that really can explain returns, and can really analyze well emerging market strategies, or any such type of investment.

Dan diBartolomeo, President, Northfield Information Services


Well, I think there are really a couple of things you have to think about. The first is an understanding of where a security's really traded. Even in emerging market countries, frequently, a large number of the investors are from outside the country. So you really have to think about whether you are trading in local shares or in ADRs or GDRs, and which form of the security is the dominant one in terms of the investors.


The second thing to think about is that, particularly in emerging markets, trading matters: bid/ask spreads are large, and therefore, when assessing risk, liquidity, or the lack of it, is a very important component of risk and needs to be taken into account in the way observed returns are analyzed. The third thing I would do is think about how the risk model you're using makes the distinction between developed markets and emerging markets in terms of how the model and the security factor exposures are estimated. Typically, when you're dealing with a global model that is looking at the world in a capitalization-weighted way, world capitalization is dominated by large multinational companies, and therefore it is typical to estimate the factors using a capitalization-weighted scheme or a square-root-of-capitalization-weighted scheme. On the other hand, if your risk interest is primarily in emerging markets, where many more of the influences on companies, and the stronger influences, are local, it's often best to have a model that is estimated on an equal-weighted basis. So it really depends on what kind of portfolio you're holding which model estimation process is apt to be best for you.

Jason MacQueen, Founder, R-Squared


Well, again, it's similar to the first part of the question. The answer is that you could do it in a number of different ways. You could use just an emerging markets model, or you could use a global model that had emerging markets recognized within it. Either one would give you an analysis of the risk of the portfolio. An emerging markets risk model will tell you that all of the risk in the portfolio is captured purely by the factors that were used to build that model, apart, of course, from stock-specific risk, whereas a global model might also suggest that you have exposures to factors outside the emerging markets themselves, although I would expect most of the risk to be dominated by emerging market-related factors. And again, the answer really is that it depends on what the manager is most comfortable with in terms of thinking about the risk structure of the portfolio.


Part 2: What has the market crisis done to risk modeling?


Have you ever wondered what distinguishes the major providers of risk models? How have the recent events in the market impacted the approach to risk modeling among the providers: Northfield, MSCI Barra, Axioma, R-Squared, and APT?

Jason MacQueen, Founder, R-Squared


We've had to make a number of adjustments to our methodology for building risk models in the light of the events of the last two years, I would say. As a matter of fact, I gave a presentation on the changes we've made at an evening seminar at FactSet a couple of weeks ago, and I believe that is available for people who might be interested in the detail. But just to summarize it: from a risk model builder's point of view, the main impact of the credit crunch and the subsequent market turbulence was, first of all, that factor volatility, or market volatility generally, I should say, went up quite dramatically. But even more important from a modeler's point of view, correlations went up to an extremely high level. We had countries and industries with correlations of over 0.9 in some cases. Now, if you are trying to build a risk model in which you start with factor returns and then try to estimate the sensitivities of different stocks to those factors, you frequently end up in a situation where you are doing multiple regressions. So, for example, you might have a company like SocGen, the French bank, and you are trying to estimate simultaneously its sensitivity to the French market factor and also to a banking factor. Ordinarily, in normal times, that wouldn't be problematic at all; it's a very straightforward exercise. When the credit crunch happened and correlations got extremely high and volatilities went up very high, we began to experience cases where, instead of getting two sensible betas out, we would get one very large positive beta on one factor and a very large negative beta on the other factor. This is just an artifact of the multiple regression technique. What was even more bizarre was that if you went forward a couple of weeks, added a bit more data to your sample and dropped some off the back end, you'd sometimes find that you still had a large positive and a large negative where, of course, you're expecting two sensible positives, but now they'd reversed: the one that used to be positive had become negative and the one that was negative had become positive, and we ended up with stock betas flip-flopping around in some cases. I don't want to overstate this. This wasn't happening for every stock, but it was happening for enough stocks to make it worthwhile doing something about it, so we had to develop a technique that would end up giving us sensible, and generally positive, betas for these sorts of exposures. The details are in the presentation that I alluded to, so I won't go into them here. The other thing that we have done as a result of the credit crunch, particularly for the short-term risk model, because market conditions have changed so rapidly, is to try to make the model more responsive to recent changes in the market environment. All risk models, after they're built, are then scaled so as to, if you like, give the right answer in comparison to some broad market portfolio on an ex-post basis. Normally, when we build a model, as we did with the short-term risk model, it is based on one year's worth of daily returns looking back, time-weighted. When we do the scaling, we would scale it to the volatility of a global market portfolio, also over a one-year period. But after the credit crunch happened, and we found that volatilities were changing very rapidly indeed, we changed that. So I think it's now currently scaled to the trailing six-month ex-post volatility of a global market portfolio. That did have the effect, I'm glad to say, of making the model more responsive to recent changes in market volatility, and it is therefore giving quite good forecasts of short-term risk.
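The flip-flopping betas described above are the classic symptom of multicollinearity: when two factor return series become nearly identical, the regression cannot tell their effects apart. The sketch below reproduces the symptom on synthetic data and applies one generic textbook stabilizer (ridge shrinkage). It is purely an illustration of the problem; R-Squared's actual remedy is the one described in the presentation mentioned above, not this.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 60
# Two factor return series that are almost perfectly correlated,
# e.g. a country factor and an industry factor during a crisis.
f1 = rng.normal(0, 0.02, T)
f2 = 0.97 * f1 + rng.normal(0, 0.004, T)
stock = 0.8 * f1 + 0.8 * f2 + rng.normal(0, 0.01, T)   # true betas: 0.8 and 0.8

X = np.column_stack([f1, f2])
ols = np.linalg.lstsq(X, stock, rcond=None)[0]
# With nearly collinear factors the two OLS betas are poorly determined and can
# come out as implausible, even opposite-sign, pairs that change from sample to sample.
print("OLS betas:  ", ols)

# Ridge shrinkage pulls the pair of collinear estimates toward similar,
# moderate values, stabilizing them across samples (a standard remedy,
# not the proprietary technique described in the interview).
lam = 0.1 * np.trace(X.T @ X) / X.shape[1]
ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ stock)
print("Ridge betas:", ridge)
```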

Dan diBartolomeo, President, Northfield Information Services


I don't think we've changed anything we do post-2008. Starting around 2005, and in fact going back earlier, in one case to 1997, we had developed a very different way of looking at risk over shorter time horizons. Rather than look at high-frequency data, as many people do, we chose to continue to use low-frequency data but supplement the observed risks with information from alternative sources such as option markets and other forms of volatility measurement, such as what is known as Parkinson volatility, cross-sectional volatility, and all sorts of other measures which are more readily observable in the short run.
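For readers unfamiliar with the Parkinson estimator mentioned above, here is a minimal sketch of it. It uses each day's high/low range rather than close-to-close returns, so it extracts more information per observation and reacts faster over short windows. This shows only the standard textbook estimator, not how Northfield blends it with option-implied and cross-sectional measures.

```python
import numpy as np

def parkinson_volatility(high, low, periods_per_year=252):
    """Parkinson (1980) range-based volatility estimator.
    Daily variance = mean( ln(High/Low)^2 ) / (4 ln 2); returned annualized."""
    hl = np.log(np.asarray(high, dtype=float) / np.asarray(low, dtype=float))
    daily_var = np.mean(hl ** 2) / (4.0 * np.log(2.0))
    return np.sqrt(daily_var * periods_per_year)
```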

Oleg Ruban, Senior Associate of Applied Research, and Patrick d'Orey, Vice President and Head of Europe Equity Analytics, MSCI Barra
Oleg Ruban: In one way, we haven't really changed our approach that much, because the things the markets have started to focus on following the crisis, for example looking at risk at multiple horizons, as well as having the right set of tools to look at the tails of return distributions, are things we have been offering tools for for quite a number of years. Our view, which we have maintained for a number of years, is that risk is essentially not a single number. So since the early part of this decade, by which I mean, of course, since the end of the '90s, we have been providing long and short versions of many of our models. This helps investors address the issue of analyzing risk accurately at multiple horizons. And we also have a number of tools that can help investors look into the tails of the return distribution. For example, we provide the capability to run correlated [inaudible], which allow users to see the impact of a simulated stress event on their portfolio in a theoretically robust way. We also have a tool for analyzing extreme risk: we've recently released a tool called BXR, which looks at the empirical tails of the distribution and provides a way of calculating the risk of extreme gains and losses in the portfolio. We can offer this kind of tool because we have a very long time series of high-frequency returns for our factors. Typically, when you're going into the tail of the return distribution, the most important problem that you face is, again, one of adequate data, because by definition tail events don't happen all that often. So on an asset level or on a portfolio level, you may not have enough data history on its own to really understand what would happen in the tail of the distribution. But since we have a very long time series of high-frequency returns for the factors, we can use that together with the exposures of the portfolio to the factors and, if you like, combine what we know about the tail of the factor return distribution with the current exposures to give an accurate picture of tail risk given the composition of the portfolio as it is today.


So I think this is essentially what we think about when we think of how recent trends have changed the market's perception of the way people look at risk. And as I've just mentioned, we've been advocating that people take a multi-dimensional view of risk for a number of years.
Patrick d'Orey: Yes. I think the bottom line really is that the market has realized that what we've been saying for many, many years is the right approach to looking at risk. As Oleg said, risk is not a single number. And hence, of the wide range of tools that we've offered for many years, even before this crisis, we find that the market is now using more and more of them in conjunction. So this is really the recognition and the change; our approach to risk analysis, if you like, hasn't changed that much. But I'll give you an example with the recent Barra Extreme Risk toolkit that Oleg has mentioned. We started researching it a few years before the crisis. We recognized that we needed to look more into the tail of the distribution, and as it happened, we released it in a very timely fashion, just when people started looking into the impact of tail risk, so to speak. So I think really the case is that any provider who can give you multiple views of risk, and who can give you multiple ways of assessing and looking at what your portfolio looks like, will very much be the preference of the market.

Sebastián Ceria, Chief Executive Officer, Axioma


Well, when Axioma embarked on this risk venture, in 2005, when we made the acquisition of intellectual property from Goldman Sachs Asset Management with respect to the risk platform, times were very stable; volatility was very low and stable. But we already knew, by looking at history, that this was not an environment that would last forever. And so when we embarked on this venture, we said to ourselves that we really wanted to find ways in which we could distinguish ourselves from our competitors, and we found three broad categories where that was important at the time. One is that we thought it was important to actually re-estimate models on a daily basis rather than on a monthly basis. At the time, the natural objection was: why do you need that level of precision when volatility is very stable and not changing that rapidly? Our answer was that although this was a good environment to be in, it was not going to last forever. It looks like we had a premonition, which we didn't, that volatility was going to change that rapidly, but essentially our ability to re-estimate models on a daily basis was really visionary. The world showed that we were right, that you did indeed need to re-estimate models on a daily basis, when 2008 happened and the crisis of Bear Stearns and then Lehman Brothers brought a sudden rise in volatility. Our models did an exceptional job in capturing that sudden rise in volatility because they were re-estimated on a daily basis. We didn't have to wait until the end of the month, and we didn't use monthly data for any of our models. So that is one way in which I believe recent events have impacted risk model vendors: now everybody is talking about re-estimating models on a daily basis, because everybody has recognized that volatility can change rapidly. Axioma was the first one to do it, and that competitive advantage has been a huge advantage for the clients who subscribed to our models during those times.


The second thing we said when we embarked on this risk venture was that we didn't believe in the religious discussion of fundamental models versus statistical models, of which one was right, whether you should do a fundamental model or a statistical model. We thought that customers were being forced to make a choice that they should not be forced to make; actually, statistical models work well under certain environments and for certain strategies, and fundamental models work well under different market environments and different strategies.
So Axioma decided to provide multiple risk models, both a fundamental model and a statistical model, as part of our platform. Again, at the time, in 2005 and 2006, it looked like this was not needed. It so happened that when the crisis hit, the ability of our clients to have multiple views of risk became, again, a great competitive advantage. Finally, and again it might look like we really knew what the future was going to bring, we didn't at the time; we were just speculating, obviously, like everybody else. We thought that this idea of transparency was crucial, so when we built our fundamental models, we thought it was particularly important to be extremely transparent with our clients and tell them exactly what each factor meant, to give them a very clear definition of how the factors were computed, so that they could easily understand where risk was coming from and, if risk was changing, where those changes were coming from. That was, again, essential, we believe, during the crisis. So, to some extent, has the crisis changed any of the views that Axioma has had? The answer is no. The crisis has fortified, or has actually given a lot of credence to, what we thought were big differentiators and big competitive advantages that our firm had.

Laurence Wormald, Head of Research, SunGard APT


As you might expect, you can't look at this period as a risk manager (and that was my job before I took this job, as a risk manager at a big hedge fund) without reassessing just about everything you were doing, and we understand that our clients are making some pretty big reassessments of their risk management processes. So have we here at SunGard APT. There are things that we'll talk about in terms of time horizons, short-term versus long-term models. In addition to that, we have made two major efforts to improve the way our models work. One of them has to do with the way in which we look at global macro factors across regions and across asset classes. Even in the equities world alone, global factors such as credit effects, interest rate effects, volatility effects, economic growth, investor confidence, and so on clearly affected the risk of equity portfolios. So we've been adding those factors into what we call the estimation core of our models, all of them, so that the principal components factorization approach we use can better get a handle on these emerging, very volatile risk factors. Adding macro factors to the estimation core has really helped us get a grip on the changing nature of factor risk over the last two or three years; that's one thing. The other thing we've undertaken, quite definitely in response to recent events, has been a major effort to back-test and look at the out-of-sample performance of all of our risk models, across all asset classes and across all regions. We're looking at risk performance over 13-week periods, over one-year periods, and over two- and three-year periods now, as this crisis has rumbled on. And we're making the results of those back-tests transparent to our clients, so they can see for themselves where our models worked well and, in some cases, where they didn't work quite so well. So I think transparency is a very important aspect of risk management, and it will be even more so going forward. And the transparency that goes with proper scenario analysis and proper back-testing of your models, we're aiming to make that available to our clients.
We've been working with what we call our short-term volatility model, which has a decay rate that our clients typically set to a half-life of approximately 13 weeks, or a quarter. That has made the model more responsive than our medium-term models during the real turbulence and changeable volatility of the last couple of years. So it is important, even for regulatory reasons, to understand how a decay rate or influence function changes the forecast.
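To see how a half-life translates into a decay rate, the sketch below computes an exponentially weighted volatility from weekly returns: a 13-week half-life means an observation a quarter old gets half the weight of the newest one, which is what makes the estimate more responsive than an equally weighted long window. This is a generic illustration of the decay-rate idea, not APT's exact estimator.

```python
import numpy as np

def ewma_volatility(weekly_returns, half_life_weeks=13):
    """Exponentially weighted volatility with a user-chosen half-life.
    Assumes weekly_returns is ordered oldest to newest."""
    r = np.asarray(weekly_returns, dtype=float)
    decay = 0.5 ** (1.0 / half_life_weeks)          # per-period decay factor
    weights = decay ** np.arange(len(r))[::-1]       # newest observation weighted most
    weights /= weights.sum()
    mean = np.average(r, weights=weights)
    return np.sqrt(np.sum(weights * (r - mean) ** 2))
```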


Part 3: Looking at horizon length, does a long time horizon still prove worthwhile?
Traditionally, time horizons in risk management were long. What impact has the credit crisis had on the use of a long-horizon strategy? Do these providers believe that investors can benefit from using risk analysis over shorter or multiple time horizons?

Dan diBartolomeo, President, Northfield Information Services


I think there are really two aspects to that. One is that for large asset owners, big pension funds, sovereign wealth funds, it is economically in their interest to be long-term investors, and to extract rent from liquidity providers for being long-term investors who are willing to sit through volatility when markets are volatile. And we would suggest that for those kinds of investors, it is in their interest to continue to use long-horizon models. On the other hand, both asset owners who invest in things like leveraged hedge funds, and the people who operate such leveraged funds, have a problem. And the problem is that all of our portfolio theory is based on Markowitz's original conception of the future being one long period. Obviously, if you can go broke and disappear in the middle of the period, then that's not a very good conception. As a result, for investors or asset managers who have significant risks of non-survival, basically liabilities at call, hedge funds that are margined, for example, or things of that nature, clearly a shorter-horizon risk estimate is appropriate, simply because it doesn't matter what your risks are over the next year if you only survive for the next two weeks. And so we supply, for essentially all of our models, two different versions, one of which focuses on a 10-day horizon and one of which focuses on a one-year horizon.

Oleg Ruban, Senior Associate of Applied Research, MSCI Barra


Again, the most important thing, I would say, is to have the appropriate model given the investment process and the decision process that you are facing. Different measures are likely to be appropriate in different circumstances. We've been providing, and advocating that investors use, both long and short versions of our models for a number of years, where the long versions have a longer half-life and are appropriate for, if you like, looking at a longer investment horizon, and the short versions have a shorter half-life and are appropriate for looking at a more immediate picture of risk. And the other thing that I would mention in this regard, in terms of multiple views of risk, is that we really need to go beyond volatility in assessing risk. And we see our clients and investors, including private investors, increasingly becoming interested in risk measures that really look at the tail of the distribution, not just different horizons but also different ways of analyzing the whole distribution, and not just the central part of it.

Jason MacQueen, Founder, R-Squared


I am not quite sure I would describe it as multiple time horizons, but I do see a benefit for a medium- to long-term horizon investor in looking at the short-term structure of the portfolio if they're about to undertake a major rebalancing for whatever reason. It used to be the case, when markets were less turbulent than they have been recently, that is, in normal times, that if you had, say, a 12- to 18-month horizon, you could more or less forget about looking at short-term risk because it really wasn't relevant. You were just generally chugging along, rebalancing from time to time, and what you really needed to think about was the risk structure of the portfolio based on a medium- to long-horizon risk model.
One of the interesting things that has been happening in the last couple of years is that we've had a number of investment managers who do have a medium- to long-term horizon wanting to look at the short-term risk structure of their portfolios, with a view to avoiding being caught out, if you like, by exposures to short-term effects when they are about to undertake a rebalancing, or to invest new money, or indeed to raise money by selling things. So we now have a number of people using both a long-term model and a short-term model for when they're going to be doing trading, and they've found that quite useful. Our main business is building customized risk models that match a particular manager's investment process; they could be global models, or indeed regional models or single-country models, we've built all sorts. They could have different horizons: short-term, medium-term, long-term, and so on. What we've tried to do with the short-term risk model that's available on FactSet is to build something that will be useful to a fairly wide range of different types of investors. The model's coverage is extremely broad; I think it covers about 40,000 stocks worldwide. It tries to recognize the obvious different regions and countries around the globe, emerging markets and the developed markets obviously, and also looks at industry exposures and some active factors, what some people call style factors. There is always a trade-off in this sort of exercise between building something that will be useful to a broad number of different managers on the one hand, and building something that will be most useful for a particular type of manager on the other. But the short-term risk model we've built, as I say, tries to cover all of the regions that people are going to invest in. It includes the most common of the style factors that people use: value, growth, liquidity, momentum, and so on. And it's intended to give people an idea of what their short-term risk is, up to a horizon of about one, maybe two, months.

Laurence Wormald, Head of Research, SunGard APT


You are right that traditional assumptions about time horizons got shaken up during the recent financial crisis. For very good reasons, we think that clients should indeed be focusing on shorter or multiple time horizons. It's very important that you don't just assume that things are going to smooth out over the long term. So we have been working on different kinds of models to help clients understand the different time horizons and the risks associated with them. There is still an importance, across the board, on a time horizon of, say, a quarter; I think almost everyone can agree that quarterly performance and quarterly risk are important. So, if you like, our flagship models are still designed, as they always have been, to give good forecasts of risk on the timescale of a quarter, a year, or two years. Those we now call the medium-term models. We have also developed two types of models with shorter time horizons. One type is still based on weekly data but with an influence function applied; the other type is based on daily data. So we now have a daily model, useful for those with very short time horizons of, let's say, days to weeks, and a short-term volatility model, which can be tuned to meet the time horizon of clients, but is basically for perhaps weeks to a quarter. So we are trying to cover the whole spectrum of time horizons: from those hedge funds which are really concerned about risks measured daily and the changeability of those risks, to the majority of our fund managers who would naturally be thinking in terms of a quarter but perhaps like to get a handle on slightly shorter time horizons than that, to those who continue to think in terms of quarters to years with our medium-term models.

Sebastián Ceria, Chief Executive Officer, and Olivier d'Assier, Managing Director for the EU and Asian Markets, Axioma
Sebastián Ceria: I think that investors always like to say that they're long-term investors, until a crisis happens and suddenly everything changes rapidly, in which case they become very, very focused on the short term. So everybody is a long-term investor as long as things are stable, and everybody becomes much more short-term focused when there is a crisis or a sudden change in the market structure. Our belief is that really long-term assessment of risk has become less relevant; our clients and prospects are much more focused lately on what we would call a medium horizon, that is, a three- to six-month horizon for estimating risk. We believe the bulk of the market is really looking at those kinds of timeframes. They are short enough to let them capture rapid changes in the volatility environment, and long enough to let them get a sense of where risk is growing, more from a long-term investor's perspective. We also believe that there is a need for short-term models, that is, for strategies that invest on a much shorter horizon, say a one-month time horizon, and so we have, for some markets right now, short-term versions of those models.

Olivier d'Assier: You can think of it this way: long-term investors tend to think of their investment horizon when they think in terms of return. They take a long-term view on certain assets that they want to hold. But they will make tactical changes to their portfolios based on short-term risk changes in those portfolios. So the short-term risk horizon is still very relevant to them, because they will decide to adjust a position on a tactical basis based on what is currently happening in the market, even though they may have a long-term view on the stock.


Part 4: Can you do it all with just one model?

Is one model really enough for a detailed view of risk? Do the providers believe in asset-specific models for accurate analysis of fixed income or blended portfolios?

Jason MacQueen, Founder, R-Squared


I have to say I regard this as something of a trick question. The short answer to whether a single risk model can provide a detailed view would be yes, but that's a misleading answer in the following sense: although you could call it a single risk model, what it actually is, is a number of different risk models bolted together. So, for example, suppose you had a multi-asset portfolio where you had some equities, some fixed income securities, and maybe some commodity futures; it would be possible to build a single risk model which analyzed the whole lot. But what you would actually find, if you looked at the model, is that there was an equity piece analyzing the risk exposures of the equity portion of the portfolio, a fixed income piece likewise for the bonds, and perhaps a commodity piece for the commodities. And the way these things link together is that any particular equity position, for example, would only have exposures to the equity factors, fixed income securities would only have exposures to the fixed income factors, and commodities would only have exposures to the commodity factors; but of course, the factors of the three different asset classes would co-vary with each other, so equity factors would co-vary with fixed income factors and probably with commodity factors. So the linking happens through the covariance of the different factors from the different sub-models, if you will. So I think it's something of a trick question, or perhaps a misleading question if you like. The short answer is yes, but that gives you the wrong impression, because what's actually going on is that you have a detailed risk model for each asset type underneath, and they are simply linked together.

Oleg Ruban, Senior Associate of Applied Research, MSCI Barra


Of course, in our view the most appropriate way to analyze different asset classes is through different sets of factors. One set of factors is likely to be relevant to the risk of a fixed income portfolio, and a different set of factors is likely to be relevant to the risk of an equity portfolio. However, it is also important to have an overall view of risk that is computed using the same methodology, and the way to do that is to combine these single-asset-class models into a cross-asset-class model. So what we do, for example, in our integrated model is allow different asset class returns to be explained by different sets of factors, and then we aggregate risk in a multi-asset-class portfolio by robustly accounting for correlations between the different asset classes. So the answer is really that both things, if you like, make sense. On the one hand, of course, different types of factors need to be used to analyze returns in different asset classes. On the other hand, you then need a combined picture of risk, so you need to find ways to robustly combine these different models into a single model that allows you to look at the risk of your whole portfolio.
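The "sub-models linked through factor covariance" structure that both answers describe can be written down compactly: each asset loads only on its own asset class's factors (a block-diagonal exposure matrix), and cross-asset-class risk enters solely through the full factor covariance matrix. The sketch below uses toy numbers and is not any vendor's actual model.

```python
import numpy as np

B_eq = np.array([[1.0, 0.3],      # 2 equities x 2 equity factors
                 [0.8, -0.2]])
B_fi = np.array([[5.0]])          # 1 bond x 1 fixed income factor (duration)

# Block-diagonal exposures: equities have zero exposure to the bond factor
# and vice versa; the linkage comes only from the factor covariance F.
B = np.block([
    [B_eq, np.zeros((2, 1))],
    [np.zeros((1, 2)), B_fi],
])
F = np.array([[0.040, 0.010, -0.002],    # factor covariance (equity f1, equity f2, rates)
              [0.010, 0.030,  0.001],
              [-0.002, 0.001, 0.0004]])
D = np.diag([0.02, 0.03, 0.001])         # specific (idiosyncratic) variances

cov_assets = B @ F @ B.T + D             # full multi-asset covariance matrix
w = np.array([0.4, 0.2, 0.4])            # portfolio weights across the 3 assets
portfolio_vol = float(np.sqrt(w @ cov_assets @ w))
print(portfolio_vol)
```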

Dan diBartolomeo, President, Northfield Information Services


Let's attend to what you're trying to accomplish. If what you're trying to do is run a specialty portfolio ("I want the best possible risk model to run a portfolio of Australian bonds"), then by all means you should have an Australian bond model. On the other hand, if the goal is to have an assessment of risk across an entire enterprise that may involve multiple portfolios, multiple asset classes, some illiquid asset classes like real estate or venture capital and private equity, there is no statistically legitimate way, in my view, to take 50 different risk models with 50 different sets of factors and somehow try to aggregate that information together. There are just too many moving parts for such a process to be statistically stable. And as a result, you have to come up with a simpler model that allows you to describe any investment asset in the world within a limited number of factors. And generally, if you're clever, you can find ways of representing even a complex security within a relatively small number of factors.
If you took something like, let's say, a corporate convertible bond issued in a foreign currency, you could think of that as a riskless bond, plus a credit default swap, plus an equity warrant, plus a currency swap. So you can take the complex security, decompose it into a series of sub-securities, which can then be analyzed in a relatively simple model, and then add the pieces back together. And if you follow that paradigm, you can pretty much analyze anything in the world, from any country in the world, and do so within a relatively consistent, relatively parsimonious single model.
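The decomposition paradigm described above amounts to representing a complex security as a bundle of simpler legs and summing their exposures on a small, common factor set. The sketch below illustrates the bookkeeping only; the factor names and exposure numbers are hypothetical placeholders, and the signs and magnitudes are purely illustrative, not Northfield's actual factor set or mapping.

```python
from dataclasses import dataclass

@dataclass
class Leg:
    name: str
    exposures: dict   # factor name -> exposure (placeholder values)

def aggregate(legs):
    """Sum leg-level exposures into one exposure vector for the whole security."""
    total = {}
    for leg in legs:
        for factor, exp in leg.exposures.items():
            total[factor] = total.get(factor, 0.0) + exp
    return total

foreign_convertible = [
    Leg("riskless bond",       {"rates_duration": 4.5}),
    Leg("credit default swap", {"issuer_credit_spread": 3.8}),
    Leg("equity warrant",      {"issuer_equity": 0.6, "equity_volatility": 0.2}),
    Leg("currency swap",       {"fx_local_vs_base": 1.0}),
]
print(aggregate(foreign_convertible))
```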

Laurence Wormald, Head of Research, SunGard APT


We believe in supporting the specialists with models that are dedicated to their asset class. So we can provide dedicated commodity and credit models, for example, and even hedge fund and mutual fund models. There is a lot to be said, though, for having a multi-asset-class approach, for several reasons. One is that if you want to aggregate risk within a firm or organization by looking at multiple funds and then rolling the risk up to the organization level, you need a multi-asset-class framework for that. Another is that if you are running a fund of funds and you've got funds that are in different asset classes, again you need a model that's capable of bringing together the risk at the top level. And lastly, we think that there are many strategies being run these days which really do aim to take advantage of correlations and performance differences between different asset classes, and so it's important to provide a model that does all that. So again, we are following our clients, and we have a global multi-asset-class model we call [inaudible] bonds, which can actually be configured to include commodities, funds, FX, and credit instruments such as CDS and iTraxx. That model is being well used by our clients, including our FactSet clients, but there are those who have specialist needs, and for those we are also building specialist models for the different asset classes, just as we do, as I mentioned before, for certain regions like Greater China or Frontier Markets. And the key, again, as I said in answer to the other question, has to do with the flexibility of attribution that can be provided; different explanatory factors will work best in different specialist models. And when you're using a global model, we now provide well over 200 explanatory factors that clients can select from to do their own factor-based attribution. So, to summarize the answer, we do think a single [inaudible] model can provide a good answer, especially if the question is based on an aggregated view of risk. But we provide specialist models for those who are trading just inside one asset class, and we aim to have the widest possible coverage of asset classes of any factor model provider.

Sebastián Ceria, Chief Executive Officer, Axioma


Well, as we mentioned before, we are big believers in multiple models. So, if you just restrict yourself to equities for the moment, we believe that having both a statistical and a fundamental model can help you better understand risk, because a statistical model is going to capture certain things that are more transient in the market, while the fundamental model is going to capture more of the long-term risks that are present and prevalent in the marketplace.
So we believe that having multiple views of risk is important, and that's why Axioma does that. When you subscribe to Axioma, you're subscribing to both the fundamental and the statistical risk model, and this, we believe, is a key advantage.

Part 5a: Which risk measures work when dealing with extreme losses?
Which measures of risk do the providers favor when measuring extreme losses? The question is in two parts. First, can measures such as duration, tracking error, value at risk, or conditional value at risk adequately capture the risk of extreme losses? Second, how important is it to the various providers to have a multi-objective approach to risk measurement?

Oleg Ruban, Senior Associate of Applied Research, and Patrick d'Orey, Vice President and Head of Europe Equity Analytics, MSCI Barra
Oleg Ruban: It is very important to realize that volatility is a complete measure of risk only if returns are normally distributed. As we know from a wide range of academic evidence, as well as work that we've done internally at MSCI Barra, this is not the case, and therefore there is a clear need to go beyond volatility in risk analysis. There are a number of ways you can do this, and the measures that you highlighted, such as value at risk and conditional value at risk, especially the latter, can definitely be used to analyze the tails of the return distribution. But an important thing to highlight here is the difference between the measures and the distribution; it's not that the measures that have been used in the past are necessarily inaccurate. Things like value at risk and conditional value at risk definitely have their place in portfolio construction and risk management. But even more important is the methodology you use to compute the measure. If, in computing the measure, you still rely on the assumption that your returns are normally distributed, then that is not very good. You need to accurately capture the empirical distribution of your returns when you are computing any measure, any risk measure in fact.

Patrick d'Orey: Just to add one point here, this, I think, is a very key part, because there has been a lot of talk around the measures, but not enough talk around how these measures are calculated. So take the much-maligned VaR, for example: there is nothing wrong with VaR per se, but if you assume that your underlying returns follow a normal distribution, then that VaR number isn't going to tell you much. And the same thing goes for conditional value at risk. The key thing, though, is that what we have been able to do is not only look at these distributions as they are, without fitting some theoretical distribution around them, but also decompose these measures, because it's very important that you don't just have a single top-line measure. You have got to identify the causes of it.

Sebastián Ceria, Chief Executive Officer, and Olivier d'Assier, Managing Director for the EU and Asia Markets, Axioma
Sebastián Ceria: So first, again, let's answer this question from the point of view of equities. When the crisis happened, we were already working on an approach to deal with risk measures that take into consideration downside risk, beyond just variance, or tracking error, which is the measure that we use for our equity portfolios. Our experience has been that over the last year, after the crisis, our clients and prospects have become a lot less focused on that; they believe that the world is moving towards a more normal environment where looking at extreme risk for straight equities is not as important.
Now, the question becomes very different if you start having securities in your portfolio that naturally have asymmetrical returns, for example options. If you hold options, then obviously the traditional measures of tracking error are not enough, because they're not going to be able to capture the asymmetric nature of the returns of non-linear instruments such as options. It is also important to note that for options, delta approximations, which is what most providers use, are just not enough. So when you have options, we believe these downside risk measures become very relevant, and Axioma is right now working on an approach to model these exactly, without relying on delta approximations. This is something that will be available later this year.

Olivier d'Assier: I think it's also important to remember that there is a difference between being able to measure something and being able to manage it afterwards. So what Axioma is trying to do is not only provide a method in our risk models to accurately measure the risk of an equity portfolio that includes non-linear assets such as options, but also to make it possible to manage that risk by having that methodology incorporated in our optimization and portfolio rebalancing tools, so that those exposures can actually be properly and exactly hedged, if that is the desire of the manager.

Sebastián Ceria: Yes, just to reiterate the point that Olivier made, which is a very, very important and good point: we separate measurement from managing. If you can measure but you do not manage, then you're in trouble. You want to measure, but you also want to be able to correct if what you measure is not the desired quantity or the desired level of risk that you actually want to have. That's why at Axioma we believe it's important, whenever we come out with something that allows you to measure, to also come up with something that allows you to manage or optimize that particular portfolio with those securities.

Dan diBartolomeo, President, Northfield Information Services


No set of static measures is going to be sufficient. The issue isnt about what measure you use; I think thats kind of silly. What matters is the probability distribution under which you assume those measures operate. If I asked, for example, whats the likelihood of a ten standard deviation event, which I think most people would intuit as being very extreme: if you assume security returns are normally distributed, the likelihood of that is one in many trillions. Its not very likely to happen, and yet weve observed it numerous times in recent years. How can that be? The answer is simple. The returns arent normally distributed, and if theyre not normally distributed and we dont know what the right distribution assumption is, then the likelihood of a ten standard deviation event is one in 200. And for a probability of one in 200, the number of times that we have seen such extreme events is not at all out of the ordinary. So its not a question of what measure you use. Its having that measure calculated under a probability distribution that actually makes sense with what we know about the world. While it is mathematically convenient to assume the normal distribution, and there are good reasons to use it under typical conditions, there are an awful lot of reasons not to assume that normality holds under extreme conditions. Exactly what the right probability distribution assumption is, is up for debate, but certainly under most circumstances we can reject the view that returns are normally distributed, particularly over shorter time horizons. If we start thinking about quarters of a year or years, we would not typically reject the normality assumption. On the other hand, when we start talking about days or hours, clearly the data isnt normal and no one should expect it to be normal.
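The ten-standard-deviation example can be checked directly. The sketch below contrasts the tail probability of such a move under a normal assumption and under a fat-tailed Student-t assumption; the t distribution and its three degrees of freedom are chosen purely for illustration, not as the distribution Northfield uses.

    # Sketch: likelihood of a "ten sigma" move under different distributional assumptions
    from scipy.stats import norm, t

    k = 10.0                                # size of the move, in standard deviations

    p_normal = 2 * norm.sf(k)               # two-sided tail probability under normality
    nu = 3.0                                # a fat-tailed Student-t with 3 degrees of freedom
    scale = ((nu - 2) / nu) ** 0.5          # rescale so the t variate has unit variance
    p_student = 2 * t.sf(k / scale, df=nu)

    print(f"P(|move| > 10 sigma), normal:    {p_normal:.3e}  (~1 in {1/p_normal:.1e})")
    print(f"P(|move| > 10 sigma), Student-t: {p_student:.3e}  (~1 in {1/p_student:.0f})")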

Laurence Wormald, Head of Research, SunGard APT


We do think its absolutely vital to have a multi-objective approach towards measurement. Tracking error, for example, is not useful at all in adequately capturing the risk of extreme losses. Duration is a pretty poor measure as well. Value at risk and conditional value at risk are much more important and useful measures for looking at the downside, and therefore we have focused on providing those as well. So the first thing to say is that I think you need a 360-degree view of your risk, and to get that you need to look at all the risk measures available: conventional beta, conventional durations, and volatilities or tracking errors depending on whether its an institution or a hedge fund, while conditional value at risk is probably the best measure for the downside. But then you have to ask: what modeling assumptions are you making to try to actually capture downside risk? It is the fat tails of the downside risk which make conventional measures based on tracking error not very useful.

So within the multi-objective approach to risk measurement, you need to have a robust modeling framework underlying it. The robust estimation of deviation from the benchmark, for example, is what we call tracking at risk. Tracking at risk is something only available in the APT system, and it provides a measure of prospective deviation from benchmark returns which is not conditional on the Gaussian assumption. Its those kinds of things which can also be very, very useful in thinking about extreme loss. Monte Carlo ultimately is the tool that we use to estimate what we call our fat-tail measures of VaR, conditional VaR, and incremental VaR as well. Those Monte Carlo methods are a vital addition to the normal factor modeling approach, also when dealing with derivative products. So when you have a swap, an option, a put, a call, we can handle a whole variety of quite complex non-linear option payoff products, including things like constant maturity swaps and inflation products, because within our Monte Carlo we are properly repricing those based on our non-Gaussian approach to the distribution of the underlyings. Its gotten a bit technical here, but I think its very, very important when talking about downside risk to understand the modeling assumptions that you are using, because you may see a VaR number that could change very dramatically depending upon the modeling assumptions that youve made.
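As an illustration of the general idea of Monte Carlo downside-risk measures with non-Gaussian shocks and full repricing of non-linear instruments, here is a minimal sketch with hypothetical parameters; it is not the model or calibration that APT uses.

    # Sketch: Monte Carlo VaR/CVaR with fat-tailed shocks and exact option repricing
    import numpy as np
    from scipy.stats import norm

    def bs_call(S, K, T, r, vol):
        d1 = (np.log(S / K) + (r + 0.5 * vol**2) * T) / (vol * np.sqrt(T))
        d2 = d1 - vol * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    rng = np.random.default_rng(1)
    n_sims, horizon = 100_000, 10 / 252                 # 10-day horizon
    S0, K, T, r, vol = 100.0, 95.0, 0.5, 0.01, 0.25

    # Fat-tailed (Student-t) shocks for the underlying, scaled to the target volatility
    t_shocks = rng.standard_t(df=4, size=n_sims)
    t_shocks *= vol * np.sqrt(horizon) / np.sqrt(4 / (4 - 2))
    S_scen = S0 * np.exp(t_shocks)

    # Portfolio: long 100 shares, short a call on 100 shares; the option is repriced exactly
    pv0 = 100 * S0 - 100 * bs_call(S0, K, T, r, vol)
    pv_scen = 100 * S_scen - 100 * bs_call(S_scen, K, T - horizon, r, vol)
    pnl = pv_scen - pv0

    alpha = 0.99
    var = -np.quantile(pnl, 1 - alpha)
    cvar = -pnl[pnl <= -var].mean()
    print(f"10-day 99% VaR:  {var:,.0f}")
    print(f"10-day 99% CVaR: {cvar:,.0f}")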

Jason MacQueen, Founder, R-Squared


I think the short answer is probably no. Just to be clear, I dont think things like duration, tracking error, value at risk, or conditional value at risk adequately capture the risk of extreme losses. They will give you approximations, but we know from recent and indeed past experience that those approximations can be very rough and can be very badly wrong in some cases. Our own view at R-Squared is that the solution to this problem is to have a model for normal times and a model for turbulent times. This is an idea that Mark Richmond has proposed, so I want to give him credit for it; he gave a presentation at one of the London Quant seminars last year where he talked about this idea. So I think in the future what well have is risk models that capture the behavior of normal times, which people can use for all the normal risk monitoring, risk measurement, and optimization sorts of things, and then well probably have models which are deliberately designed to tell you what your exposures would be to extreme events of one sort or another. I dont think a normal risk model will ever do a terribly good job of giving you accurate estimates of what might happen if the world suddenly collapsed.
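The normal-times versus turbulent-times idea can be sketched very simply: estimate separate risk parameters conditional on a turbulence indicator. The example below is purely illustrative and is not the actual method of R-Squared or Mark Richmond; the indicator and data are hypothetical.

    # Sketch: separate covariance estimates for normal and turbulent regimes
    import numpy as np

    rng = np.random.default_rng(2)
    n_days, n_assets = 2000, 5
    returns = rng.normal(0.0, 0.01, size=(n_days, n_assets))
    returns[1500:] *= 3.0                       # hypothetical turbulent sub-period

    # Simple turbulence flag: cross-sectional dispersion above its 90th percentile
    daily_dispersion = returns.std(axis=1)
    turbulent = daily_dispersion > np.quantile(daily_dispersion, 0.90)

    cov_normal = np.cov(returns[~turbulent], rowvar=False)
    cov_turbulent = np.cov(returns[turbulent], rowvar=False)

    w = np.full(n_assets, 1.0 / n_assets)       # equal-weight portfolio
    vol_normal = np.sqrt(w @ cov_normal @ w)
    vol_turbulent = np.sqrt(w @ cov_turbulent @ w)
    print(f"Portfolio vol, normal regime:    {vol_normal:.4%}")
    print(f"Portfolio vol, turbulent regime: {vol_turbulent:.4%}")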


Part 5b: Do you favor fat tails, MCVaR, etc., or do you favor a traditional exposure-led approach to risk?
The next part of our question: at a recent FactSet-hosted event, our users split into two camps: those who favored modeling measures such as fat tails and Monte Carlo VaR, and those who favored a traditional exposure-led approach to risk. Which do you favor?

Oleg Ruban, Senior Associate of Applied Research, MSCI Barra


Well, actually both of these groups are correct in some ways. We believe that these two approaches are totally complementary. In fact, our approach to modeling tail risk is through the factor return series. As already mentioned, factor return series are typically available to us for much longer than individual asset return series. Hence, exposures to these risk factors remain very important; in fact, they are the key drivers of risk in the tails of the return distribution as well.
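A minimal sketch of the idea of reading tail risk off factor return series, which have a much longer history than most individual assets: map hypothetical portfolio exposures onto a simulated factor-return history and take the empirical tail measures. This is illustrative only, not the MSCI Barra methodology.

    # Sketch: empirical tail measures from exposures applied to a long factor history
    import numpy as np

    rng = np.random.default_rng(3)
    n_days, n_factors = 5000, 4                          # long, hypothetical factor history
    factor_returns = 0.005 * rng.standard_t(df=5, size=(n_days, n_factors))

    exposures = np.array([0.9, 0.3, -0.2, 0.1])          # hypothetical factor exposures
    specific_vol = 0.004                                 # hypothetical daily specific risk
    portfolio_ret = factor_returns @ exposures + rng.normal(0.0, specific_vol, n_days)

    alpha = 0.99
    var = -np.quantile(portfolio_ret, 1 - alpha)
    cvar = -portfolio_ret[portfolio_ret <= -var].mean()
    print(f"Daily 99% VaR:  {var:.4f}")
    print(f"Daily 99% CVaR: {cvar:.4f}")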

Dan diBartolomeo, President, Northfield Information Services


No set of static measures is going to be sufficient. The issue isnt about what measure you use; I think thats kind of silly. What matters is the probability distribution under which you assume those measures operate. If I asked, for example, whats the likelihood of a ten standard deviation event, which I think most people would intuit as being very extreme: if you assume security returns are normally distributed, the likelihood of that is one in many trillions. Its not very likely to happen, and yet weve observed it numerous times in recent years. How can that be? The answer is simple. The returns arent normally distributed, and if theyre not normally distributed and we dont know what the right distribution assumption is, then the likelihood of a ten standard deviation event is one in 200. And for a probability of one in 200, the number of times that we have seen such extreme events is not at all out of the ordinary. So its not a question of what measure you use. Its having that measure calculated under a probability distribution that actually makes sense with what we know about the world. While it is mathematically convenient to assume the normal distribution, and there are good reasons to use it under typical conditions, there are an awful lot of reasons not to assume that normality holds under extreme conditions. Exactly what the right probability distribution assumption is, is up for debate, but certainly under most circumstances we can reject the view that returns are normally distributed, particularly over shorter time horizons. If we start thinking about quarters of a year or years, we would not typically reject the normality assumption. On the other hand, when we start talking about days or hours, clearly the data isnt normal and no one should expect it to be normal.

Sebastian Ceria, Chief Executive Officer and Olivier dAssier, Managing Director for the EU and Asia Markets, Axioma
Sebastian Ceria: We believe that very much depends on the instruments that you hold in your portfolio. If you hold a portfolio of straight equities, we believe that the right approach is the second one, the exposure-based approach. If you hold a portfolio that has equities and options, or equities and other instruments that have non-linear payoffs, we believe that downside risk measures are what is appropriate. So it has more to do with the kinds of instruments that you hold in your portfolio than with the risk environment, I believe, because risk environments are temporary, and you have to be very careful about changing your risk measure just because something is happening in the market. I mean, investors dont like that. You want to have risk measures that are consistent regardless of the environment you are in.
Olivier dAssier: And to add to Sebastians point, we believe that risk management isnt a one-size-fits-all type of practice; its a discipline where you have to find the one size that fits just you. So our goal is to provide investors with solutions that are extremely flexible, so that they can choose the one model and the one method that best represent their investment process and the sources of risk that they have in their portfolios.

Laurence Wormald, Head of Research, SunGard APT


Again, it really depends on the investment strategy, the investment objective. If your fund is designed as a nearly passive product or an enhanced index product where your active risk is meant to be very small, then clearly, by construction, you are going to be more worried about deviations close to the mean or close to zero, because you are not taking big active risk positions. And if you are long only and you are close to the S&P, then tracking error, beta, and so on are still good measures to use. Well never throw those away. But as clients use more derivatives in their portfolio constructions, as clients go into multi-asset class portfolios, as clients take long-short positions and really are chasing alpha, it becomes more and more important to have a risk approach which uses downside risk in a properly robust way. So Monte Carlo is essential for hedge funds that are running absolute return products. And anyone who is leveraged, anyone who has exposure to derivatives, our view is that they should be looking at the Monte Carlo numbers; they should be looking at downside risk numbers; and they should have a model like ours which really takes into account the observed fat tails of basket returns. But we continue to support and believe in managers who are looking to keep risk quite small, who are looking to build products that are carefully designed to stay close to benchmarks, and for those people of course we wouldnt tell them that TVaR is the most important thing in the world. The tracking at risk number that we produce, the robust estimate of deviation from benchmark, has been over the last two years a very, very good forecaster of actual deviations from benchmark, so there, a semi-parametric approach to a conventional exposure-type risk measure has been very useful for clients.

Jason MacQueen, Founder, R-Squared


Before I give you the one that we favor, I just want to say it depends on what youre trying to do with your risk measurement. A lot of managers appear only to be interested in measuring the total risk of their portfolio or their book, and for those sorts of exercises, Monte Carlo simulation or similar techniques might give you a reasonably good answer. If you actually want to manage the risk of your portfolio, as opposed to merely measuring what the overall amount of risk is, then I think you need to have what you are calling here exposure-led measures of risk. It always seems to me that simply knowing how risky a portfolio is overall is mildly interesting, but not really as interesting as knowing where the risk is coming from in terms of the particular bets inside the portfolio. You need to know how big they are, how big they are in relation to each other, the extent to which they correlate with or diversify each other, and so on.
And only then can you really start to manage the risk inside the portfolio, which of course is what Ben Graham said investing was all about. So I favor an approach to risk measurement which enables managers to do risk management, which means using some sort of factor model, so that you can see the actual bets that youre making inside the portfolio.
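A minimal sketch of what an exposure-led view provides: given factor exposures (the bets), a factor covariance matrix, and a specific variance, decompose active risk into the contribution of each bet. All numbers below are hypothetical.

    # Sketch: exposure-led decomposition of active risk by factor bet
    import numpy as np

    factors = ["Market", "Value", "Momentum", "Size"]
    b = np.array([0.05, 0.30, -0.20, 0.10])           # hypothetical active factor exposures
    F = np.array([                                     # hypothetical annualized factor covariance
        [0.0225, 0.0020, -0.0010, 0.0015],
        [0.0020, 0.0100, -0.0030, 0.0010],
        [-0.0010, -0.0030, 0.0144, -0.0005],
        [0.0015, 0.0010, -0.0005, 0.0064],
    ])
    specific_var = 0.0009                              # hypothetical residual variance

    factor_var = b @ F @ b
    active_risk = np.sqrt(factor_var + specific_var)

    # Contribution of each bet to total active variance (contributions sum to the total)
    contrib = b * (F @ b)
    for name, c in zip(factors, contrib):
        print(f"{name:<9} {c / active_risk**2:6.1%} of active variance")
    print(f"Specific  {specific_var / active_risk**2:6.1%} of active variance")
    print(f"Total active risk: {active_risk:.2%}")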


Part 6: What is your competitive advantage?

Our last question in the series: whats your competitive advantage? With user choice expanding in the market, what makes Axioma stand out from Barra, or APT or R-Squared distinguish itself from Northfield?

Sebastian Ceria, Chief Executive Officer and Olivier dAssier, Managing Director for the EU and Asia Markets, Axioma
Sebastian Ceria: I think weve touched upon most of those points during this discussion, and I would reiterate that the ability to re-estimate models daily is essential in this marketplace. Axioma is the only provider that has multiple models that are re-estimated daily across the globe and released many hours before the respective local market opens. So not only [inaudible] to re-estimate models on a daily basis, we also believe it is important to deliver those models significantly ahead of the market open, so that portfolio managers and traders can actually make their decisions before those markets open and they need to trade on the decisions theyve made. So daily re-estimation of all elements of the risk model, the covariance matrix, specific risk, and exposures, we believe is crucial for any end user of risk models today.

The second point is the notion that it is important to have multiple views of risk. Multiple views of risk means not just different horizons but also having both a fundamental and a statistical view of risk, because they capture different phenomena going on in the market. We are big believers in having regional models and focused models that address the idiosyncrasies of particular markets. We are also big believers in transparency, and this is a big distinguishing factor. We dont think that factor models have to have factors built from hundreds or tens of descriptors combined with very unique and secret formulas. We believe that clients demand transparency and that risk providers should provide it. Those are some of the advantages we believe our risk models have with respect to our competition, and why the users of FactSet should consider Axioma, even if there are already other established providers on the FactSet platform.

Finally, we do believe that it is very important to think about risk in conjunction with how that risk measurement is going to be used to manage your portfolio. Axioma is, again, a big believer in the interaction of risk and portfolio construction, risk and optimization, and thats why later this year Axiomas Optimizer will also be available on the FactSet platform. The Axioma Optimizer has some unique features, like the Alpha Factor Method that corrects for risk under-estimation in optimized portfolios, which makes use of phenomena we have observed in how risk models interact with optimizers. So those are some of the differentiating factors that we believe make Axioma unique. Obviously, all of these factors come together with the observation that Axioma is a firm that is innovating in this space. Axiomas principles are ones of great innovation, advancing the state of the art, recognizing that the markets are changing, and that risk providers have to change and continuously innovate. Axioma has a culture of flexibility; we want to give you the flexibility to do what you want to do, and we dont want to impose a particular view on the client. And finally, a culture of client service: we think that clients deserve and demand a very dedicated client service from people who really understand whats going on in the marketplace.


Olivier dAssier: The difference between Axioma and most other vendors, if not all, is that we do not believe that the measurement of risk, the rebalancing or construction of a portfolio, and the estimation of returns for that portfolio are separate practices; they are disciplines that all come together in the portfolio management process, and we want to build solutions that incorporate the interaction among those three pieces.
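To illustrate the difference between a fundamental and a statistical view of risk mentioned above, here is a minimal sketch of a statistical factor model estimated from daily returns via principal components. It is purely illustrative, with simulated data, and is not the estimation process Axioma uses.

    # Sketch: a statistical factor model estimated from daily returns via PCA
    import numpy as np

    rng = np.random.default_rng(4)
    n_days, n_assets, n_factors = 500, 50, 5
    returns = rng.normal(0.0, 0.015, size=(n_days, n_assets))   # hypothetical daily returns

    X = returns - returns.mean(axis=0)               # demean
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    idx = np.argsort(eigvals)[::-1][:n_factors]       # keep the largest components
    B = eigvecs[:, idx]                               # statistical factor loadings
    factor_rets = X @ B                               # implied factor return series

    F = np.cov(factor_rets, rowvar=False)             # factor covariance
    resid = X - factor_rets @ B.T
    D = np.var(resid, axis=0)                         # specific variances
    cov_model = B @ F @ B.T + np.diag(D)              # reconstructed asset risk model

    print(f"Share of variance explained by the factors: {eigvals[idx].sum() / eigvals.sum():.1%}")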

Laurence Wormald, Head of Research, SunGard APT


There are a couple of things that I think we continue to work very hard on and that our clients see as having very special value. The flexibility of attribution in our risk model is, especially in the equities world, a really unique point. You can choose from the widest possible range of explanatory factors, including factors from any other time-series FactSet database that you have available when you use the equity model on the FactSet platform. This really allows people to explore the attribution and the bets that theyre taking without being bound to a prespecified set of factors. And our components approach, we believe, is useful in that it enables this flexibility. But more than anything else, transparency and flexibility are the key things that clients need. Were also focusing on providing scenario analysis tools, which are in my view one of the most important new things that clients can expect to get from a risk management system, and we are doing that across multiple asset classes. I have said before that we support equities, bonds, commodities, credit, derivatives, funds, and hedge funds. Putting all of those together gives us probably the widest coverage across asset classes, together with the Monte Carlo to deal with derivatives exposures. Those things, I think, make the system as capable and as flexible as a risk manager needs it to be these days.
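A minimal sketch of the kind of flexible, user-chosen attribution described above: regress a portfolio return series on any set of factor time series to estimate exposures and the share of risk those factors explain. The factor names and data below are hypothetical placeholders, not a prescribed APT factor set.

    # Sketch: attribution against user-chosen factor time series via regression
    import numpy as np

    rng = np.random.default_rng(5)
    n_days = 750
    factor_names = ["oil", "rates", "credit_spread"]        # any series the user chooses
    factors = rng.normal(0.0, 0.01, size=(n_days, len(factor_names)))
    portfolio = factors @ np.array([0.6, -0.3, 0.2]) + rng.normal(0.0, 0.005, n_days)

    X = np.column_stack([np.ones(n_days), factors])          # add an intercept
    coefs, *_ = np.linalg.lstsq(X, portfolio, rcond=None)
    fitted = X @ coefs
    r_squared = 1 - np.var(portfolio - fitted) / np.var(portfolio)

    for name, beta in zip(factor_names, coefs[1:]):
        print(f"exposure to {name:<14}: {beta:+.3f}")
    print(f"share of variance explained by chosen factors: {r_squared:.1%}")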

Dan diBartolomeo, President, Northfield Information Services


Well, I would say two things. One is the fact that there is this explicit process that allows us to have both a 10-day horizon and a one-year horizon from exactly the same model, which answers the question of how things are different now and translates that answer into the set of risk measures. The other thing that I think is very different about Northfield is not so much about the models, but the nature of the service we provide. Unlike some of our competitors, the median tenure of our staff is well over 12 years. In the first 10 years of this century, our staff turnover was actually negative: so few people left that the number of people who had previously left and then rejoined was actually greater. So when our clients are dealing with our staff, theyre dealing with people who have a great deal of experience in this field. Theyre not people that have just gotten out of school six months ago. Theyre not people who were teaching physics or operations research a few months ago and really dont have any experience in finance. Our staff has been around for a long time and, as a result, I think they are able to prepare our clients, in a way, for difficult times, for volatility, such that nobody panics, nobody has extreme concerns about the validity of the models, because they have confidence in the people behind them.
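As a generic illustration of producing a short-horizon and a long-horizon risk estimate from the same return history, the sketch below simply varies the effective weighting of observations. It shows the general idea only and is not the actual process Northfield uses.

    # Sketch: short- and long-horizon volatility estimates from one return series
    import numpy as np

    def ewma_vol(returns, half_life):
        lam = 0.5 ** (1.0 / half_life)
        weights = lam ** np.arange(len(returns))[::-1]   # most recent day gets the largest weight
        weights /= weights.sum()
        return np.sqrt(np.sum(weights * returns**2))

    rng = np.random.default_rng(6)
    returns = rng.normal(0.0, 0.012, 1500)
    returns[-60:] *= 2.5                                 # hypothetical recent spike in volatility

    short_vol = ewma_vol(returns, half_life=10) * np.sqrt(252)   # reacts to recent data
    long_vol = ewma_vol(returns, half_life=250) * np.sqrt(252)   # closer to the long-run level
    print(f"Short-horizon annualized vol estimate: {short_vol:.1%}")
    print(f"Long-horizon annualized vol estimate:  {long_vol:.1%}")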

Oleg Ruban, Senior Associate of Applied Research, MSCI Barra


So, if we could sum it up in a single underlying phrase, it would be that we offer multiple views of risk. Risk is not a single number. There are many things that underlie this. First of all, its the whole range of different models and analytics that suit different types of investment strategies and are complementary to each other. I can mention things like the long- and short-term models, the single-country, regional, and global models. We have specialty models, and in some cases we provide small-cap models, for example. So this range of tools is very important for being able to analyze risk in each particular circumstance.
The second key point is data. Its something that weve already mentioned: data plays a huge role in accurately forecasting risk and in giving good measures and good insights when analyzing the portfolio. Without this, you really cant do anything.

The third key point is our research. We have multiple different types of publications and regular research bulletins. For example, my colleague Oleg here is one of the authors of many such bulletins, which offer insights into different and recent types of market events and help analyze and understand the market.

And finally, there is the framework in which we bring all of this together. Not only can we analyze individual markets, sectors, and different investment types, but we can bring all of this information together. An example is our integrated model, which came up in the discussion on bringing equity and fixed income together. It is hugely important that we can analyze this. But also, when we look at these multiple views of risk, it is important that we can analyze them within a very consistent framework. So when we mention things like volatility, we decompose it along a series of factors, the factors that investors care about and understand. And we do this with other measures as well. When we analyze tail risk, we dont just give you a conditional [inaudible] or expected shortfall number; what we do is actually decompose and attribute that number to the underlying sources of risk and return in the portfolio. And all of these are very consistent. So in the same way that we can analyze and decompose volatility, we can do the same thing for tail measures, and we can do the same thing for stress testing and scenario analysis.
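A minimal sketch of attributing a tail measure to underlying factors in the same additive way that volatility is decomposed: average the contribution of each factor over the scenarios that fall in the loss tail. The data and exposures are hypothetical, and this is not the MSCI Barra implementation.

    # Sketch: decomposing expected shortfall into factor contributions
    import numpy as np

    rng = np.random.default_rng(7)
    n_days, n_factors = 4000, 3
    factor_rets = 0.006 * rng.standard_t(df=5, size=(n_days, n_factors))
    exposures = np.array([0.8, -0.4, 0.3])               # hypothetical factor exposures

    contrib_by_day = factor_rets * exposures              # each factor's daily contribution
    portfolio_ret = contrib_by_day.sum(axis=1)

    alpha = 0.99
    var = -np.quantile(portfolio_ret, 1 - alpha)
    tail = portfolio_ret <= -var                          # scenarios in the loss tail
    es_total = -portfolio_ret[tail].mean()
    es_by_factor = -contrib_by_day[tail].mean(axis=0)     # additive factor attribution

    print(f"99% expected shortfall: {es_total:.4f}")
    for i, es_f in enumerate(es_by_factor):
        print(f"  factor {i + 1} contribution: {es_f:+.4f}")
    print(f"  (contributions sum to {es_by_factor.sum():.4f})")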

Jason MacQueen, Founder, R-Squared


Were a lot more fun than all the other guys. If you want a serious answer: well, the particular model that we have up on FactSet, as you probably know, is a short-term model. It is built with the express purpose of being able to forecast risk over a very short horizon, anywhere between tomorrow and two months out, though one month would be a good, convenient horizon. We know from our own research and tests, and from our users reporting their experiences, that it does seem to do a very good job of forecasting short-term risk. You would not want to use it for thinking about long-term risk. If youre interested in the short-term risk exposures of your fund, because youre about to do a rebalancing, or if youre somebody who has a pretty short horizon anyway, then it will do a pretty good job of telling you what the risk of your portfolio is and, as I said, more importantly, what the risk structure of the portfolio consists of in terms of the bets that youre making on the various factors. But we are more fun than the other guys.

Conclusions
Please provide us with feedback on our series, found originally on the FactSet Podcast. We continually try to feature a large variety of interesting and reflective pieces from our clients, partners, friends, and internal experts. If youd like to suggest a speaker or learn more, contact us at sales@factset.com.

