Internal Guide: Prof V K Vasal, DFS, Delhi University
External Guide: K P Sharda, ADM, LIC India
Submitted by:
Acknowledgement
I would like to express my deep and sincere gratitude to my supervisor, Professor V K Vasal for his detailed and constructive comments, and for his important support throughout this work. His wide knowledge, understanding, encouragement and personal guidance have provided a good basis for the present report.
I would also like to thank my external supervisor, Mr. K P Sharda, for his support and guidance throughout the work. His guidance has been a great support for this report.
Index
I. Introduction
II. Past Literature Review
III. Methodology
IV. Results
V. Summary
VI. Bibliography
Introduction
Central to investors and policy makers dealing with emerging equity markets is the knowledge of how efficiently those markets incorporate information into security prices. Specifically, what is the empirical validity of the random walk hypothesis (RWH) in these markets? We try to find out whether various stock indices are predictable and, if they are, to analyze the degree of predictability of each index. We have taken the Sensex and the BSE 500 index from the Indian market to test for predictability. The principal tools for testing the RWH in emerging markets are the Lo and MacKinlay (1988) variance ratio (VR) test and ARMA and GARCH models. In this study, the variance ratio test indicates whether an index is predictable at all, and the ARMA and GARCH models then tell us how well each index can be predicted. The aim of this report is to make a complementary contribution to this important issue of the predictability of stock indices in the Indian market.
Past Literature Review
Butler and Malaikah (1992) report evidence of inefficiency in the Saudi Arabian stock market, but not in the Kuwaiti market. El-Erian and Kumar (1995) find the Turkish and the Jordanian markets to be inefficient. Abraham, Seyyed, and Alsakran (2002) examine the random walk in three Gulf markets (Saudi Arabia, Kuwait, and Bahrain) and find that the stock markets of Saudi Arabia and Bahrain, but not Kuwait, are efficient. Using Wright's (2000) nonparametric VR tests, Bugak and Brorsen (2003) find evidence against the random walk in the Istanbul stock exchange. Among other emerging markets, Barnes (1986) reports that the Kuala Lumpur stock market is inefficient. Panas (1990) reports that market efficiency cannot be rejected for the Greek market, while Urrutia (1995) rejects the RWH for the markets of Argentina, Brazil, Chile, and Mexico. In contrast, Ojah and Karemera (1999) find that the RWH holds in Argentina, Brazil, Chile, and Mexico. Grieb and Reyes (1999) reexamine the random walk properties of stock prices in Brazil and Mexico using the VR test and conclude that index returns exhibit mean reversion in Mexico and a tendency toward a random walk in Brazil. Alam, Hasan, and Kadapakkam (1999) examine five Asian markets (Bangladesh, Hong Kong, Malaysia, Sri Lanka, and Taiwan) and conclude that all the index returns follow a random walk with the exception of Sri Lanka. Darrat and Zhong (2000) and Poshakwale (2002) reject the RWH for the Chinese and Indian stock markets, respectively. Hoque, Kim, and Pyun (2007) test the RWH for eight emerging markets in Asia using Wright's (2000) rank and sign VR tests and find that stock prices of most Asian developing countries do not follow a random walk, with the possible exceptions of Taiwan and Korea.
Methodology
In the study of the RWH in emerging markets, VR tests have been by far the most widely used econometric tools since the pioneering work of Lo and MacKinlay (1988). A potential limitation of the Lo and MacKinlay (1988) VR tests is that they are asymptotic tests, so their sampling distributions in finite samples are approximated by their limiting distributions. An assumption underlying the VR tests is that stock returns are at least identically, if not normally, distributed and that the variance of the random walk increments in a finite sample is linear in the sampling interval. If this hypothesis is rejected, there is a high probability that the time series is nonlinear or has chaotic characteristics. Index levels can be recovered from index returns, so the basis of our report is index returns. As seen in past studies (Bugak and Brorsen, 2003; O. M. Al-Khazali, 2007; R. K. Mishra, 2011), the principal tools for testing the RWH in stock indices are the VR test and ARMA, GARCH, and EGARCH models, all of which can easily be performed in EViews, so these are the tests we will mainly perform. We first take daily closing data for each index to be checked (Sensex and BSE 500). We then normalize the data by subtracting the natural log of the previous day's closing level from the natural log of the current closing level, so that variation between observations is reduced. Next we check the predictability of the index with the variance ratio test: if the test's null hypothesis is rejected, the index is predictable. If the index turns out to be predictable, we fit an ARMA model to gauge the amount of predictability. If the ARMA results are not satisfactory, we run the BDS test, and its results decide whether we go on to ARCH models: if the BDS null is rejected, we can use GARCH/EGARCH models to predict the index.
The testing procedure follows this decision flow:
1. Null hypothesis: the index follows a random walk and prediction is not possible.
2. A unit root test is run; if its null hypothesis is accepted, the data are transformed to make them stationary.
3. A correlogram is made and, from its results, an ARMA model is specified. If the AIC value is small but the variables are insignificant, the index cannot be predicted by this model and we have to move to higher models.
4. The BDS test is performed. If its z-statistics are significant, ARCH-family models such as GARCH are fitted (again judged by a small AIC value and the significance of the variables); if the z-statistics are insignificant, we stop.
Data
We have collected data on various indices in the Indian market from the BSE site and the Prowess database. We worked on daily return data from 1st Jan 1993 to 31st Dec 2011 for the BSE Sensex and from 1st Feb 1999 to 31st Dec 2011 for the BSE 500 index. We then ran various random walk tests on the collected data to find out whether the data are a martingale. A martingale means the data purely follow a random walk and old observations carry no memory for future ones; if the data are a martingale, future values are difficult to predict and forecast. The return series of both indices exhibit significant levels of skewness and kurtosis. The skewness of the return series is negative for the BSE 500 and positive for the Sensex; negative skewness implies that the index returns are flatter to the left compared with the normal distribution, and positive skewness the reverse. The reported kurtosis indicates that the index return distributions have sharper peaks than a normal distribution. Jarque-Bera statistics confirm the significant non-normality of the returns.
Process
We first compute the log-normal return of the daily index data:
Log Normalized Return = Ln(Pt) - Ln(Pt-1)
where Pt is the closing level of the index on day t and Pt-1 is the closing level on day t-1.
This is done so that the data values do not differ by large absolute amounts: an index like the Sensex has closing values ranging from about 1,000 to 21,000, so to get a good econometric model we normalized the data. We then performed the variance ratio test on the normalized data to find out whether the data are a martingale, that is, whether old data carry any memory for current data. The result of the variance ratio test thus tells us about the predictability of the index data.
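The normalization step above can be sketched in a few lines of Python with numpy; the closing levels here are made-up illustrative values, not actual Sensex data.

```python
import numpy as np

# Illustrative daily closing levels (hypothetical, not real Sensex data)
close = np.array([1000.0, 1010.0, 1005.0, 1020.0, 1018.0])

# Log-normalized return: r_t = ln(P_t) - ln(P_{t-1})
log_returns = np.diff(np.log(close))
```

Working with log differences keeps the series in a narrow band around zero regardless of whether the index trades near 1,000 or near 21,000, and the returns telescope back to the overall log change in the level.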
Now we can fit various models, such as ARMA and GARCH, to find which one fits the index data most closely and use it for forecasting.
Sensex
[Histogram of Sensex residuals]
Series: RESID; Sample 1/01/1993 12/30/2011; Observations 4955
Mean -0.000172; Median 0.000225; Maximum 0.156536; Minimum -0.111974; Std. Dev. 0.015914; Skewness 0.000316; Kurtosis 8.327294; Jarque-Bera 5859.301; Probability 0.000000
The residuals of the Sensex data are positively skewed and their kurtosis is 8.33; the large Jarque-Bera value shows that the residuals are non-normal, so we can proceed to further tests.
BSE 500
[Histogram of BSE 500 residuals]
Series: RESID; Sample 2/01/1999 12/30/2011; Observations 3369
Mean -0.000573; Median 0.000430; Maximum 0.143053; Minimum -0.115562; Std. Dev. 0.016862; Skewness -0.343433; Kurtosis 8.556076; Jarque-Bera 4399.600; Probability 0.000000
The residuals of the BSE 500 data are negatively skewed and their kurtosis is 8.56; again, the large Jarque-Bera value shows that the residuals are non-normal, so we can proceed to further tests.
The question of whether asset prices are predictable has long been the subject of considerable interest. One popular approach to answering this question, the Lo and MacKinlay (1988, 1989) overlapping variance ratio test, examines the predictability of time series data by comparing variances of differences of the data (returns) calculated over different intervals. If the data follow a random walk, the variance of a q-period difference should be q times the variance of the one-period difference. Evaluating the empirical evidence for or against this restriction is the basis of the variance ratio test. We performed the variance ratio test in EViews for the Sensex and BSE 500 indices with the hypotheses:
Null Hypothesis: Index is a martingale
Alternate Hypothesis: Index is not a martingale
Significance Level: 0.05
The Output combo determines whether the test output appears in table or graph form. The Data specification section describes the properties of the data in the series, and the Test specification section describes the method used to compute the test. The Compute using combo, which defaults to Original data, instructs EViews to use the original Lo and MacKinlay test statistic based on the innovations obtained from the original data.
*Probability approximation using studentized maximum modulus with parameter value 4 and infinite degrees of freedom

Test Details (Mean = -5.6916575218e-07)
Period   Variance   Var. Ratio   Obs.
1        0.00046    --           4955
2        0.00026    0.56879      4954
4        0.00013    0.27335      4952
8        6.3E-05    0.13740      4948
16       3.2E-05    0.07019      4940
Clearly the probability value in the joint test comes out to be 0.0000, so the null hypothesis is rejected at both the 5% and 1% levels of significance and the data are not a martingale. The data therefore retain memory of past values, so we can model the index and forecast future values.
*Probability approximation using studentized maximum modulus with parameter value 4 and infinite degrees of freedom

Test Details (Mean = -1.21810132421e-05)
Period   Variance   Var. Ratio   Obs.
1        0.00050    --           3369
2        0.00028    0.55984      3368
4        0.00014    0.27826      3366
8        7.1E-05    0.14224      3362
16       3.7E-05    0.07461      3354
Clearly the p-value comes out to be 0.000, so the null hypothesis is rejected at both the 5% and 1% levels of significance and the data are not a martingale. The data therefore retain memory of past values, so we can model the index and forecast future values.
INFERENCE: The variance ratio test shows that both indices (Sensex and BSE 500) strongly reject the null hypothesis of being a martingale, so both indices are predictable and can be modeled.
Unit Root Test
Null Hypothesis: Index has a unit root
Alternate Hypothesis: Index does not have a unit root
Significance Level: 0.05
Augmented Dickey-Fuller Test Equation
Dependent Variable: D(SENSEX)
Method: Least Squares
Date: 01/08/12 Time: 16:46
Sample (adjusted): 1/04/1993 12/30/2011
Included observations: 4955 after adjustments

Variable     Coefficient   Std. Error   t-Statistic   Prob.
SENSEX(-1)   -0.894802     0.014130     -63.32467     0.0000
C             0.000333     0.000226       1.473141    0.1408

R-squared 0.447396; Adjusted R-squared 0.447284; S.E. of regression 0.015914; Sum squared resid 1.254439; Log likelihood 13486.49; F-statistic 4010.014; Prob(F-statistic) 0.000000; Mean dependent var -5.69E-07; S.D. dependent var 0.021406; Akaike info criterion -5.442779; Schwarz criterion -5.440152; Hannan-Quinn criter. -5.441858; Durbin-Watson stat 1.993829
So the null hypothesis is rejected in the unit root test. This shows that the Sensex data are stationary and we can go on to ARMA estimation.
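The heart of the ADF output above is an OLS regression of the differenced series on its lagged level. A minimal sketch (no augmentation lags, constant included) in Python:

```python
import numpy as np

def df_tstat(y):
    """Dickey-Fuller regression dy_t = c + rho*y_{t-1} + e_t.
    Returns the t-statistic on rho, compared in practice with the
    DF critical value (about -2.86 at the 5% level with a constant)."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)          # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)           # OLS covariance matrix
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(1)
returns_like = rng.standard_normal(2000)   # stationary series: large negative t-stat
rw = np.cumsum(returns_like)               # random walk: t-stat near zero
t_stationary = df_tstat(returns_like)
t_unit_root = df_tstat(rw)
```

The Sensex t-statistic of -63.3 in the table above is far below the critical value, which is why the null of a unit root is rejected for the return series.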
Augmented Dickey-Fuller Test Equation
Dependent Variable: D(BSE_500)
Method: Least Squares
Date: 01/08/12 Time: 16:48
Sample (adjusted): 2/02/1999 12/30/2011
Included observations: 3369 after adjustments

Variable      Coefficient   Std. Error   t-Statistic   Prob.
BSE_500(-1)   -0.861478     0.017055     -50.51090     0.0000
C              0.000406     0.000291       1.397246    0.1624

R-squared 0.431092; Adjusted R-squared 0.430923; S.E. of regression 0.016856; Sum squared resid 0.956696; Log likelihood 8976.302; F-statistic 2551.351; Prob(F-statistic) 0.000000; Mean dependent var -1.22E-05; S.D. dependent var 0.022345; Akaike info criterion -5.327576; Schwarz criterion -5.323942; Hannan-Quinn criter. -5.326276; Durbin-Watson stat 2.003117
So the null hypothesis for the BSE 500 is rejected in the unit root test. This shows that the data are stationary and we can go on to ARMA estimation.
Correlogram of Sensex
The correlogram is mainly used to identify the AR and MA terms for ARMA estimation.
Here some values of the ACF and PACF are significant compared with the others; the major spikes occur at the first, sixth, ninth, and tenth lags, so we take AR(1), AR(6), AR(9), AR(10), MA(1), MA(6), MA(9), and MA(10) terms in the ARMA equation.
Correlogram of BSE 500
Here some values of the ACF and PACF are significant compared with the others; the major spikes occur at the first, fourth, sixth, ninth, tenth, and thirteenth lags, so we take AR(1), AR(4), AR(6), AR(9), AR(10), AR(13), MA(1), MA(4), MA(6), MA(9), MA(10), and MA(13) terms in the ARMA equation.
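The spikes read off the correlograms are sample autocorrelations, which can be computed directly; the MA(1) series below is simulated purely to show how an isolated spike at one lag appears.

```python
import numpy as np

def acf(x, nlags):
    """Sample autocorrelations: a spike outside roughly +/- 2/sqrt(n)
    suggests including an AR/MA term at that lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = x @ x
    return np.array([x[:-k] @ x[k:] / denom for k in range(1, nlags + 1)])

rng = np.random.default_rng(2)
e = rng.standard_normal(3001)
ma1 = e[1:] + 0.6 * e[:-1]    # simulated MA(1): ACF spikes only at lag 1
rho = acf(ma1, 5)
```

For this MA(1) process the theoretical lag-1 autocorrelation is 0.6 / (1 + 0.36) ≈ 0.44, and the higher lags hover inside the significance band, mirroring how isolated correlogram spikes motivate specific AR/MA terms.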
ARMA Model
ARMA Theory
ARMA (autoregressive moving average) models are generalizations of the simple AR model that model the serial correlation in the disturbance with two building blocks.

The first is the autoregressive, or AR, term. The AR(1) model uses only the first-order term, but in general higher-order AR terms may be added. Each AR term corresponds to the use of a lagged value of the residual in the forecasting equation for the unconditional residual. An autoregressive model of order p, AR(p), has the form
u(t) = r(1)u(t-1) + r(2)u(t-2) + ... + r(p)u(t-p) + e(t)

The second is the MA, or moving average, term. A moving average forecasting model uses lagged values of the forecast error to improve the current forecast. A first-order moving average term uses the most recent forecast error, a second-order term uses the forecast errors from the two most recent periods, and so on. An MA(q) model has the form
u(t) = e(t) + v(1)e(t-1) + v(2)e(t-2) + ... + v(q)e(t-q)

The autoregressive and moving average specifications can be combined to form an ARMA(p, q) specification:
u(t) = r(1)u(t-1) + r(2)u(t-2) + ... + r(p)u(t-p) + e(t) + v(1)e(t-1) + v(2)e(t-2) + ... + v(q)e(t-q)

In ARMA forecasting, we assemble a complete forecasting model from combinations of these building blocks. We have used single-step forecasting in the ARMA models so that the model remains simple and can be better understood.
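As a sketch of the AR building block above, the AR(1) coefficient r(1) can be recovered by regressing the series on its own lag; the full ARMA(p, q) models reported below require the iterative least-squares estimation that EViews performs, which this toy example does not attempt.

```python
import numpy as np

# Simulate u_t = r1*u_{t-1} + e_t and recover r1 by OLS of u_t on u_{t-1}.
rng = np.random.default_rng(3)
true_r1 = 0.5
e = rng.standard_normal(5000)
u = np.zeros(5000)
for t in range(1, 5000):
    u[t] = true_r1 * u[t - 1] + e[t]

x, y = u[:-1], u[1:]
r1_hat = (x @ y) / (x @ x)   # OLS slope (no intercept; the mean is zero by construction)
```

With 5,000 observations the estimate lands close to the true coefficient of 0.5; MA terms cannot be estimated by a single regression like this because the lagged errors are unobserved, which is why EViews iterates with backcasting.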
Dependent Variable: SENSEX
Method: Least Squares
Date: 01/08/12 Time: 16:14
Sample (adjusted): 1/15/1993 12/30/2011
Included observations: 4946 after adjustments
Convergence achieved after 35 iterations
MA Backcast: 1/01/1993 1/14/1993

Variable   Coefficient   Std. Error   t-Statistic   Prob.
C           0.000381     0.000246      1.549163     0.1214
AR(1)      -0.033486     0.086754     -0.385991     0.6995
AR(6)      -0.218765     0.082848     -2.640571     0.0083
AR(9)       0.009212     0.116309      0.079202     0.9369
AR(10)     -0.307348     0.087197     -3.524747     0.0004
MA(1)       0.135130     0.085907      1.572968     0.1158
MA(6)       0.169216     0.081561      2.074724     0.0381
MA(9)       0.030033     0.115302      0.260474     0.7945
MA(10)      0.355058     0.087362      4.064228     0.0000

R-squared 0.019344; Adjusted R-squared 0.017755; S.E. of regression 0.015862; Sum squared resid 1.242163; Log likelihood 13481.81; Mean dependent var 0.000382; S.D. dependent var 0.016005; Akaike info criterion -5.447964; Schwarz criterion -5.436125; Hannan-Quinn criter. -5.443812; Durbin-Watson stat 1.991289

Inverted MA Roots
The Akaike info criterion comes to -5.44, which is good since a lower AIC value means a better model fit; however, several variables in this ARMA model are insignificant, so there may be some nonlinear relationship present in the data.
Forecast: SENSEXF; Actual: SENSEX
Forecast sample: 1/01/1993 12/30/2011 (adjusted: 1/15/1993 12/30/2011); Included observations: 4946
Root Mean Squared Error 0.015848; Mean Absolute Error 0.011417; Mean Abs. Percent Error 155.6666; Theil Inequality Coefficient 0.867555 (Bias Proportion 0.000000, Variance Proportion 0.755708, Covariance Proportion 0.244292)
[Forecast plot of SENSEXF with +/- 2 S.E. bands]
We can infer from the forecast output that the root mean squared error is 1.58%. RMSE measures the typical size of the forecast error, not a maximum: on average, the forecasts deviate from the actual returns by about 1.58 percentage points.
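The forecast-evaluation statistics reported by EViews (RMSE, MAE) can be reproduced directly; the numbers below are made-up returns, used purely to show the computation.

```python
import numpy as np

def forecast_errors(actual, forecast):
    """RMSE and MAE of a forecast. RMSE summarizes the typical size of
    the error; it is not an upper bound on any individual error."""
    err = np.asarray(actual, dtype=float) - np.asarray(forecast, dtype=float)
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    return rmse, mae

# Hypothetical daily returns and one-step forecasts
rmse, mae = forecast_errors([0.010, -0.020, 0.015], [0.012, -0.018, 0.011])
```

Because RMSE squares the errors before averaging, it penalizes occasional large misses more than MAE does, which is why the two statistics in the EViews table differ.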
Dependent Variable: BSE_500
Method: Least Squares
Date: 01/08/12 Time: 16:08
Sample (adjusted): 2/18/1999 12/30/2011
Included observations: 3357 after adjustments
Convergence achieved after 17 iterations
MA Backcast: 2/01/1999 2/17/1999

Variable   Coefficient   Std. Error   t-Statistic   Prob.
C           0.000467     0.000399      1.171028     0.2417
AR(1)       0.210507     0.102465      2.054421     0.0400
AR(6)       0.055305     0.086885      0.636533     0.5245
AR(9)       0.156133     0.145528      1.072871     0.2834
AR(10)     -0.118975     0.103382     -1.150823     0.2499
AR(13)      0.138227     0.085016      1.625891     0.1041
MA(1)      -0.070322     0.104051     -0.675844     0.4992
MA(6)      -0.100956     0.088437     -1.141558     0.2537
MA(9)      -0.117499     0.145681     -0.806550     0.4200
MA(10)      0.161917     0.095586      1.693930     0.0904
MA(13)     -0.103646     0.086914     -1.192522     0.2331

R-squared 0.031263; Adjusted R-squared 0.028367; S.E. of regression 0.016786; Sum squared resid 0.942772; Log likelihood 8962.950; F-statistic 10.79807; Prob(F-statistic) 0.000000; Mean dependent var 0.000474; S.D. dependent var 0.017029; Akaike info criterion -5.333304; Schwarz criterion -5.313254; Hannan-Quinn criter. -5.326133; Durbin-Watson stat 2.010582

Inverted AR/MA roots (as reported): .89, .84, .54-.70i, .53+.72i, -.35-.77i, -.39-.73i, -.86+.25i, -.86-.25i
The Akaike info criterion comes to -5.33, which is good since a lower AIC value means a better model fit; however, most of the variables in this ARMA model are insignificant, so there may be some nonlinear relationship present in the data.
Forecast: BSE_500F; Actual: BSE_500
Forecast sample: 2/01/1999 12/30/2011 (adjusted: 2/18/1999 12/30/2011); Included observations: 3357
Root Mean Squared Error 0.016758; Mean Absolute Error 0.011778; Mean Abs. Percent Error 189.0695; Theil Inequality Coefficient 0.834825 (Bias Proportion 0.000000, Variance Proportion 0.700192, Covariance Proportion 0.299808)
[Forecast plot of BSE_500F with +/- 2 S.E. bands]
The root mean squared error is 1.68%, meaning the forecasts deviate from the actual BSE 500 returns by about 1.68 percentage points on average; it is not a cap on the worst-case error.
BDS Test
This test is applied to the series of estimated residuals to check whether they are independent and identically distributed (i.i.d.). The residuals from the ARMA model are tested to see whether any nonlinear dependence remains in the series after the linear ARMA model is fitted; the null hypothesis of the BDS test is that the series is i.i.d. If the test finds nonlinear dependence, we can go on to ARCH-family models, which are the most frequently used models in financial markets and have been used in other papers (O. M. Al-Khazali et al., The Financial Review 42 (2007) 303-317; R. K. Mishra et al., Review of Financial Economics 20 (2011) 96-104) to predict indices in financial markets.
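The BDS statistic is built from correlation integrals C(m, eps): the fraction of pairs of m-length histories of the series lying within eps of each other. Under the i.i.d. null, C(m, eps) is approximately C(1, eps)^m. A small brute-force sketch follows (not the full test, which also needs the asymptotic variance to form the z-statistic):

```python
import numpy as np

def correlation_integral(x, m, eps):
    """C(m, eps): fraction of pairs of m-histories within eps (max-norm)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - m + 1
    hist = np.column_stack([x[i:i + n] for i in range(m)])   # rows are m-histories
    count = 0
    for i in range(n - 1):
        # max-norm distance between history i and all later histories
        count += int(np.sum(np.max(np.abs(hist[i + 1:] - hist[i]), axis=1) <= eps))
    return 2.0 * count / (n * (n - 1))

rng = np.random.default_rng(4)
z = rng.standard_normal(500)          # i.i.d. data: C(2) should be close to C(1)^2
c1 = correlation_integral(z, 1, 0.5)
c2 = correlation_integral(z, 2, 0.5)
```

For dependent (including nonlinearly dependent) data, C(m, eps) drifts away from C(1, eps)^m, and the BDS z-statistic scales that gap; that is the significance being reported in the EViews output below.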
BDS Test for RESID
Date: 01/09/12 Time: 12:27
Sample: 1/01/1993 12/30/2011
Included observations: 4956

Dimension           2          3          4          5          6
C(m,n)        6349988.   4810302.   3727274.   2940904.   2357819.
c(1,n-(m-1))  0.703186   0.703167   0.703131   0.703090   0.703102
c(1,n-(m-1))^k  0.494471   0.347677   0.244425   0.171813   0.120812

[Remaining columns of the BDS output (raw epsilon, pair counts, z-statistics) were garbled in extraction.]
Here the z-statistics are significant, so there is some nonlinear dependence present in the series. Therefore we will go to the ARCH family of models to predict the index, as they are the most frequently used models in financial forecasting.
BDS Test for RESID (BSE 500)

Dimension     2          3
C(m,n)   2972662.   2298977.

[The rest of the BDS output for the BSE 500 residuals was garbled in extraction.]
Here too the z-statistics are significant, so there is some nonlinear dependence present in the series. Therefore we will go to the ARCH family of models to predict the index, as they are the most frequently used models in financial forecasting.
GARCH Specifications
In developing a GARCH model, we have to provide three distinct specifications: one for the conditional mean equation, one for the conditional variance, and one for the conditional error distribution.
Higher order GARCH models, denoted GARCH (q, p), can be estimated by choosing either q or p greater than 1 where q is the order of the autoregressive GARCH terms and p is the order of the moving average ARCH terms.
We are using the GARCH(1, 1) model here because it is equivalent to an ARCH model with infinitely many lags; rather than using an ARCH model with many lags, we use the GARCH(1, 1) model.
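The GARCH(1,1) recursion underlying the estimates below is short; the parameter values used here are illustrative choices in the same range as the fitted coefficients, not the EViews output itself.

```python
import numpy as np

def garch11_variance(resid, omega, alpha, beta, sigma2_0):
    """Conditional variance of GARCH(1,1):
    sigma2_t = omega + alpha * e_{t-1}^2 + beta * sigma2_{t-1}."""
    sigma2 = np.empty(len(resid))
    sigma2[0] = sigma2_0
    for t in range(1, len(resid)):
        sigma2[t] = omega + alpha * resid[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Hypothetical residuals; omega/alpha/beta chosen near the reported estimates
e = np.array([0.01, -0.02, 0.015])
s2 = garch11_variance(e, omega=7e-6, alpha=0.13, beta=0.85, sigma2_0=2.5e-4)
```

Because alpha + beta < 1, the recursion is stationary, and substituting it into itself repeatedly shows that GARCH(1,1) equals an ARCH model with geometrically decaying weights on all past squared residuals, which is exactly why it replaces a long-lag ARCH specification here.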
Dependent Variable: SENSEX Method: ML - ARCH (Marquardt) - Normal distribution Date: 01/08/12 Time: 19:29 Sample (adjusted): 1/04/1993 12/30/2011 Included observations: 4955 after adjustments Convergence achieved after 15 iterations Presample variance: backcast (parameter = 0.7) GARCH = C(3) + C(4)*RESID(-1)^2 + C(5)*GARCH(-1)
Variable: C, SENSEX(-1) (mean-equation coefficient rows did not survive extraction)

Variance Equation
Variable       Coefficient   Std. Error   z-Statistic   Prob.
C              6.88E-06      7.24E-07     9.499339      0.0000
RESID(-1)^2    0.125949      0.006637     18.97682      0.0000
GARCH(-1)      0.852471      0.006858     124.3066      0.0000

R-squared 0.009715; Adjusted R-squared 0.009515; S.E. of regression 0.015925; Sum squared resid 1.256153; Log likelihood 14031.40; Durbin-Watson stat 2.037897; Mean dependent var 0.000372; S.D. dependent var 0.016002; Akaike info criterion -5.661515; Schwarz criterion -5.654948; Hannan-Quinn criter. -5.659213
Here the Akaike info criterion value is -5.66, which is low, and all variable values are also significant, so this is a good model to predict the Sensex.
Forecast: SENSEXF; Actual: SENSEX
Forecast sample: 1/01/1993 12/30/2011 (adjusted: 1/04/1993 12/30/2011); Included observations: 4955
Root Mean Squared Error 0.015922; Mean Absolute Error 0.011422; Mean Abs. Percent Error 146.9406; Theil Inequality Coefficient 0.873712 (Bias Proportion 0.000802, Variance Proportion 0.766432, Covariance Proportion 0.232766)
[Forecast plot of SENSEXF with +/- 2 S.E. bands]
[Plot: Forecast of Variance]
Inference: This is the forecast of the Sensex by the GARCH method. The root mean squared error comes out to around 1.59%, meaning the Sensex returns can be forecast with a typical error of about 1.59 percentage points.
Dependent Variable: BSE_500
Method: ML - ARCH (Marquardt) - Normal distribution
Date: 01/08/12 Time: 19:31
Sample (adjusted): 2/02/1999 12/30/2011
Included observations: 3369 after adjustments
Convergence achieved after 16 iterations
Presample variance: backcast (parameter = 0.7)
GARCH = C(3) + C(4)*RESID(-1)^2 + C(5)*GARCH(-1)

Variable      Coefficient   Std. Error   z-Statistic   Prob.
C             0.001172      0.000213     5.508114      0.0000
BSE_500(-1)   0.129917      0.017668     7.353387      0.0000

Variance Equation
C             7.37E-06      8.50E-07     8.678975      0.0000
RESID(-1)^2   0.160595      0.010647     15.08312      0.0000
GARCH(-1)     0.821913      0.010347     79.43243      0.0000

R-squared 0.017134; Adjusted R-squared 0.016842; S.E. of regression 0.016874; Sum squared resid 0.958727; Log likelihood 9507.892; Durbin-Watson stat 1.981550; Mean dependent var 0.000473; S.D. dependent var 0.017018; Akaike info criterion -5.641372; Schwarz criterion -5.632286; Hannan-Quinn criter. -5.638123
Here the Akaike info criterion value is -5.64, which is low, and all variable values are also significant, so this is a good model to predict the BSE 500.
Forecast: BSE_500F; Actual: BSE_500
Forecast sample: 2/01/1999 12/30/2011 (adjusted: 2/02/1999 12/30/2011); Included observations: 3369
Root Mean Squared Error 0.016869; Mean Absolute Error 0.011776; Mean Abs. Percent Error 182.1879; Theil Inequality Coefficient 0.862610 (Bias Proportion 0.002042, Variance Proportion 0.770079, Covariance Proportion 0.227879)
[Forecast plot of BSE_500F with +/- 2 S.E. bands]
[Plot: Forecast of Variance]
Inference: This is the GARCH forecast of the BSE 500. The root mean squared error comes out to around 1.68%, meaning the BSE 500 returns can be forecast with a typical error of about 1.68 percentage points.
Conclusion
This study has examined the time-series behavior of spot-price-based daily returns of equity indices in the Indian market using tests of independence and nonlinearity. In short, consistent with the findings of many previous studies, for example Abhyankar et al. (1995, 1997) among others, the results of this study reveal strong evidence of nonlinear dependence in the daily increments of all equity indices analyzed. The existing nonlinearity in the data appears to be multiplicative in nature, implying that nonlinearity is transmitted only through the variance of the process.
More precisely, the results of the variance ratio test suggest that the null hypothesis of a random walk is strongly rejected for both return series. It appears, therefore, that daily increments in stock returns are highly autocorrelated. The BDS test likewise indicates that some nonlinearity is present in the data, so a GARCH model will be used to predict the indices.
Clearly the GARCH model has a smaller Akaike info criterion value than the ARMA model. Also, the variable values are significant in the GARCH model, whereas not all variable values are significant in the ARMA model. Therefore we will use the GARCH model (and other higher ARCH-family models) to predict both the Sensex and BSE 500 indices.
Bibliography
1. Bugak and Brorsen (2003)
2. Belaire-Franch and Opong (2005b)
3. Hoque, Kim, and Pyun (2007)
4. O. M. Al-Khazali et al., The Financial Review 42 (2007) 303-317
5. R. K. Mishra et al., Review of Financial Economics 20 (2011) 96-104
6. BSE website
7. EViews Software User Guide I
8. EViews Software User Guide II
9. Basic Econometrics by Damodar N Gujarati
10. Wikipedia.com