
Received September 2008; revised September 2008; accepted January 2010

Application of multiple discriminant analysis (MDA) as a credit scoring and risk assessment model
Marcellina Mvula Chijoriga
Faculty of Commerce and Management, University of Dar es Salaam, Dar es Salaam, United Republic of Tanzania
Abstract
Purpose – The purpose of this research is to investigate whether the inclusion of risk assessment variables in the multiple discriminant analysis (MDA) model improves a bank's ability to make correct customer classifications, predict firms' performance and assess credit risk.

Design/methodology/approach – The paper reviews the literature on the application of financial distress and credit scoring methods, and on the use of risk assessment variables in classification models. The study used a sample of 56 performing and non-performing assets (NPA) of a privatized commercial bank in Tanzania. Financial ratios were used as independent variables for building the MDA model, with a variation of five MDA models. Different statistical tests for normality, equality of covariance, goodness of fit and multi-collinearity were performed. Using the estimation and validation samples, test results showed that the MDA base model had a higher level of predictability, classifying the performing and non-performing assets correctly at 92.9 and 96.4 percent, respectively. Lagging the classification two years, the results showed that the model could predict correctly two years in advance. When MDA was used as a risk assessment model, it showed improved correct customer classification and credit risk assessment.

Findings – The findings confirmed financial ratios as good classification and predictor variables of firms' performance. If the bank had used MDA for classifying and evaluating its customers, the probability of failure could have been known two years before actual failure, and the misclassification costs could have been calculated objectively. In this way, the bank could have reduced its non-performing loans and its credit risk exposure.

Research limitations/implications – The validation sample used in the study was smaller than the estimation sample. MDA works better as a credit scoring method in the banking environment two years before and after failure. The study was done before the current financial crisis of 2009.

Practical implications – Use of MDA helps banks to determine objectively the misclassification costs and expected misclassification errors, as well as the provisions for bad debts. Banks could have reduced their non-performing loans and credit risk exposure had they used the MDA method in the loan evaluation and classification process. The study has shown that quantitative credit scoring models improve management decision making compared with subjective assessment methods. For improved credit and risk assessment, a combination of both qualitative and quantitative methods should be considered.

Originality/value – The findings have shown that, using MDA, commercial banks could have improved their objective decision making by correctly classifying the credit worthiness of a customer, predicting firms' future performance and assessing their credit risk. The results also show that, in addition to financial variables, the inclusion of stability measures improves management decision making and objective provisioning for bad debts. The recent financial crisis emphasizes the need for developing objective credit scoring methods and instituting a prudent risk assessment culture to limit the extent and potential of failure.

Keywords Financial analysis, Credit rating, Risk assessment
Paper type Research paper

International Journal of Emerging Markets, Vol. 6 No. 2, 2011, pp. 132-147. © Emerald Group Publishing Limited, 1746-8809. DOI 10.1108/17468801111119498

Introduction

While it is acknowledged that macro variables such as deregulation, lack of information among bank customers, homogeneity of the banking business, and government and political interference are some of the causes of bank failures, internal (micro) factors such as reckless lending, corruption, fraud, stiff competition and management deficiencies have also contributed to bank failures (Chijoriga, 1997, 2000; Basel, 2004; Liou and Smith, 2006). Past and recent literature shows that the majority of failures are due to poor risk management and the non-use of prudential classification and risk assessment methods (Williams, 1995). Recent financial crises have also proved the importance of proper credit risk assessment and the need to correctly predict financial failure. Statistical methods such as discriminant and logit analysis have been used to predict business failure or success (Gepp and Kumar, 2008; Karbhari and Sori, 2008). While many past studies have used multiple discriminant analysis (MDA) for credit scoring, few have used MDA as a risk assessment model. One of the problems in building a risk assessment model has been deciding which variables to include in a credit scoring model (CSM).

The purpose of this research was to investigate whether the inclusion of risk assessment variables in the MDA model improved a bank's ability to make correct customer classifications and credit risk assessments, thereby improving management decisions. Specifically, the study intended to assess which variables are important in order to correctly classify, discriminate, predict and assess the credit risk of a customer or an applicant, and whether MDA as a credit scoring method can improve management decision making.

Credit scoring methods and application of MDA

A credit scoring method is an empirically derived and statistically sound valuation device for estimating the likelihood that a customer will not pay his obligations when due. Credit scoring methods use data on observed borrowers' characteristics either to calculate the probability of default or to sort borrowers into different default classes. Credit scoring methods are both quantitative and qualitative in nature. Quantitative methods use statistical or mathematical techniques to classify firms between groups, while qualitative ones are more judgmental and subjective. The major disadvantage of qualitative methods is that there is no objective basis for making a decision or judgment about a customer; thus, screening between good and bad customers is difficult. Quantitative methods are designed to remove the subjectivity and bias from judgmental systems and thereby provide an early, reliable warning system so as to avert outright losses. Theoretically, there are many quantitative credit scoring methods, including the linear probability model, logit/probit, MDA, the recursive partitioning algorithm and neural networks (Chijoriga, 1997; Yim and Mitchell, 2005; Liou and Smith, 2006; Purnanandam, 2007). Many of the statistical credit scoring methods are variations on a similar theme, involving a combination of quantifiable financial indicators of firms' performance and additional variables (Altman, 2002).

Despite some shortcomings, MDA is still considered the best credit classification method among parametric and non-parametric methods, especially in the bank lending environment. MDA performs better one to two years prior to default. According to Coats and Fant (1993), no other classification method has replaced MDA. Since the bank lending situation is a short-term one of no more than two years, the linear MDA model is, for classification purposes, the most appropriate classification model.

The MDA technique starts by identifying characteristics unique to each group. It derives a statistical function and scores to separate the groups by their distinguishing traits. The scores are then used to assign each observation to the appropriate category. The function is later used to validate a holdout/validation sample on its predictive and classification accuracy, and to determine which factors (ratios) give the function the most discriminatory power. Many ratios (averages for the study period) may enter the original model; depending on their multi-collinearity, they are refined and restructured to an acceptable tolerance level. MDA assumes that there are two discrete and known groups of failed/distressed and non-failed/non-distressed firms, and that each observation in each group has a set of characteristics which differs from the other group. MDA assumes that the variables arise from multivariate normal populations and that the variance/covariance matrix of the variables is the same for each group; however, the mean of the variables in each group is different. The cut-off point used to separate the groups is the point which minimizes the total misclassification errors of both types I and II. Accuracy of prediction is measured by the percentage of correct classifications, i.e. correct prediction of failure plus correct prediction of non-failure. A sketch of how such a model can be estimated is given below.
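As an illustration of the estimation step described above, the following is a minimal sketch, not the authors' original implementation, of fitting a linear discriminant classifier to financial-ratio data in Python with scikit-learn. The column names and file names are illustrative assumptions rather than the study's actual data layout.

```python
# Minimal sketch: fitting a linear discriminant (MDA-style) classifier on
# financial ratios for two known groups (performing = 1, non-performing = 0).
# Column names and the CSV files are illustrative assumptions, not the study's data.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix

ratios = ["BOCA", "BOQA", "CCL", "EBITTA", "LTDE", "QACL",
          "QATA", "RETA", "STA", "TAE", "WCS", "WCTA"]

estimation = pd.read_csv("estimation_sample.csv")   # mean ratios per firm (assumed)
validation = pd.read_csv("validation_sample.csv")

lda = LinearDiscriminantAnalysis()                  # linear (equal-covariance) form
lda.fit(estimation[ratios], estimation["performing"])

# Discriminant scores (Z-scores) and group assignments for the holdout sample
z_scores = lda.decision_function(validation[ratios])
predicted = lda.predict(validation[ratios])

# Rows = actual groups, columns = predicted groups; off-diagonal cells correspond
# to the type I and type II misclassification errors discussed in the text.
print(confusion_matrix(validation["performing"], predicted))
```

With identical group covariance matrices assumed, the linear form above corresponds to the linear MDA model favoured in the paper; a quadratic discriminant (unequal covariances) would be the alternative when a Box's M test rejects equality.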


Empirical evidence on the performance of the MDA model

Application of credit scoring methods to commercial banks shows that type I error misclassification costs range from 77 percent for regional banks to 83 percent for major banks. Performance-wise, Altman's (1968) model had an overall correct prediction of 94 percent. Correct prediction one year before failure was 95 percent, against 72 percent two years and 45 percent three years before failure. The linear MDA model outperforms other models, such as quadratic models, in terms of correct prediction, especially one to two years before failure, and therefore has fewer misclassification errors of both types I and II (Coats and Fant, 1993). Percentage correct classifications range from 95 to 70 percent from one year to five years before failure, respectively. Empirical evidence also shows that there are more correct classifications of non-failed firms than of failed ones (Marais et al., 1984). Previous studies also show that estimation samples are more correctly predicted than validation samples (Dietrich and Kaplan, 1982). In Tanzania, the only previous study was by Rwegasira (1978), who investigated credit decision making in a commercial bank using univariate and judgmental methods.

Risk assessment in financial institutions and banks

All financial institutions, including banks, insurance agencies and mutual funds, face a certain level of risk exposure (Dominic, 1993). Among commercial banks, credit and liquidity risk exposures are the main causes of bank failures (The Economist, 1993; Uyemura and Van Deventer, 1993; Greuning and Bratanovic, 2003; Bluhm et al., 2003; Kealhofer, 2003). Of the two risks, credit risk causes the more severe bank failures. When a customer fails, there is non-payment of both interest and principal, whereas when there is an interest rate risk, there is only the difference between the two interest rates, which is relatively small.
Limiting the severity of credit risk requires proper customer classification in terms of both repayment probabilities and risks.

Classifying bank customers and loan applicants into their proper risk classes allows proper pricing of both the loan and the loan conditions, as well as of the monitoring mechanisms required thereafter. This requires proper, objective credit scoring and risk assessment methods. Risk assessment becomes important in credit scoring because the whole process of credit scoring is one of trying to separate risky from non-risky customers. If an applicant has no ability to pay, he also has a high chance of defaulting and hence a high credit risk. In the process of classifying applicants, two types of misclassification errors, types I and II, are committed. The higher the errors, the higher the risk created. Thus, risk in credit scoring is the risk of misclassifying an applicant into a group to which it does not actually belong. It follows that an objective credit scoring method which reduces risk should be the one which minimizes the misclassification errors and hence the risk of misclassification. The more misclassification errors are made, the more resources are misallocated, and the less likely it is that the earning power of the firm's assets will be conserved.

Research methodology and approach

Based on the above argument, it was the intention of this study to demonstrate that a credit scoring model like MDA not only minimizes the misclassification errors, but also helps in assessing credit risk. The MDA model was used in this study both as a CSM and as a risk assessment model. The variables used in the MDA risk model were coefficients of variation, as proxy measures of risk. The study was based on National Bank of Commerce (NBC) performing assets (PA) (loans) and non-performing assets (NPA) (loans). Given that most of the non-performing bank loans were short- and medium-term loans, the study limited itself to short- and medium-term bank loans. The performance of MDA is more accurate one to two years before failure, which matches the contractual periods of short and medium loans.

The data used in the study were from corporate customers of the largest privatized Tanzanian commercial bank, the NBC, which held 80 percent of total bank deposits (by 1995). Owing to poor credit and risk management, most banks in Tanzania suffered large losses in terms of NPA. By the end of 1991, a total of 82 non-performing loans from various banks, including NBC, had been transferred to the Loans and Advances Realization Trust (LART), which was to be the caretaker and recipient of loan recoveries in case a loan was later repaid. In exchange, the banks received government bonds. Of the total NPA, about 70 percent were offered by the NBC. Primary data were collected from the bank staff, while most of the secondary data were collected from firms' past financial statements from 1985 to 1994. Among the questions asked of the bank staff, through personal interviews and questionnaires, was which financial ratios they considered important in credit assessment.

Sample selection and sample size

Originally, it was intended to review all loans issued by the bank, i.e. consumer and corporate loans. However, although the bank had many customers, not all were significant to the loan portfolio. By 31 January 1996, the bank had about 19,200 loan customers with a total amount of Tshs 178 billion, including overdrafts, staff loans and individual loans. Of the total loan portfolio, Tshs 104.4 billion (58.4 percent) belonged to the top 100 corporate borrowers, representing the majority of the bank's customers. The study therefore concentrated on the top 100 corporate borrowers who had audited financial statements. Included in the sample were large, medium and small firms.

For effective and improved model building, the literature recommends paired samples of failed and non-failed firms. Zmijewski (1984) recommended a 1:1 ratio of failed to non-failed firms but, due to technical problems including non-availability and incompleteness of data, the failed and non-failed firms were not matched. Financial statements were collected from 70 firms; however, only 56 firms had complete and usable data. The 56 customers were divided into two samples of 28 firms each, for the estimation and validation samples, respectively. The estimation sample contained ten firms classified as non-performing loans (failed firms) and 18 firms classified as performing loans (non-failed firms). For the validation sample, the bank had classified four firms as failed and 24 firms as non-failed. The estimation and validation samples covered the periods 1985 to 1990 and 1991 to 1994, respectively. The year 1990 was considered the failure period. Failure was defined as a loan being transferred to LART (for the estimation sample) or classified as a loan loss by the bank (for the validation sample), whereas non-failure meant the loan was classified by the bank as a current account and hence still in operation.

Independent variable selection

The literature argues that both financial and non-financial information should be used in model building, with financial ratios as independent variables (Keasey and Watson, 1987). The controversy is about which ratios should be included. In general, there is no theory of ratio selection. The ratios selected depend on their practical use for the problem in question, their ability to improve the discriminating power of models, the frequency and general acceptability of the ratios in relation to their intended use and popularity in the literature, and their appeal to the researcher. Inclusion in a model also depends on performance using statistical measures such as the step-wise method, the direct method and factor analysis.

An early application of MDA was by Altman (1968). Using the MDA method, Altman (1968) included working capital to total assets (WCTA), retained earnings to total assets (RETA), earnings before interest and taxes to total assets (EBITTA), market value of preferred and common equity to book value of total liabilities (MVBV) and sales to total assets (STA) in his model. Others tried to adjust the variables for price-level changes (Ketz, 1978; Norton and Smith, 1979; Mensah, 1983) and current cost accounting (Skogsvik, 1990). Dambolena and Khoury (1980) used stability measures in the MDA model. From the literature, it is argued that debt to equity (DE) ratios, cash flow ratios and liquidity ratios have more discriminative and predictive power. Many researchers have started with many ratios and then pruned them to obtain the ratios which better fit the model or problem under study. For example, Ezzamel et al. (1987) started with 152 ratios and ended up using 53 in the model. Others did not prune the ratios. Methods of pruning ratios which have been used in the literature include the step-wise method through the minimization of Wilks' Lambda (Ohlson, 1980; Dambolena and Khoury, 1980); elimination of statistical correlations, using factor analysis and F-statistics (Skogsvik, 1990); and personal judgment and relevance to the study in question. A sketch of correlation-based pruning is given below.
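To make the pruning step concrete, the following is a minimal sketch, under assumed column names, of the correlation-based elimination mentioned above: for each highly correlated pair, the ratio involved in more correlations with the remaining candidates is dropped. This mirrors the general idea, not the study's exact procedure.

```python
# Minimal sketch of correlation-based ratio pruning (not the study's exact rule):
# drop one ratio from every highly correlated pair, preferring to remove the
# ratio that is strongly correlated with the largest number of other ratios.
import pandas as pd

def prune_correlated_ratios(df: pd.DataFrame, threshold: float = 0.80) -> list[str]:
    corr = df.corr().abs()
    # Count, for each ratio, how many other ratios it is highly correlated with
    high_counts = (corr > threshold).sum() - 1          # subtract self-correlation
    dropped: set[str] = set()
    for a in corr.columns:
        for b in corr.columns:
            if a < b and corr.loc[a, b] > threshold:
                if a in dropped or b in dropped:
                    continue
                # Remove the member of the pair involved in more correlations
                dropped.add(a if high_counts[a] >= high_counts[b] else b)
    return [c for c in df.columns if c not in dropped]

# Usage (hypothetical data): kept = prune_correlated_ratios(estimation[ratios])
```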
Empirical evidence shows that variables and variable coefficients change over time and among models, hence affecting model stability (Barnes, 1987). Making adjustments in order to place all firms on a consistent set of accounting methods, price-adjusted variables, inflation adjustments or a non-reported accounting method significantly improves the predictive ability of multivariate bankruptcy prediction models (Foster, 1986). There is evidence that the age of the firm, firm size and industry group contain information on distress likelihood.

This study used mean financial ratios calculated from the 56 firms' financial statements. Selection of the original 17 ratios was based on their being popular and appearing frequently in the literature, their importance and relevance to the lending bank situation, their being indicated as important by the banking officers' responses, and their having proved to have discriminating and predictive power in models. Included were ratios used by Altman (1968) and Rwegasira (1978). Owing to data unavailability, no cash flow ratios were calculated. The final MDA base model started with 17 ratios categorized as liquidity (seven ratios), working capital (two ratios), leverage (two ratios), performance (one ratio), profitability (four ratios) and firm size (one ratio) (see the Appendix for definitions of all variables). Given the commercial bank situation of short-term lending, it was considered important to include more short-term liquidity and working capital ratios such as bank overdraft to quick assets (BOQA), bank overdraft to current assets (BOCA), current liabilities to owners' equity (CLE), working capital to sales (WCS) and WCTA. Also included was firm size, measured by total assets to equity (TAE). Included in the risk model were coefficient of variation variables for net income (NI), earnings before taxes, bank overdraft, BOCA, BOQA, cash to current liabilities (CCL), quick assets to current liabilities (QACL) and long-term debt to owners' equity (LTDE), which were used as proxy measures of risk.

Dependent variable

The dependent variable for the MDA model was a Z-score which minimizes the misclassification errors. The lending bank situation is considered a binomial problem, whereby offering a loan assumes non-failure and is classified as 1, and rejecting a loan application assumes a failing firm and is classified as 0. The technique used in the SPSS program is based on Bayes' rule: the probability that a case with discriminant score D belongs to group i is estimated by:

$$P(G_i \mid D) = \frac{P(D \mid G_i)\,P(G_i)}{\sum_{j=1}^{g} P(D \mid G_j)\,P(G_j)} \qquad (1)$$

where P(Gi) represents the prior probability that a case belongs to a particular group when no information is available, and P(D/Gi) is the conditional probability of D given the group. The likelihood of membership in the various groups given some information is the posterior probability, denoted P(Gi/D). The approved Z-score was then tested on the validation sample.

For model testing, a variation of five models (including a misclassification cost model) was tested:

(1) an MDA base model, using the estimation and validation samples separately;
(2) an MDA validation model, which tested the MDA base model using the validation sample data;


(3) MDA lagged-period models, which tested the MDA base model on three lagged periods;
(4) an MDA risk model, based on the MDA base model with the approved 12 variables plus eight additional risk (stability) measures; and
(5) the NBC misclassification cost model, based on the bank's (NBC's) misclassification costs and expected cost functions.

All MDA models were tested using both the step-wise and the direct method.

Statistical results and research findings

General statistical tests

Before the model tests, some general statistical tests on both samples regarding normality, equality of covariance matrices, goodness of fit and multi-collinearity were performed. The tests were done using the step-wise and direct methods with within-group and separate-group covariance. The SPSS program uses the Box's M test, which is based on the determinants of the group covariance matrices, to test the equality of group covariance matrices. In addition to testing equality of group covariance matrices, the test is also sensitive to departures from multivariate normality; thus, if the normality assumption is violated, the test tends to call the matrices unequal. To classify cases, SPSS uses the discriminant function values, not the original variables. However, when covariance matrices are assumed identical, classification based on the original variables and on all canonical functions is equivalent. The normality and covariance tests also help to determine the type of model to be used. The Box's M test was used to test both the equality of group covariance matrices and multivariate normality, and the results were then used as the basis for choosing the linear or quadratic discriminant function for the final MDA functions.

The Wilks' Lambda and chi-square (x2) values were used to test the difference between group means and the variability within groups. The smaller the Wilks' Lambda, the more different the two group distributions are from each other and the smaller the variability within the distributions, because small Wilks' Lambda values are associated with functions that have much variability between groups and little variability within groups; and the smaller the Lambda, the higher the x2. Goodness of the function was assessed by observing the eigenvalues: the higher the eigenvalue, the greater the goodness of fit of the function. The statistical results showed an identity matrix, smaller Box's M values, and higher eigenvalues and x2 values. The results indicated that the sample data were approximately normal, and hence the use of a linear transformation for estimating the MDA was appropriate. The higher eigenvalues and x2 values also indicate goodness of fit of the MDA functions. The smaller Wilks' Lambda followed by higher x2 values implied that there were differences between the NPA and PA groups, and that variability within groups was relatively small; hence the two distributions of PA and NPA were different. This helped in classifying a case correctly as belonging to a particular group and hence minimized misclassification errors.

Correlation results on the estimation sample showed close correlations between NI to total assets (NITA) and earnings before interest and taxes to total assets (EBITTA), TAE and CLE, total debt to owners' equity (TDE) and CLE, current assets to current liabilities (CACL) and QACL, WCS and NI to sales (NIS), NIS and LTDE, and TDE and LTDE. For the validation sample, there were close correlations between NITA and EBITTA, TAE and CLE, TDE and CLE, NIS and TDE, TDE and LTDE, RETA and NITA, and RETA and EBITTA. The results show consistency in the correlated variables in both samples, except for the last two pairs in each sample. Based on the multi-collinearity results, the variables removed from further tests were NITA, CLE, TDE, CACL and NIS. Elimination of a variable was based on its importance relative to the other variable with which it was correlated; if it was correlated with more than one variable, then that ratio was a candidate for removal.

Test of variable importance and selection of variables for the base model

In addition to the statistical tests of variable importance, the bank officers were asked what importance they gave to various ratio categories and types. On a scale of 1-5, they were required to indicate a very important ratio with 1 point and a non-important one with 5 points. From the total responses, liquidity ratios ranked first among the ratio categories, followed by leverage and profitability ratios, while cash flow ratios ranked last. Among the liquidity ratios, CCL and CACL ranked first, with an average of 1.4 points each. The second most important ratio was QACL. WCTA and WCS had below-average points. For cash flow ratios, the most important ratio was cash flow to debt. Within the leverage ratios, the DE ratio ranked as the most important. Importance among profitability ratios was in the following order: NITA, RETA and EBITTA. The bank officers were also asked to rank the importance of financial versus non-financial information. Results showed that financial statement information ranked first, followed by repayment experience and type of audit report.

Model variable selection was tested using both the step-wise and the direct method. Testing the estimation sample with all 17 variables included, the step-wise method selected variables X2 (BOQA) and X15 (TDE), with standardized coefficients of 0.94588 and 0.78091, respectively. The unstandardized canonical coefficients were 0.4740938 and 0.0258566, with a constant of -1.0934359. When multi-collinearity was removed, the selected variables were X2 (BOQA) and X16 (WCS), with standardized coefficients of 0.85066 and -0.73104. The unstandardized coefficients were 0.4263676 for X2 and -0.0940894 for X16, with a constant of -1.0557708. Using the direct method (separate- and within-group) with all variables included, all variables except X15 (TDE) were accepted in the final function. The standardized variable coefficients were the same for the within-group and separate-group runs, but the unstandardized coefficients differed. With correlations eliminated, all 12 variables which were not closely correlated with others remained, namely X1, X2, X4, X6, X7, X10, X11, X12, X13, X14, X16 and X17. The standardized coefficients for the within-group and separate-group methods were the same.

Testing variable selection on the validation sample separately showed a shift in the variables and coefficients accepted for the various methods. Using the step-wise method, X1 (BOCA), X7 (LTDE) and X10 (QACL) were accepted as the final variables both with and without eliminating multi-collinearity; however, the coefficients differed between the methods. Using the direct method before eliminating multi-collinearity, the variables not accepted were X15 (TDE) and X17 (WCTA). When correlations were eliminated, the variables increased to 13: the same variables as in the estimation sample, with the addition of X8 (NIS). Again, the coefficients differed. The Wilks' Lambda criterion that drives the step-wise selection is sketched below.
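For reference, the following is a minimal sketch of the two-group Wilks' Lambda statistic that underlies the step-wise selection and the group-separation tests reported above. It assumes the ratio matrices for the PA and NPA groups are available as NumPy arrays and is not the SPSS implementation used in the study.

```python
# Minimal sketch of the two-group Wilks' Lambda and its chi-square approximation.
# `pa` and `npa` are (n_firms x n_ratios) arrays for the two groups (assumed data).
import numpy as np

def wilks_lambda(pa: np.ndarray, npa: np.ndarray) -> tuple[float, float]:
    data = np.vstack([pa, npa])
    n, p = data.shape
    g = 2                                             # two groups: PA and NPA
    # Within-group and total sums-of-squares-and-cross-products matrices
    w = sum(((grp - grp.mean(axis=0)).T @ (grp - grp.mean(axis=0)))
            for grp in (pa, npa))
    t = (data - data.mean(axis=0)).T @ (data - data.mean(axis=0))
    lam = np.linalg.det(w) / np.linalg.det(t)         # smaller => better separation
    # Bartlett's chi-square approximation, with p*(g - 1) degrees of freedom
    chi2 = -(n - 1 - (p + g) / 2) * np.log(lam)
    return lam, chi2
```

A smaller Lambda (and correspondingly larger chi-square) indicates larger between-group relative to within-group variability, which is the pattern the paper reports for the PA and NPA groups.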


In summary, variable selection showed that the step-wise method selected variable X1 (BOCA) five times, X2 (BOQA) four times, X7 (LTDE), X10 (QACL) and X16 (WCS) three times each, and X6 (EBITTA) twice. The direct method selected almost all variables entered at any particular point. Except for X3 and X9, the variables also considered by the bank officers as very important or at least important were CCL (X4), CACL (X3), QACL (X10), DE (X7), NITA (X9), EBITTA (X6) and RETA (X12). Within the direct and step-wise methods, the results showed variable consistency, but there was variability in variable coefficients within and among methods. Based on frequency of selection within methods, the variables BOCA, BOQA, LTDE, QACL and WCS were considered important. The results showed that Altman's (1968) variables (WCTA, RETA, EBITTA and STA) were not continuously and consistently accepted, except when the direct method, which usually takes all variables into consideration, was used.

MDA model tests and results

The basic MDA model was the linear discriminant model based on Altman's original (1968) model, of the form:

$$Z_i = w_1 X_{1i} + w_2 X_{2i} + \cdots + w_n X_{ni} \qquad (2)$$

where:
Zi = the discriminant score for the ith firm;
wn = the discriminant coefficient for the nth independent variable;
Xni = the nth independent variable for the ith firm.

Test results of the base model showed 92.9 and 96.4 percent correct prediction when the direct method was used with all original variables entered, for the estimation and validation samples separately. When the step-wise method was used, correct prediction was 78.5 and 92.9 percent for the estimation and validation samples, respectively. The misclassification errors were 20 and 25 percent for type I and 0 percent each for type II. Using the direct method for the validation and estimation samples, type I error ranged from 30 to 0 percent and type II from 5.6 to 0 percent. Considering that the direct method resulted in lower misclassification errors, the final base MDA model adopted for further tests had 12 variables. The model achieved 92.86 percent correct classification, with 100 percent correct classification of PA and 80 percent of NPA; its type I and type II errors were 20 and 0 percent, respectively. The model also had the highest eigenvalue (1.55) when multi-collinearity was removed, indicating the goodness of fit of the model. Based on the above, the model recommended for final testing on the validation sample and the three lagged periods was given as:

$$Z_i = w_1 X_{1i} + w_2 X_{2i} + \cdots + w_n X_{ni} \qquad (3)$$

$$Z_i = 0.04X_{1i} - 1.14X_{2i} + 0.07X_{4i} - 0.05X_{6i} + 1.09X_{7i} + 0.58X_{10i} + 0.32X_{11i} + 0.01X_{12i} + 0.76X_{13i} + 0.57X_{14i} + 2.73X_{16i} - 1.37X_{17i} \qquad (4)$$
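To illustrate how the fitted function above would be applied in practice, the following is a minimal sketch that scores firms with the reconstructed coefficients of equation (4) and tabulates type I and type II error rates. The zero cut-off and the DataFrame columns are illustrative assumptions, since the paper does not state the cut-off explicitly.

```python
# Minimal sketch: applying the fitted discriminant function (equation (4)) to
# classify firms and count type I / type II error rates. The zero cut-off and
# the input columns are assumptions for illustration, not values from the paper.
import pandas as pd

COEFFS = {"X1": 0.04, "X2": -1.14, "X4": 0.07, "X6": -0.05, "X7": 1.09,
          "X10": 0.58, "X11": 0.32, "X12": 0.01, "X13": 0.76, "X14": 0.57,
          "X16": 2.73, "X17": -1.37}
CUTOFF = 0.0          # assumed: scores above the cut-off are treated as performing

def classify(firms: pd.DataFrame) -> pd.DataFrame:
    out = firms.copy()
    out["z_score"] = sum(w * firms[x] for x, w in COEFFS.items())
    out["predicted_performing"] = out["z_score"] > CUTOFF
    return out

def error_rates(scored: pd.DataFrame) -> tuple[float, float]:
    failed = scored[~scored["actual_performing"]]
    non_failed = scored[scored["actual_performing"]]
    type_1 = failed["predicted_performing"].mean()           # failed called non-failed
    type_2 = (~non_failed["predicted_performing"]).mean()    # non-failed called failed
    return type_1, type_2
```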

MDA base model tested on the validation sample

Testing the validity of the base model on the validation sample showed correct classification of 82.1 percent, with PA and NPA classified correctly at 91.7 and 25 percent, respectively.

The firms which were misclassified were two small firms, one medium firm and two large firms. The misclassified medium firm was a private firm, while the large firm was a public one. The accuracy of the validation sample was lower than that of the estimation sample for the following possible reasons:

• The time frame for the validation sample was short (three years) while that for the estimation sample was long, hence the mean ratios and the stability of the ratio variables were drawn from a short period. Usually, a good time frame for calculating meaningful means and variability measures is between five and ten years.
• The change in banking policy and regulation in 1991 could have had an impact on the difference, as some of the data for the validation sample fall after the BFIA, 1991 (i.e. 1992, 1993 and 1994).

The summary of the classification and prediction results for both the estimation and validation samples is presented in Table I.

Table I. Summary of estimation and validation samples: actual classification and MDA prediction matrices (figures in percent)

                              Actual: Failed (NPA)            Actual: Non-failed (PA)
Prediction                    Estimation    Validation        Estimation    Validation
Failed (NPA)                  80            25                0             8.3
Non-failed (PA)               20            75                100           91.7

MDA model performance for three-year lagged periods

Considering 1990 as the failure date, the model was further tested on its performance one (1989), two (1988) and three (1987) years before failure. When the MDA base model was tested for the three lagged periods, the results indicated that the one-year lag had 78.5 and 82.1 percent correct classification for the step-wise and direct methods, respectively. For the two-year lag, the results indicated correct classification of 71.4 percent for both the step-wise and direct methods. The three-year lag had 75 and 85.1 percent correct classification for the step-wise and direct methods, respectively. Overall, the results showed that the two-year lag had higher correct classification using the direct method, while the one-year lag had higher correct classification for the step-wise method. Generally, the differences in model performance were not significant within or among methods. The results also showed a shift in the firms which were misclassified. From the results, it can be concluded that, although the models indicated lower correct classification percentages, MDA model performance is still good even three years before failure (Table II).

Table II. MDA model performance for lagged periods

Details               Correct (%)  PA (%)  NPA (%)  Type I (%)  Type II (%)  Variables      Misclassified firms
Lag 1 year (1989)
  Step-wise           78.6         100     40       60          0            X2 and X16     1, 2, 3, 4, 6 and 9
  Direct              82.14        88.9    70       30          11.1         12 variables   1, 3, 9, 11 and 15
Lag 2 years (1988)
  Step-wise           71.4         88.9    40       60          11.1         X6             1, 6, 7, 10, 11, 15, 18 and 19
  Direct              85.7         88.9    80       20          11.1         12 variables   1, 3, 19 and 22
Lag 3 years (1987)
  Step-wise           75           94.4    40       60          5.6          X6             1, 3, 6, 7, 10, 11 and 19
  Direct              85.1         94.4    66.7     33.3        5.6          12 variables   1, 5, 6 and 10

The MDA risk model

All 12 variables in the final base MDA model were used to develop the risk assessment model. In addition, the coefficients of variation of NI, EBIT, BOCA, BOQA, CCL, QACL and LTDE were included. The coefficient of variation of long-term debt to owners' equity was used as a measure of the overall risk of the firm. Because of the importance of short-term liquidity to the bank and the variability of the short-term loans (bank overdraft), it was decided also to include the coefficients of variation of the overdraft (OVER), BOCA, BOQA, CCL and QACL. Although the results indicated close correlations between BOQA and the net income coefficient of variation (NICOV) (-0.76148), DECOV and NICOV (-0.77399), LTDE and WCS (-0.73715), and the QACL coefficient of variation and TAE (0.90005), no variables were excluded from the MDA analysis due to multi-collinearity.

Again, some statistical tests were performed before the risk model was developed. The risk MDA model showed eigenvalues of 1.0566 for the step-wise method and 3.6614 for the direct method. Its Wilks' Lambda and x2 values were 0.486230 and 17.666, respectively, for the step-wise method, and 0.214529 and 24.629 for the direct method. The Box's M values were 66.24764 and 0.22059 for the step-wise and direct methods, respectively. Given that the eigenvalues and x2 values were higher, there is evidence that the risk model was also good, and hence it passed the test of goodness of fit.

Using the estimation sample, the MDA results indicated that BOQA (X2), OVERCOV (X24) and WCS (X16) were the accepted variables for the final MDA risk model when the step-wise method was used. The model had 78.57 percent correct classification, with type I and type II errors of 40 and 11.1 percent, respectively. When the direct method was used, the results indicated that, in addition to the originally accepted 12 variables, the following eight new variables were added: X18, X19, X20, X21, X22, X23, X24 and X25 (see the Appendix for definitions of the variables). Using the coefficients of variation of the eight variables as measures of risk in the risk model, all variables were accepted for the direct method, and X2 (BOQA), X16 (WCS) and X24 (OVERCOV) for the step-wise method. Model performance indicated 100 percent correct classification using the estimation sample data and 60.71 percent correct classification for the validation sample. The type I error was 15 percent, while the type II error was very high, reaching 75 percent. The results showed an improvement in classification when stability measures were included. Contributing factors for the large differences in the model could be the same as mentioned earlier, i.e. the validation sample covering a short period (three years) and the changes in banking policy and regulation in 1991.

The results have proved that using MDA helps both in classifying applicants more correctly in terms of their credit worthiness and in assessing their risk. It implies, therefore, that if NBC had used the MDA model for the credit scoring and risk assessment of its applicants, it could have been more objective in determining the credit worthiness of a firm and assessed its risk accordingly. In this way, the bank could have avoided or limited the extent of default and allocated its resources efficiently and effectively. A sketch of how the stability (coefficient of variation) measures can be constructed is given below.
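As an illustration of how the stability measures could be constructed, the following is a minimal sketch that computes coefficient-of-variation proxies from each firm's yearly ratio series; the long-format data layout is an assumption for illustration.

```python
# Minimal sketch: coefficient-of-variation (CoV) risk proxies computed from each
# firm's yearly ratio series, used alongside the mean ratios in the risk model.
# The long-format layout (one row per firm-year) is an assumed data layout.
import pandas as pd

RISK_VARS = ["NI", "EBIT", "OVER", "BOCA", "BOQA", "CCL", "QACL", "LTDE"]

def coefficient_of_variation(yearly: pd.DataFrame) -> pd.DataFrame:
    """Return one CoV value per firm and variable: std / |mean| over the years."""
    grouped = yearly.groupby("firm_id")[RISK_VARS]
    cov = grouped.std() / grouped.mean().abs()
    return cov.add_suffix("_COV")                    # e.g. NI -> NI_COV

# Usage (hypothetical): risk_features = coefficient_of_variation(firm_years)
```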

The results showed that firms' actual classification and the model predictions have nothing to do with firm size or ownership type (private or public/government). However, political and government intervention and the institutional characteristics existing in a particular country could affect risk assessment methods (Chijoriga, 2000). Testing the MDA Z-score of -0.45 as the average point for an average good firm (A), the risk classifications on either side could be rated as in Table III.

Table III. Firms' risk classification for the estimation sample

Z-score              Rating   Riskiness                 Firm number
Z > 3.0              AAA      Excellent company         14
2.0 < Z ≤ 3.0        AAA      Very good company         20, 21 and 24
1.0 < Z ≤ 2.0        AA       Medium good company       12, 13, 17, 25, 22, 23, 26 and 27
-0.45 < Z ≤ 1.0      A        Average good company      11, 15, 16, 18, 19 and 28
-1.0 < Z ≤ -0.45     BBB      Average risky company     1, 3 and 6
-2.0 < Z ≤ -1.0      BB       Medium risky company      10, 5, 7 and 9
Z ≤ -2.0             B        Very risky company        2, 4 and 8
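The band structure in Table III can be expressed as a simple lookup; the following is a minimal sketch of such a mapping, using the thresholds and labels of the table (the function itself is illustrative).

```python
# Minimal sketch: mapping an MDA Z-score to the risk bands of Table III.
# Thresholds and ratings follow the table; the function itself is illustrative.
def risk_rating(z: float) -> tuple[str, str]:
    bands = [
        (3.0,   "AAA", "Excellent company"),
        (2.0,   "AAA", "Very good company"),
        (1.0,   "AA",  "Medium good company"),
        (-0.45, "A",   "Average good company"),
        (-1.0,  "BBB", "Average risky company"),
        (-2.0,  "BB",  "Medium risky company"),
    ]
    for lower, rating, label in bands:
        if z > lower:
            return rating, label
    return "B", "Very risky company"

# Example: risk_rating(-0.2) returns ("A", "Average good company")
```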


The NBC misclassification costs and resource misallocation

In order to calculate the NBC's relative misclassification costs and resource misallocation, the following model, adapted from Altman et al. (1977), was used:

$$C_1 = 1 - \frac{LLR}{GLL} \qquad (5)$$

$$C_2 = r - i \qquad (6)$$

where C1 = the type I error cost, C2 = the type II error cost, and LLR = the amount of loan losses recovered (loans collected by LART after the expiry date of the loan); in this study, the loan losses recovered by LART up to 12 February 1996 totalled Tshs 2,777,425,49. GLL = gross loan losses (charged off), i.e. the total loans transferred to LART; the total amount transferred to LART from NBC as at 30 June 1991 was Tshs 18,886,550,000. The type II error variables are defined as: r = the effective interest rate on the loan (the yearly average interest rates for 1985 to 1990, as per the 30 June 1994 BOT annual report, were used to calculate the average interest rate); and i = the effective opportunity cost to the bank resulting from a successful loan being rejected, i.e. the return on alternative investment choices such as a risk-free government bond (in this study, treasury bill interest rates were used as proxies for government bonds). Based on the above data, the type I error cost was 85.3 percent and the type II error cost 10.03 percent. The expected loss to the bank is then:

$$E(MCF) = p_1 M_{12} C_1 + p_2 M_{21} C_2 \qquad (7)$$

where p1 and p2 are the prior probabilities for NPA (p1 = 0.35714) and PA (p2 = 0.64286), respectively; M12 (20 percent) and M21 (0 percent) are the type I and type II errors; and C1 (85.3 percent) and C2 (10.8 percent) are the type I and type II error costs, respectively. NBC's expected misclassification cost would have been 6.1 percent, implying that for any lending decision the bank made, it incurred an expected misclassification cost of about 6.1 percent. A worked check of this calculation is sketched below.
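As a worked check of equation (7) and the portfolio figure quoted below, the following is a minimal sketch using the probabilities, error rates and error costs reported above (C2 is taken as 10.8 percent, although its contribution is zero here because M21 = 0).

```python
# Worked check of the expected misclassification cost (equation (7)) using the
# figures reported in the text; C2's contribution is zero because M21 = 0.
p1, p2 = 0.35714, 0.64286        # prior probabilities of NPA and PA
m12, m21 = 0.20, 0.0             # type I and type II error rates
c1, c2 = 0.853, 0.108            # type I and type II error costs

expected_cost_rate = p1 * m12 * c1 + p2 * m21 * c2
print(f"Expected misclassification cost rate: {expected_cost_rate:.3%}")   # ~6.1%

portfolio = 120_789_432_868      # Tshs, loan portfolio as at 31 December 1995
print(f"Expected loss on the portfolio: Tshs {expected_cost_rate * portfolio:,.0f}")
# ~Tshs 7.36 billion, in line with the Tshs 7,359,468,712 reported in the paper
```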
Using the loan portfolio as at 31 December 1995 of Tshs 120,789,432,868, the bank could have made a loss equal to Tshs 7,359,468,712 (US$13,141,908). This loan loss is the gain/saving the bank could have made had it used an objective assessment method. The bank could also use the expected misclassification cost rate to make provisions for its bad debts.

Conclusion

The results have shown that financial ratios are good predictor variables of firms' performance. Important ratios in the lending environment are the liquidity and leverage ratios. These ratios can be used in an MDA model to correctly classify, discriminate and predict a customer or loan applicant. The variables can also be used to assess the credit risk of a bank customer or loan applicant. From the study, it can be concluded that MDA is a good and objective credit classification method with high correct classification in the year of failure. It also showed that the inclusion of risk stability measures improves correct credit classification and can assist in correctly grouping customers into their respective risk classes. Furthermore, MDA also helps the bank to determine objectively the misclassification costs and expected misclassification errors, as well as the provisions for bad debts. Overall, the bank could have reduced its non-performing loans and credit risk exposure if it had used the MDA method in the loan evaluation and classification process. The study has shown that quantitative CSMs improve management decision making compared with subjective assessment methods. The recent financial crisis has emphasized the need for banks and lending institutions to develop objective credit scoring methods and to institute a prudent risk assessment culture to limit the extent of failure or loan defaults. However, as pointed out by Altman (2002) and other researchers, credit risk models should be used in conjunction with other decision-making criteria. Therefore, a combination of both qualitative and quantitative models in credit assessment should be considered.
References

Altman, E.I. (1968), "Financial ratios, discriminant analysis and the prediction of corporate bankruptcy", Journal of Finance, Vol. XXIII No. 4, pp. 589-609.
Altman, E.I. (2002), "Corporate distress prediction models in a turbulent economic and Basel II environment", Credit Rating: Methodologies, Rationale and Default Risk, Risk Books, London.
Altman, E.I., Haldeman, R.G. and Narayanan, P.N. (1977), "Zeta analysis: a new model to identify bankruptcy risk of corporations", Journal of Banking and Finance, June, pp. 29-51.
Barnes, P. (1987), "The analysis and use of financial ratios: a review article", Journal of Business Finance & Accounting, Vol. 14 No. 2, pp. 449-61.
Basel (2004), "Bank failures in mature economies", Working Paper No. 13, Basel Committee on Banking Supervision, Basel.
Bluhm, C., Overbeck, L. and Wagner, C. (2003), Credit Risk Modeling, Wiley, New York, NY.
Chijoriga, M.M. (1997), "An application of credit scoring methods and financial distress prediction models to commercial bank lending: the case of Tanzania", PhD thesis, Vienna.
Chijoriga, M.M. (2000), "The interrelationship of bank failures and political interventions in Tanzania", paper presented at the Nordic Africa Institute's Conference on Financial Institutions in the Political Economy, 11-14 June 1998, Bergen; published in the African Journal of Finance and Management, Vol. 9 No. 1, pp. 1-23.


Coats, P.K. and Fant, F.L. (1993), "Recognizing financial distress patterns using a neural network tool", Financial Management Journal, Autumn, pp. 142-55.
Dambolena, I.G. and Khoury, S.J. (1980), "Ratio stability and corporate failure", Journal of Finance, Vol. 35, September, pp. 1017-26.
Dietrich, J.R. and Kaplan, R.S. (1982), "Empirical analysis of the commercial loan classification decision", The Accounting Review, January, pp. 18-38.
Dominic, C. (1993), Facing Up to the Risks: How Financial Institutions Can Survive and Prosper, Wiley, New York, NY.
(The) Economist (1993), "A survey of international banking", The Economist, April 10 (special issue).
Ezzamel, M., Brodie, J. and Mar-Molinero, C. (1987), "Financial patterns in UK manufacturing companies", Journal of Business Finance & Accounting, Vol. 14 No. 4, pp. 519-35.
Foster, G. (1986), Financial Statement Analysis, 2nd ed., Prentice-Hall, Englewood Cliffs, NJ.
Gepp, A. and Kumar, K. (2008), "The role of survival analysis in financial distress prediction", International Research Journal of Finance and Economics, No. 16.
Greuning, H. and Bratanovic, S.B. (2003), Analyzing and Managing Banking Risk: A Framework for Assessing Corporate Governance and Financial Risk, 2nd ed., The World Bank, Washington, DC.
Karbhari, Y. and Sori, Z.M. (2008), "Prediction of corporate financial distress: evidence from Malaysian listed firms during the Asian financial crisis", working paper series, Social Science Research Network, Madison, WI.
Kealhofer, S. (2003), "Quantifying credit risk I: default prediction", Financial Analysts Journal, Vol. 59 No. 1, pp. 30-44.
Keasey, K. and Watson, R. (1987), "Non-financial symptoms and the prediction of small company failure: a test of Argenti's hypotheses", Journal of Business Finance & Accounting, Vol. 14 No. 3, pp. 335-54.
Ketz, J.E. (1978), "The effect of general price-level adjustments on the predictive ability of financial ratios", Journal of Accounting Research, Vol. 16 (supplement), pp. 273-84.
Liou, D.-K. and Smith, M. (2006), "Macroeconomic variables in the identification of financial distress", working paper series, Social Science Research Network, Madison, WI.
Marais, M.L., Patell, J. and Wolfson, M.A. (1984), "The experimental design of classification models: an application of recursive partitioning and bootstrapping to commercial bank loan classifications", Studies on Current Econometric Issues in Accounting Research, Journal of Accounting Research, Vol. 22 (supplement), pp. 87-114.
Mensah, Y.M. (1983), "The differential bankruptcy predictive ability of specific price-level adjustments: some empirical evidence", The Accounting Review, Vol. 58 No. 2, pp. 228-46.
Norton, C.L. and Smith, R.E. (1979), "A comparison of general price level and historical cost statements in the prediction of bankruptcy", The Accounting Review, January, pp. 72-87.
Ohlson, J.S. (1980), "Financial ratios and the probabilistic prediction of bankruptcy", Journal of Accounting Research, Spring, pp. 109-31.
Purnanandam, A.K. (2007), "Financial distress and corporate risk management: theory and evidence", working paper series, Social Science Research Network, Madison, WI.
Rwegasira, K. (1978), "The application of a credit strength index to improve financial analysis in Tanzanian commercial bank lending", unpublished PhD thesis, Columbia University, New York, NY.
Skogsvik, K. (1990), "Current cost accounting ratios as predictors of business failure: the Swedish case", Journal of Business Finance & Accounting, Vol. 17 No. 1, pp. 137-60.


Uyemura, A.D.G. and Van Deventer, D.R. (1993), Risk Management in Banking: The Theory and Applications of Asset and Liability Management, Banking Publication, Honolulu, HI.
Williams, E.J. (1995), "Risk management comes of age", Journal of Commercial Lending, Vol. V, January, pp. 17-26.
Yim, J. and Mitchell, H. (2005), "A comparison of corporate distress prediction models in Brazil: hybrid neural networks, logit models and discriminant analysis", Nova Economia (Belo Horizonte), Vol. 15 No. 1, pp. 73-93.
Zmijewski, M.E. (1984), "Methodological issues related to the estimation of financial distress prediction models", Studies on Current Econometric Issues in Accounting Research, Journal of Accounting Research, Vol. 22 (supplement).


Appendix. Definitions of ratios used in the MDA base model

Liquidity ratios:
X1 = BOCA = bank overdraft to current assets
X2 = BOQA = bank overdraft to quick assets
X3 = CACL = current assets to current liabilities
X4 = CCL = cash to current liabilities
X5 = CLE = current liabilities to owners' equity
X10 = QACL = quick assets to current liabilities
X11 = QATA = quick assets to total assets

Working capital ratios:
X17 = WCTA = working capital to total assets
X16 = WCS = working capital to sales

Performance ratio:
X13 = STA = sales to total assets

Leverage ratios:
X7 = LTDE = long-term debt to owners' equity
X15 = TDE = total debt to owners' equity

Profitability ratios:
X6 = EBITTA = earnings before interest and taxes to total assets
X8 = NIS = net income to sales
X9 = NITA = net income to total assets
X12 = RETA = retained earnings to total assets

Firm size:
X14 = TAE = total assets to owners' equity

Definitions of other variables used for the MDA risk model:
X18 = BOCACOV = BOCA coefficient of variation
X19 = BOQACOV = BOQA coefficient of variation
X20 = CCLCOV = CCL coefficient of variation
X21 = EBICOV = EBIT to total assets coefficient of variation
X22 = DECOV = long-term debt to owners' equity coefficient of variation
X23 = NICOV = net income coefficient of variation
X24 = OVERCOV = overdraft coefficient of variation
X25 = QACLCOV = QACL coefficient of variation

Corresponding author
Marcellina Mvula Chijoriga can be contacted at: cellina@fcm.udsm.ac.tz


