CHAPTER 5

DATA ANALYSIS

The first step in the data analysis stage was to verify the quality of the collected data in order to finalize the tools required for further analysis.

5.1 ANALYZING THE QUALITY OF DATA

This section describes the procedures adopted in verifying and cleaning the data for further analysis. This included the following steps:

Verification of missing values

Identification of outliers

Analysis of normality

Analysis of validity and reliability

5.1.1 Verification of Missing Values

The responses collected from 500 respondents using a structured questionnaire were entered in SPSS 17 under different variable names. To identify missing values, a frequency test was run. Missing responses were noticed in 105 cases where respondents had failed to mark their responses to certain questions that were critical from the analysis point of view, and these cases were therefore deleted. After deletion of the cases with missing responses, 395 usable responses were obtained.

5.1.2 Identification of Outliers

Outliers arise for various reasons, such as data entry errors, sampling errors and biased responses from respondents. Some outliers were noticed in this research and were treated in the following manner. Statisticians have devised several ways to detect univariate outliers. Grubbs' test is particularly easy to follow. This method is based on Z, which is calculated as the difference between the outlier and the mean divided by the SD. If Z is large, the value is far from the others.

Hair et al (1998) suggest that, as a common rule of thumb, z scores can range from ±3 to ±4 for samples of more than 80. In this research, to detect outliers, an outlier calculator which performs Grubbs' test, available at www.graphpad.com, was used. Three outliers located by this procedure were eliminated.
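As an illustration of the calculation described above, the following sketch (in Python, on hypothetical scale scores) computes the Grubbs statistic Z and compares it with the two-sided critical value; it only mirrors what the GraphPad calculator does and is not the tool used in the study.

```python
import numpy as np
from scipy import stats

def grubbs_statistic(x):
    """Grubbs' Z: distance of the most extreme value from the mean, in SD units."""
    x = np.asarray(x, dtype=float)
    z = np.abs(x - x.mean()) / x.std(ddof=1)
    return z.max(), int(z.argmax())

def grubbs_critical(n, alpha=0.05):
    """Two-sided critical value for Grubbs' test at significance level alpha."""
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    return (n - 1) / np.sqrt(n) * np.sqrt(t ** 2 / (n - 2 + t ** 2))

scores = np.array([3.2, 3.4, 3.1, 3.3, 3.2, 3.5, 3.4, 1.1])  # hypothetical responses
g, idx = grubbs_statistic(scores)
print(g > grubbs_critical(len(scores)), idx)  # True -> case idx is a univariate outlier
```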

The multivariate assessment of outliers was conducted using the DfBeta influence statistics method in SPSS 17. To estimate the effect of outliers in the study, a cut-off rule based on the sample size was used, where n = sample size, i.e. 384 (the minimum required); hence all cases with DfBeta > 0.0914 were considered outliers. This procedure detected 7 cases as outliers, which were eliminated.
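The formula for the rule itself appears to have been lost in the source document; the cut-off most commonly cited for DfBeta influence statistics (Belsley, Kuh and Welsch) is reproduced below as an assumed reconstruction, noting that the threshold of 0.0914 reported above may derive from a variant of this rule.

\[
\left| \mathrm{DfBeta} \right| \;>\; \frac{2}{\sqrt{n}}, \qquad n = \text{sample size}
\]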

Outliers represent cases whose scores are substantially different from all others in a particular set of data. A univariate outlier has an extreme score on a single variable, whereas a multivariate outlier has extreme scores on two or more variables (Kline, 2005). A common approach to the detection of multivariate outliers is the computation of the squared Mahalanobis distance (D2) for each case. This statistic measures the distance in standard deviation units between a set of scores for one case and the sample means for all variables (centroids). Typically, an outlying case will have a D2 value that stands distinctively apart from all the other D2 values. Therefore, to counter-check for multivariate outliers, the squared Mahalanobis distance (D2) was verified from the Amos output. A review of these values showed minimal evidence of serious multivariate outliers.
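A minimal sketch of the D2 computation described above is given below in Python; the data matrix is hypothetical, and in the study itself the values were read from the Amos output.

```python
import numpy as np

def squared_mahalanobis(X):
    """Squared Mahalanobis distance of each case from the sample centroid."""
    X = np.asarray(X, dtype=float)
    diff = X - X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    return np.einsum('ij,jk,ik->i', diff, inv_cov, diff)

# hypothetical item-score matrix: rows = respondents, columns = variables
X = np.random.default_rng(1).normal(loc=4.0, scale=0.7, size=(395, 6))
d2 = squared_mahalanobis(X)
print(np.sort(d2)[-5:])  # the largest D2 values; a value standing apart flags a case
```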

5.1.3 Analysis of Normality

Many statistical methods require the assumption that the observed variables are normally distributed. With multivariate statistics, the assumption is that the combination of variables follows a multivariate normal distribution. Since there is no direct test for multivariate normality, each variable is generally tested individually, and the variables are assumed to be multivariate normal if they are individually normal, though this may not necessarily be the case. In SEM, model estimation and testing are usually based on the validity of the multivariate normality assumption, and lack of normality will adversely affect goodness-of-fit indices and standard errors (Baumgartner and Homburg 1996; Hulland et al 1996; Kassim 2001).

To assess normality, skewness and kurtosis are commonly used by statisticians. Skewness refers to the symmetry of a distribution, whereas kurtosis relates to the peakedness of a distribution. A distribution is said to be normal when the values of skewness and kurtosis are equal to zero (Tabachnick and Fidell 2001). However, there are few clear guidelines about how much non-normality is problematic. It has been suggested that absolute values of univariate skewness indices greater than 3.0 describe extremely skewed data sets (Chou and Bentler 1995). Regarding kurtosis, a kurtosis index greater than 10.0 may suggest a problem.

Analysis of univariate normality using the Kolmogorov-Smirnov test with Lilliefors significance correction revealed that none of the variables were normally distributed. However, skewness values were in the range of -0.9 to 0.1, showing that most of the data were negatively skewed. Non-normality of the data was anticipated, as most of the respondents preferred to agree or strongly agree to the survey dimensions, indicating that the bulk of the values (including the median) lie to the right of the mean. In this study, all the variables fell below a kurtosis value of 3, indicating that kurtosis was not problematic in this research.
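The univariate checks described above can be reproduced with standard tools; the following sketch, using scipy on a hypothetical Likert-type item, computes skewness, excess kurtosis and a Kolmogorov-Smirnov statistic (the Lilliefors-corrected version used in the study is available in statsmodels).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
item = np.clip(rng.normal(4.1, 0.8, size=395).round(), 1, 5)  # hypothetical Likert item

print("skewness:", stats.skew(item))             # 0 for a perfectly symmetric distribution
print("excess kurtosis:", stats.kurtosis(item))  # 0 for a normal distribution

# K-S test against a normal distribution with estimated mean and SD;
# statsmodels.stats.diagnostic.lilliefors applies the Lilliefors correction
stat, p = stats.kstest(item, 'norm', args=(item.mean(), item.std(ddof=1)))
print("K-S statistic:", stat, "p-value:", p)
```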

Amos 16.0 provides normality checks for data, including skewness and kurtosis indices and Mardia's coefficient, which is a test of multivariate normality. The critical ratio attached to the multivariate kurtosis value in the Amos output represents Mardia's normalized estimate of multivariate kurtosis. Bentler (2005) has suggested that, in practice, values > 5.00 are indicative of data that are non-normally distributed. To correct for non-normality in the underlying database, the Bollen-Stine bootstrap and its associated p-value were used in this study. For all constructs, to moderate the effect of multivariate non-normality, maximum likelihood (ML) estimation, which is relatively robust against modest departures from multivariate normality (Anderson and Gerbing 1988; Sweeney 2000; Tabachnick and Fidell 2001), was applied with the Bollen-Stine bootstrap procedure. A bootstrap sample of 1000 was adopted in this study.

5.1.4 Analysis of Validity and Reliability

In undertaking a statistical analysis, unidimensionality should always be assessed first, prior to examining reliability and validity (Hair et al. 1995). This step reduces the possibility of misspecifications (Gerbing and Anderson 1988), because the analysis of reliability and validity is based on the assumption of unidimensionality (Nunnally and Bernstein 1994). Validity determines whether the scale truly measures what it was intended to measure.

Testing the reliability of survey data is a pre-requisite for data analysis and inference. Reliability analysis tests whether a scale consistently reflects the construct it measures (Churchill 1979; Nunnally and Bernstein 1994). By consistency it is firstly meant that a respondent should score the questionnaire the same way at different times. Secondly, two respondents with the same attitude towards service quality should score the survey identically. According to Field (2005), values of Cronbach's alpha between 0.7 and 0.8 indicate acceptable reliability. Reverse-scored items make a difference in this regard, and in extreme cases they can lead to a negative Cronbach's alpha (Field 2005). In this study reverse-scored items were not included, as they may lead to problems in the reliability of the data if respondents answer without properly understanding the question. Both reflective and formative measures were used in this study, and the approaches to testing the reliability of these constructs differ. The reliability of the reflective constructs was ascertained using the above criterion.
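For the reflective constructs, the consistency criterion above reduces to the usual Cronbach's alpha computation; a sketch (in Python, on hypothetical item scores) is shown below.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# hypothetical scores for one reflective construct (five items, 395 respondents)
scores = np.random.default_rng(3).integers(3, 6, size=(395, 5)).astype(float)
print(round(cronbach_alpha(scores), 3))
```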

As formative constructs are composed of different aspects of a construct, their indicators need not correlate with each other. Diamantopoulos and Winklhofer (2001) stated that "it is not clear that reliability is a concept that applies well to formative constructs". This position was also supported by Diamantopoulos and Siguaw (2006) and Rossiter (2002), and it was hence concluded that no reliability test is mandatory for formative indicators. Reliability evaluation for formative constructs consists in ascertaining the absence of multicollinearity (Diamantopoulos and Siguaw 2006). Multicollinearity can be tested using the Variance Inflation Factor (VIF), as illustrated after the guidelines below. The guidelines applied in this regard were as follows:

VIF should be less than 3.3 (Diamantopoulos and Siguaw 2006).

A VIF of less than 10 indicates the absence of collinearity (Hair et al 1998).
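The VIF check referred to above can be illustrated as follows; the sketch uses the variance_inflation_factor helper from statsmodels on hypothetical indicator scores and is not the exact procedure run in the study.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# hypothetical indicator scores for one formative construct
rng = np.random.default_rng(11)
X = rng.normal(size=(395, 4))

exog = sm.add_constant(X)  # VIF is computed on the full design matrix
vifs = [variance_inflation_factor(exog, i) for i in range(1, exog.shape[1])]
print(vifs)  # values above 3.3 (or, more leniently, 10) flag multicollinearity
```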

Various validity and reliability criteria adopted in this study were explained in Table 4.5 above.

5.2 ANALYSIS OF SERVICE QUALITY DIMENSIONS

The next step in the analysis procedure was to explore the service quality construct and confirm the existence of the various dimensions by which it was assumed to be formed. This was done in two stages:

Exploratory factor analysis using Factor 7.0, developed at the Rovira i Virgili University, Spain

Confirmatory factor analysis using Amos 16.0

5.2.1 Exploratory Factor Analysis

The indicator variables related to the service quality construct were subjected to an exploratory factor analysis to identify the underlying factors and to test whether the factors extracted are similar to the dimensions proposed in the study. The analysis was conducted with Factor 7.0, a freeware program developed at the Rovira i Virgili University, Spain, by Urbano Lorenzo-Seva and Pere J. Ferrando (2005). An important feature of this program is that it generates goodness-of-fit statistics for the data simultaneously. Twenty-eight scale items were used to measure service quality in the banking context, as explained in the previous chapters. In order to identify the naturally occurring dimensions of service quality, all 28 items were subjected to a factor analysis. This approach was recommended in the literature as a means of identifying actual, rather than perceived, factor groupings (Rosen and Surprenant, 1998). The role of factor analysis is to identify the components or factors that derive from a set of variables, i.e. to identify the subset of correlated variables that form a subset which is reasonably uncorrelated with other subsets (Hair et al 1998; Tabachnick and Fidell 2001).

An exploratory maximum likelihood factor analysis with varimax rotation was performed, as it incorporates common, specific and error variance and is appropriate when the objective is to identify the minimum number of factors associated with the maximum explanation of variance (Hair et al 1998). Items that loaded higher than 0.5 were retained, while low-loading items were dropped. In general, a higher factor loading is considered better, and typically loadings below 0.30 are not interpreted. As a general rule of thumb, loadings above 0.71 are excellent, 0.63 very good, 0.55 good, 0.45 fair, and 0.32 poor (Tabachnick and Fidell 2007).
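A comparable extraction can be sketched in Python as follows; scikit-learn's FactorAnalysis (with varimax rotation, available from version 0.24) is used here purely as an illustrative substitute for Factor 7.0, and the response matrix is hypothetical.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# hypothetical 28-item response matrix (rows = respondents)
rng = np.random.default_rng(5)
X = rng.normal(size=(395, 28))

fa = FactorAnalysis(n_components=5, rotation='varimax', random_state=0)
fa.fit(X)

loadings = fa.components_.T                     # 28 x 5 matrix of factor loadings
retained = np.abs(loadings).max(axis=1) >= 0.5  # retention rule used in the study
print("items retained:", int(retained.sum()))
```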

The exploratory maximum likelihood factor analysis identified five components with an eigenvalue greater than 1, which together explained 66.36 percent of the variance, indicating a good fit; hence it was assumed that the model represents the data. The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.926 and the Bartlett test of sphericity was significant (p<0.001), with a chi-square value of 7203.0 on 378 degrees of freedom, which was considered very good for further analysis and provided support for the factorization (Table 5.1). The goodness-of-fit statistics are shown in Table 5.2.
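For reference, the two adequacy checks reported in Table 5.1 can be computed directly from the correlation matrix; the sketch below (Python, hypothetical data) implements the standard Bartlett sphericity statistic and the KMO measure.

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(X):
    """Bartlett's test of sphericity: H0 is that the correlation matrix is an identity."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return stat, df, chi2.sf(stat, df)

def kmo(X):
    """Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(np.asarray(X, dtype=float), rowvar=False)
    inv = np.linalg.inv(R)
    partial = -inv / np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    off = ~np.eye(R.shape[0], dtype=bool)
    return (R[off] ** 2).sum() / ((R[off] ** 2).sum() + (partial[off] ** 2).sum())

X = np.random.default_rng(5).normal(size=(395, 28))  # hypothetical responses
print(bartlett_sphericity(X), kmo(X))
```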

Table 5.1 Adequacy of the correlation Matrix

Table 5.2 Goodness of fit statistics after EFA of service quality construct

Table 5.3 below provides the details of each factor along with the items contributing to it and the component loadings for each item. The total number of items for the service quality construct was reduced to 27, as one item did not load above 0.45 in the factor extraction. Therefore the item "technically skilled staff" was deleted.

Table 5.3 Factor loadings of service quality construct



The explained variance and reliability of the rotated factors, as obtained from the output of the Factor 7.0 analysis (Table 5.4), show adequate reliability for the extracted factors.

Table 5.4 Explained Variance and Reliability of Rotated Factors

The following conclusions were drawn from the exploratory factor analysis conducted.

There existed five underlying factors which represent the service quality construct in the banking context in Kerala.

Each item was mainly related to only one factor, except for cross-loadings shown by certain indicators. This can be theoretically justified, as correlations among reflective measures are expected and the possibility that respondents conceive a different factor perception for certain indicators cannot be ruled out.

Some of the "Image"-related indicators showed considerable cross-loadings with the "Human" factor. Indicator variables such as "Helpful to customers" and "punctual in service delivery" showed higher loadings on the "Human" dimension, which can be justified on content grounds, as these indicators also represent contributions from the employees. Hence it was decided to include these indicators with the "Human" dimension for further analysis.

One indicator variable attached to the "products and services" dimension, namely "less documentation for products and services", showed a higher loading on the "Convenience" factor, and it was hence decided to include this indicator with the "Convenience" factor for further analysis.

Two indicators, "modern systems for service" and "wide network of ATM for easy service", showed higher loadings on the "products and services" dimension rather than "System", and it was hence decided to include them with "products and services" for further analysis.

One indicator variable, "technically skilled staff", showed a loading less than 0.45 and was hence excluded from further analysis.

The next step was to conduct a confirmatory factor analysis for the
service quality dimensions identified.

5.2.2 Confirmatory Factor Analysis-Service Quality Dimensions

The primary objective of conducting CFA was to determine the ability of a predefined factor model to fit an observed set of data. It provides estimates for each parameter of the measurement model. The various parameters used for evaluation of the model are shown in Table 5.5.

Table 5.5 Various Parameters to be considered for model evaluation

Sl. No.  Parameter
1        Factor loadings
2        Factor variances
3        Covariances
4        Indicator error variances
5        Error covariances
CFA is useful in:

Testing the significance of a specific factor loading.

Testing the relationship between two or more factor loadings.

Testing whether a set of factors are correlated or uncorrelated.

Assessing the convergent and discriminant validity of a set of measures.

CFA has strong links to structural equation modeling and hence the procedures involved are as explained under heading 4.4. Prior to validating the full structural model with all latent variables, it was required to validate each of the measurement models as a preliminary step. The measurement model is the part of an SEM model that deals with the latent variables and their indicators. The measurement model was evaluated for validity like any other SEM model, using goodness-of-fit measures. The major data considerations to be addressed before conducting CFA are listed in Table 5.6.

Table 5.6 Various Data Considerations

Sl. No.  Data Considerations
1        Absence of missing data
2        Absence of outliers
3        Adequacy of sample size
4        Existence of univariate and multivariate normality

The data were found free from missing values and outliers, as explained under heading 5.1. Unfortunately, there is no easy way to determine the sample size needed for CFA. There are some very rough guidelines for sample sizes: less than 100 is considered "small" and may only be appropriate for very simple models; 100 to 200 is "medium" and may be an acceptable minimum sample size if the model is not too complex; and greater than 200 is "large", which is probably acceptable for most models (Kline, 2005). Analysis of normality was done at the univariate level and multivariate level as explained under heading 5.1.3.

The maximum likelihood (ML) estimation method was used in all analyses using Amos 16.0. Maximum likelihood "aims to find the parameter values that make the observed data most likely (or conversely maximize the likelihood of the parameters given the data)" (Brown, 2006). It has several desirable statistical properties:

it provides standard errors (SEs) for each parameter estimate, which are used to calculate p-values (levels of significance), and

it provides confidence intervals, and its fitting function is used to calculate many goodness-of-fit indices.
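The same kind of measurement model can be sketched outside Amos; the example below uses the Python package semopy (purely as an illustrative substitute) to specify and fit a one-factor model by maximum likelihood on simulated data, with indicator names matching the "Image" dimension discussed next.

```python
import numpy as np
import pandas as pd
import semopy

# simulated data for five hypothetical "Image" indicators driven by one factor
rng = np.random.default_rng(9)
factor = rng.normal(size=400)
data = pd.DataFrame({name: 0.7 * factor + rng.normal(scale=0.5, size=400)
                     for name in ["img1", "img3", "img5", "img6", "img9"]})

desc = "Image =~ img1 + img3 + img5 + img6 + img9"  # one-factor CFA specification
model = semopy.Model(desc)
model.fit(data)                  # maximum likelihood estimation
print(model.inspect())           # loadings, error variances, p-values
print(semopy.calc_stats(model))  # fit indices such as CFI and RMSEA
```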

5.2.2.1 Measurement Model for “Image” Dimension

The seven-indicator variable model of the "Image" dimension suggested a poorly fitting model in the first estimate: the normed chi-square, RMSEA and CFI were outside the permissible levels. On verification of the modification indices, two indicator variables, "img2" and "img8", showed cross-loadings on many other variables, were found to be a major cause of the poor fit and hence were removed. The resulting model was found to be a good-fitting model with the recommended indices, as illustrated in Figure 5.1. All the paths shown in the model are significant, as the critical ratios were above 1.96.
CMIN/df = 2.96, CFI = 0.98, SRMR = 0.027, RMSEA = 0.07, PClose = 0.166, Hoelter N (0.05) = 287, Composite reliability = 0.706, AVE = 0.698

Figure 5.1 Measurement Model for "Image" dimension (path diagram omitted)

5.2.2.2 Measurement Model for “Human” Dimension

The seven-indicator variable model of the "Human" dimension suggested a poorly fitting model in the first estimate: the normed chi-square, RMSEA and NFI were outside the permissible levels. On verification of the modification indices, the indicator variable "img7" showed cross-loadings on many other variables, was found to be a major cause of the poor fit and hence was removed. The resulting model was much better but still needed modification: the indicator variable "Human5" had a low squared multiple correlation and was removed in the second stage. The resulting model was found to be a good-fitting model with the recommended indices, as illustrated in Figure 5.2. All the paths shown in the model are significant, as the critical ratios were above 1.96.
CMIN/df = 2.74, CFI = 0.99, SRMR = 0.019, RMSEA = 0.07, PClose = 0.12, Hoelter N (0.05) = 234, Composite reliability = 0.83, AVE = 0.829

Figure 5.2 Measurement Model for "Human" dimension (path diagram omitted)

5.2.2.3 Measurement Model for “Convenience” Dimension

The initial five-indicator variable model reported a poor level of fit, as the RMSEA (0.134) was outside the recommended tolerances. To modify the model, the indicator variable "convei1" was removed due to its poor squared multiple correlation. The resulting model was found to be a good-fitting model with the recommended indices, as illustrated in Figure 5.3. All the paths shown in the model are significant, as the critical ratios were above 1.96.

CMIN/df = 3.34, CFI = 0.995, SRMR = 0.015, RMSEA = 0.079, PClose = 0.2, Hoelter N (0.05) = 434, Composite reliability = 0.835, AVE = 0.727

Figure 5.3 Measurement Model for "Convenience" dimension (path diagram omitted)

5.2.2.4 Measurement Model for “Products and Services” Dimension

The four-indicator variable model of the "products and services" dimension suggested a poorly fitting model in the first estimate: the normed chi-square and RMSEA were outside the permissible levels. The indicator variable "prd2" was removed from further analysis due to its poor loading, to arrive at a well-fitting model with all the indices considered above the desired level and with significant paths, as illustrated in Figure 5.4.

CMIN/df = 1.13, CFI = 0.99, SRMR = 0.013, RMSEA = 0.017, PClose = 0.491, Hoelter N (0.05) = 234, Composite reliability = 0.705, AVE = 0.765

Figure 5.4 Measurement Model for "products & services" dimension (path diagram omitted)

5.2.2.5 Measurement Model for “System” Dimension

The five-indicator variable model related to the "system" dimension suggested a poorly fitting model in the first estimate: the normed chi-square, RMSEA and CFI were outside the permissible levels. As per the modification indices, an error correlation was added between the indicator variables "sys6" and "sys7" on theoretical grounds, since correlating error terms requires a strong theoretical justification (Joreskog and Long 1993), to develop a well-fitting and significant model as illustrated in Figure 5.5. These variables represent responses related to a user-friendly website and an up-to-date website, and hence theoretically there is a chance for their error terms to be correlated. All the paths shown in the model are significant, as the critical ratios were above 1.96.

CMIN/df = 3.74, CFI = 0.99, SRMR = 0.011, RMSEA = 0.07, PClose = 0.08, Hoelter N (0.05) = 235, Composite reliability = 0.839, AVE = 0.858

Figure 5.5 Measurement Model for "system" dimension (path diagram omitted)

5.2.2.6 Structural Model for Service Quality Construct

Structural equation models with latent variables (SEM) are often used to analyse relationships among variables. The relationships among latent variables were tested only after obtaining a statistically significant, well-fitting model which represents the data.

The statistical significance of the relationships between service quality and its extracted dimensions, namely Image, Human, Convenience, Products and Services and System, was of interest to this study. The well-fitting measurement models of these service quality dimensions were taken together to arrive at a fitting structural model for service quality. The model developed is illustrated in Figure 5.6. Two important considerations were used to test statistical significance using the Amos output.
The critical ratio (C.R.) represents the parameter estimate divided by its standard error; as such, it operates as a z-statistic in testing that the estimate is statistically different from zero. Based on a probability level of .05, the critical ratios are to be > ±1.96 for statistical significance. Non-significant parameters, with the exception of error variances, can be considered unimportant to the model; in the interest of scientific parsimony they should be deleted from the model (Byrne 2010). The standardized residual covariance should be less than 2.58 to conclude a statistically significant covariance between two variables (Byrne 2010). Hence such observations can also be considered for exclusion in further analysis.

The first model developed needed re-specification, as the standardized residual covariance between some of the variables was above 2.58. Model re-specification on the basis of the modification indices was adopted to finalize a good-fitting model explaining the service quality construct. However, theory-based reasoning is essential when adopting suggestions offered by the modification indices in the urge to find a better fit for the structural model. The indicator variable "img4", attached to the "Human" dimension, was selected for removal at the re-specification stage for two reasons:

The squared multiple correlation was below 0.5.

This item showed cross-loadings with many other variables and was hence problematic.
The re-specified model is illustrated in Figure 5.7.

CMIN/df = 2.11, CFI = 0.96, SRMR = 0.043, RMSEA = 0.054, PClose = 0.186, Hoelter N (0.05) = 213; re-specification needed as some of the standardized residual covariances were > 2.58

Figure 5.6 Confirmatory model for Service quality construct-1 (path diagram omitted)


CMIN/df = 1.7, CFI = 0.976, SRMR = 0.037, RMSEA = 0.0434, PClose = 0.929, Hoelter N (0.05) = 267

Figure 5.7 Confirmatory model for Service quality construct (path diagram omitted)

The stages in the development of the confirmed model are summarized as follows.

Stage 1: From the initial list of 28 indicator variables, one variable, related to "technically skilled staff", was removed for poor loading at the exploratory factor analysis stage.

Stage 2: Five variables were removed at the stage of developing the measurement models for the dimensions identified after the exploratory factor analysis.

Stage 3: One variable was removed in the re-specification stage of evaluating the confirmatory model for service quality.

The details of the dimensions with their indicators in the perceived service quality scale developed for the banking context are given in Table 5.7. The overall reliability of the scale was 0.922. The model fit summary and estimates are provided in Appendix 2.

Table 5.7 Variables after Confirmatory Factor Analysis



5.2.2.7 Validation of the Perceived Service Quality Scale

To demonstrate the soundness of the measurement scale developed, it was first of all necessary to address the issue of common method variance (CMV). Common method variance can be a major source of measurement error in data collection when variables are latent and measured using the same survey at one point of time. CMV may inflate the true correlations among latent constructs and threaten the validity of conclusions. Harman's single-factor test is the most widely known approach for assessing CMV in a single-method research design (Podsakoff and Organ 1986). In the single-factor test, all of the items in the study are subjected to exploratory factor analysis (EFA). CMV is assumed to exist if

a single factor emerges from the unrotated factor solution, or

a first factor explains more than 50% of the variance in the variables (Podsakoff and Organ 1986).

The EFA conducted with all the variables in the study yielded five distinct factors with an eigenvalue above 1. The first factor accounted for 22.5% of the variance at the unrotated stage, and all factors together accounted for 66.5% of the total variance. When the initial solution was rotated using a varimax rotation in principal component analysis, the same factor accounted for less than 12% of the total variance, confirming that CMV was not a major concern in this study.
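Harman's test is often operationalized with an unrotated principal component extraction; a rough sketch of that check on a hypothetical item matrix is given below, and it approximates rather than reproduces the EFA reported above.

```python
import numpy as np
from sklearn.decomposition import PCA

# hypothetical standardized matrix of all study items (rows = respondents)
rng = np.random.default_rng(21)
X = rng.normal(size=(395, 27))
X = (X - X.mean(axis=0)) / X.std(axis=0)

pca = PCA().fit(X)
print("first factor share:", pca.explained_variance_ratio_[0])  # CMV suspected if > 0.50
print("factors with eigenvalue > 1:", int((pca.explained_variance_ > 1).sum()))
```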

Convergent validity is established when the relationships between the measurement items and the factor are significantly different from zero. Based on this criterion, critical ratios were used to evaluate statistical significance. Parameters with a critical ratio greater than 1.96 were considered significant at the p = 0.05 level (Anderson and Gerbing 1988). In this study, all of the measurement items represented their factors significantly, as the critical ratio of every item exceeded 1.96; hence, all of the measurement items satisfied the convergent validity test (Table 5.8). Also, the standardized regression weights should be significantly linked to the latent construct, with a loading estimate of at least 0.5 and ideally exceeding 0.7 (Hair et al 2006). In this study the factor loadings ranged from 0.629 to 0.920 and no loading was less than the recommended 0.5.

Table 5.8 Estimates and squared multiple correlation of all indicators

The convergent validity assessment also included the measures of construct reliability and average variance extracted. According to Fornell and Larcker (1981), variance extracted refers to "the amount of variance that is captured by the construct in relation to the amount of variance due to measurement error". Further, Fornell and Larcker (1981) suggested that variance extracted is a more conservative measure than construct reliability. The other criteria used to assess convergent validity are:

As a rule of thumb, good reliability is suggested if the Cronbach's alpha estimate is higher than 0.7.

The variance extracted (VE) for a construct should be larger than 0.5 to indicate a reliable factor (Hair et al 1995; Holmes-Smith 2001).

As a rule of thumb, reliability is considered high if the squared multiple correlation (R², "smc") is greater than 0.5, moderate if between 0.3 and 0.5, and poor if less than 0.3 (Holmes-Smith 2001), suggesting construct reliability.

The online CONSTRUCT VALIDITY Calculator version 2.0, available at http://www.hishammb.net/cvc2, was used for calculating the construct reliability and the variance extracted by each dimension used for service quality (Table 5.9).
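The quantities produced by the calculator follow directly from the standardized loadings; a small sketch of the computation (with hypothetical loadings) is given below.

```python
import numpy as np

def composite_reliability(loadings):
    """Composite reliability from standardized factor loadings."""
    lam = np.asarray(loadings, dtype=float)
    error = (1 - lam ** 2).sum()
    return lam.sum() ** 2 / (lam.sum() ** 2 + error)

def average_variance_extracted(loadings):
    """Average variance extracted (AVE) from standardized factor loadings."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

lam = [0.65, 0.85, 0.74, 0.63, 0.63]  # hypothetical loadings for one dimension
print(round(composite_reliability(lam), 3), round(average_variance_extracted(lam), 3))
```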

Table 5.9 Composite Reliability and Variance Extracted by each construct

Discriminant validity was confirmed by examining the correlations among the constructs. As a rule of thumb, a correlation of 0.85 or higher indicates poor discriminant validity in structural equation modeling (David 1998). None of the correlations among the variables were above 0.85 (Table 5.10). The results suggested adequate discriminant validity of the measurement model.

Table 5.10 Correlations among constructs

Further, to confirm discriminant validity, the squared inter-construct correlations (SIC) were calculated and compared with the average variance extracted. All the average variance extracted (AVE) estimates in Table 5.9 were larger than the squared inter-construct correlation (SIC) estimates provided in Table 5.10. Therefore it was confirmed that the indicators have more in common with the construct they are associated with than they do with other constructs.

Nomological validity was tested by examining whether the covariances between the constructs in the measurement model make sense; the construct covariances were used to assess this. All the covariances were positive and significant, as seen in Table 5.11, confirming nomological validity.

From the above observations, it was confirmed that the scale developed has adequate psychometric soundness for measuring the perceived service quality of banking services in Kerala, India.
Table 5.11 Covariances among constructs

Multi-group Comparison. A multi-group CFA was conducted to identify potential differences in factor structure across groups. In a multi-group CFA, the measurement scale is tested to check whether the items comprising a particular measuring instrument operate equivalently across different populations (e.g., gender, age, bank type etc.). There are three primary steps in a multi-group CFA:

Determining the factor structure of the measure for each group with the factor loadings freely estimated (unconstrained model);

Determining the factor structure of the measure for each group with the factor loadings constrained to be equal (constrained model);

Comparing the goodness-of-fit indices between the constrained and unconstrained models.

Differences between groups were assessed by comparing the goodness-of-fit indices of the model with factor loadings constrained to be equal against those of the unconstrained base model. If significant differences are observed between the constrained and unconstrained model goodness-of-fit indices, the factor structure is not the same across groups. If no significant differences are observed, the factor structure is considered to be the same.
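The comparison of nested models is usually summarized by a chi-square difference test; the sketch below shows the calculation on hypothetical Amos output values, not the actual figures of Table 5.12.

```python
from scipy.stats import chi2

# hypothetical chi-square values and degrees of freedom from a multi-group CFA
chisq_unconstrained, df_unconstrained = 310.4, 199
chisq_constrained, df_constrained = 328.9, 215   # factor loadings held equal across groups

delta_chisq = chisq_constrained - chisq_unconstrained
delta_df = df_constrained - df_unconstrained
p_value = chi2.sf(delta_chisq, delta_df)
print(delta_chisq, delta_df, p_value)  # p > 0.05 -> the items operate equivalently
```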

A multi-group comparison was conducted to determine whether the scale has the same theoretical structure for each bank type. No significant differences between the constrained and unconstrained models were identified. The comparative indices are listed in Table 5.12 below.

Table 5.12 Comparative goodness of fit for nested models

All indices show similar values and hence it can be assumed that the model fits all types of population. Thus the construct validity of the measurement model for service quality was fully established. The objective of the study to understand the various dimensions and indicators that can form a valid scale to measure perceived service quality in the Kerala context was thus achieved.

5.3 STRUCTURE OF PERCEIVED SERVICE QUALITY CONSTRUCT

An important issue to be addressed in this study was whether perceived service quality needs to be defined as a formative or a reflective construct. A reflective construct implies that the separate dimensions of PSQ, such as image, human, convenience, products and services and system, are actually different manifestations of the PSQ construct and as such "reflect" the content of PSQ, whereas a formative construct suggests that PSQ is defined as the outcome formed of its dimensions. For example, if an increase in any one of the dimensions, say "Human", results in an increase in all the other dimensions of PSQ, then PSQ should be conceptualized as reflective. On the other hand, when an increase in any one of the dimensions increases the overall magnitude of PSQ without necessarily affecting the rest of the dimensions, PSQ should be defined as formative. The researcher conceptualized PSQ as a first-order reflective, second-order formative construct on theoretical grounds, and content validity needed to be established for this assumption.

For the statistical validity of both the reflective and formative models of perceived service quality, and to understand which model represents the data better, two models, as shown in Figures 5.8 and 5.9, were developed and tested for goodness of fit. Identification of formative indicator constructs in Amos 16 required the following procedures (Jarvis et al 2003; MacCallum and Browne 1993).

(1) The scale of measurement for the latent construct was established by constraining a path from one of the construct's indicators to be equal to 1, or by constraining the residual error variance for the construct to be equal to 1; and

(2) To resolve the indeterminacy associated with the construct-level error term, a formative construct should emit paths to

at least two unrelated latent constructs with reflective indicators,

at least two theoretically appropriate reflective indicators, or

one reflective indicator and one latent construct with reflective indicators.
Hence it was assumed that the formative PSQ construct emits paths to the five indicators used to measure satisfaction development. This move can be theoretically justified, as PSQ was assumed to cause satisfaction development among customers. The same indicators were used in the reflective PSQ model also.

CMIN/df = 1.79, CFI = 0.964, SRMR = 0.043, RMSEA = 0.045, PClose = 0.881, Hoelter N (0.05) = 245

Figure 5.8 Confirmatory model for Formative Service quality construct (path diagram omitted)


CMIN/df = 3.95, CFI = 0.858, SRMR = 0.089, RMSEA = 0.088, PClose = 0.000, Hoelter N (0.05) = 111

Figure 5.9 Confirmatory model for Reflective Service quality construct (path diagram omitted)

The model shown in Figure 5.8 emerged as superior to the other model with regard to goodness of fit. This model explains the relation of the first-order factors to the second-order PSQ construct in a formative manner. This finding was in tune with the initial conceptualization of the perceived service quality construct as a multidimensional, second-order formative construct with five first-order reflective constructs. Hence content validity was further established. The findings confirmed that the structure of perceived service quality in the Kerala context is a hierarchical one formed from five first-order dimensions, all measured in a reflective manner. Thus the objective in this regard was satisfied.

5.4 ANALYSIS OF BANK PERFORMANCE BASED ON SERVICE QUALITY

The previous sections explained the steps in developing and confirming the measurement scale for perceived service quality in the banking context. The next step was the analysis of bank performance based on perceived service quality for each type of bank, namely private sector, public sector and new generation banks, in the Kerala context. This study adopted the perception-only method suggested by Cronin and Taylor (1992), i.e. SERVPERF analysis, for evaluating the performance of each type of bank considered in the study. The SERVPERF score is the cumulative score obtained for each bank on all attributes (items) included in the validated perceived service quality scale. Table 5.13 gives the SERVPERF score for each type of bank.
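Conceptually, the SERVPERF score is simply the sum of the perception scores on the retained items, averaged by bank type; a sketch with hypothetical data is shown below.

```python
import numpy as np
import pandas as pd

# hypothetical perception scores on 21 retained items for 395 respondents
rng = np.random.default_rng(17)
df = pd.DataFrame(rng.integers(1, 6, size=(395, 21)),
                  columns=[f"item{i}" for i in range(1, 22)])
df["bank_type"] = rng.choice(["public sector", "private sector", "new generation"], 395)

# SERVPERF (perception-only) score: cumulative score over the retained items
df["servperf"] = df.filter(like="item").sum(axis=1)
print(df.groupby("bank_type")["servperf"].mean())
```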

Table 5.13 SERVPERF details of each type of banks



On verification of the results, it was observed that service quality in the new generation banks is ahead of that in the other types of banks, followed by the private sector banks and the public sector banks. The indicators retained after the confirmatory factor analysis were used to calculate the SERVPERF scores for each type of bank. In the image and human dimensions, private sector banks dominated in perceived service quality, whereas in the other three dimensions new generation banks were more accepted in the Kerala banking context. The public sector banks, even though they enjoy second position in the system dimension, were found last in overall service quality perceptions.

A test of homogeneity was performed to check whether the assumption of homogeneity of variance was violated, so that valid inferences could be made. Levene's test for homogeneity was not significant (p>0.05), as shown in Table 5.14, and hence it can be concluded that the population variances of the groups are approximately equal. In order to find out whether there is a significant difference in the perception of the various dimensions of perceived service quality among the customers of the three groups of banks, a one-way analysis of variance was administered. The resulting 'F' statistics are presented in Table 5.15, which suggests that, except for the products and services and system dimensions, there is no significant difference in customers' perceptions of perceived service quality at the 0.05 level.
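Both tests can be reproduced with scipy; the sketch below uses hypothetical dimension scores for the three bank types and is illustrative only.

```python
import numpy as np
from scipy import stats

# hypothetical dimension scores for customers of the three bank types
rng = np.random.default_rng(23)
public = rng.normal(3.8, 0.6, 147)
private = rng.normal(3.9, 0.6, 85)
new_gen = rng.normal(4.1, 0.6, 163)

lev_stat, lev_p = stats.levene(public, private, new_gen)
f_stat, f_p = stats.f_oneway(public, private, new_gen)
print("Levene p:", lev_p)             # p > 0.05 -> homogeneity of variance holds
print("ANOVA F:", f_stat, "p:", f_p)  # p < 0.05 -> group means differ
```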

Table 5.14 Test of Homogeneity of Variances



Table 5.15 F-tests for significance among service quality dimensions

** Significant at 0.05 level

5.5 MEASUREMENT MODEL FOR “DESIRED EXPECTATION” CONSTRUCT

In this study, apart from the perceived service quality dimensions, a reflective model was conceptualized for the construct "Desired Expectation". Hence validation of this construct was also done using Amos 16.0.

The nine-indicator variable model for the "Desired Expectation" dimension suggested a poorly fitting model in the first estimate. Two indicator variables, namely "helpful employees" and "Punctual", showed very poor values for the squared multiple correlation and hence were removed from further analysis. The resulting model showed a better fit, but further improvement was required. As per the modification indices, two error correlations were added, between the indicator variables "easy transaction" and "system consistency" as well as between "speedy action" and "grievance redressal". These steps can be theoretically justified, as correlated responses to these questions were possible due to the complementary nature of the questions. All the paths shown in the model are significant, as the critical ratios were above 1.96, and the model is illustrated in Figure 5.10. In the further analysis, desired expectation was considered as a reflective construct with seven indicator variables.

CMIN/df = 2.71, CFI = 0.987, SRMR = 0.026, RMSEA = 0.067, PClose = 0.139, Hoelter N (0.05) = 248, Composite reliability = 0.701, AVE = 0.754

Figure 5.10 Measurement Model for "Desired expectation" (path diagram omitted)

5.6 HYPOTHESIS TESTING – RESEARCH MODEL ANALYSIS

For the analysis of the thesis model, instead of covariance-based structural equation modeling, a variance-based or component-based partial least squares (PLS) approach was adopted. PLS-based SEM has several key advantages over covariance-based SEM, including the following:

it always yields a solution, even in complex models;

it does not require variables to meet parametric analysis criteria, such as multivariate normality and large sample sizes;

it enables the estimation of parameters in models with formative LVs as well as reflective LVs, and does not give rise to identification problems, as is the case in Amos 16.0.

Most relationships between variables describing natural and behavioral phenomena seem to be nonlinear, with U-curve and S-curve relationships being particularly common (Ned Kock 2009). WarpPLS 1.0, introduced in 2009, identifies nonlinear (or "warped", hence the name of the software) relationships among LVs and corrects the values of the path coefficients accordingly. Hence in this study WarpPLS 2.0 (the current version) was used for the analysis of relationships among latent variables. The main features of WarpPLS 2.0 are:

It estimates P values for path coefficients automatically, and hence significance can be easily established.

It estimates several model fit indices for checking whether the data are well represented by the model.

It enables evaluation of the measurement model as well as the structural model simultaneously.

The software allows users to view scatter plots of each of the relationships among LVs together with the regression curves that best approximate those relationships.
It calculates variance inflation factor (VIF) coefficients for LV predictors associated with each LV criterion.

It pre-processes the data before the SEM analysis and hence makes it easy to correct problems with the data, such as identical column names, columns with zero variance, and missing values.

In this study, two constructs, namely perceived service quality and satisfaction, were conceptualized as second-order constructs. For the analysis of second-order constructs using WarpPLS 2.0, it is required first to calculate the LV scores by creating models with the latent variables and their indicators without linking them. These LV scores are then used to define the second-order construct in the final model. The path coefficients and associated p-values were obtained by running WarpPLS 2.0 with a bootstrapping procedure. The bootstrapping method of re-sampling was adopted because it tends to generate more stable path coefficients with sample sizes of more than 100 (Nevitt and Hancock 2001). The various analysis algorithms offered by WarpPLS are Warp3 PLS regression, Warp2 PLS regression, PLS regression, and robust path analysis. In this study the Warp3 PLS regression algorithm was used for the analysis.
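The general idea behind the resampling step can be illustrated with a simple percentile bootstrap of a standardized path coefficient; the sketch below is a conceptual illustration in Python and does not reproduce the Warp3 PLS regression algorithm itself.

```python
import numpy as np

def bootstrap_path(x, y, n_boot=1000, seed=0):
    """Percentile bootstrap of a simple standardized path (regression) coefficient."""
    rng = np.random.default_rng(seed)
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    coefs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)        # resample cases with replacement
        coefs[b] = np.polyfit(x[idx], y[idx], 1)[0]
    return np.polyfit(x, y, 1)[0], np.percentile(coefs, [2.5, 97.5])

# hypothetical latent variable scores (e.g. PSQ -> satisfaction)
rng = np.random.default_rng(4)
psq = rng.normal(size=395)
sat = 0.6 * psq + rng.normal(scale=0.8, size=395)
print(bootstrap_path(psq, sat))  # estimate and 95% confidence interval
```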

The estimated model with path coefficients and corresponding p values is provided in Figure 5.11. The detailed results of the analysis are provided in Appendix 3. A precondition for accepting the estimated model for further interpretation was that the model should fit the data. Similarly, the various validity and reliability criteria should be met. A model possessing the required reliability and validity indicates that the levels of measurement error in the data are relatively low and that the results of the analysis credibly test the hypotheses proposed in the study.

Figure 5.11 Estimated Research Model

Latent variable coefficients of the variables in the model are shown in Table 5.16.

Table 5.16 Latent Variable Coefficients of the variables in the model



5.6.1 Model Validation-Verifying the Model Fit

To assess the fit of the model with the data, it is recommended that the p-values for both the average path coefficient (APC) and the average R-squared (ARS) be lower than .05. In addition, it is recommended that the average variance inflation factor (AVIF) be lower than 5 (Ned Kock 2009). Table 5.17 below provides the model fit indices with the p values of the estimated model. It was found that all three fit criteria were met, and it can reasonably be assumed that the model has acceptable predictive and explanatory quality, as the data are well represented by the model.

Table 5.17 Model fit indices and P values of the Research Model

5.6.2 Validity of Reflective Constructs in the Model

The following rules were adopted to check the validity of the reflective constructs. The reflective constructs used in this model were "Desired expectation" and the first-order dimensions of the "Perceived service quality" construct.

5.6.2.1 Validation of “Desired Expectation Construct”

The construct "Desired expectation" was found to be reliable, since the composite reliability coefficient (0.927), Cronbach's alpha (0.907) and the average variance extracted (AVE = 0.646) obtained after the estimation of the model were above the threshold limits.

To check convergent validity, the loadings of each indicator of the construct and their p values were considered. All the loadings were above 0.5 and were significant at p < 0.05, thus establishing convergent validity (Table 5.18).

Table 5.18 Factor loadings and p values for “Desired expectation” construct

The latent variable correlations in the model were considered for ascertaining the discriminant validity of both the reflective and formative constructs. Discriminant validity is established if the square root of the average variance extracted is higher than any of the correlations involving that latent variable (i.e. the values on the diagonal of the latent variable correlation table of the WarpPLS output should be higher than any of the values above or below them in the same column) (Table 5.19).

Table 5.19 latent variable correlations of constructs



5.6.2.2 Validation of Various Dimensions of PSQ

The tables below (Tables 5.20, 5.21 and 5.22) establish the reliability, convergent validity and discriminant validity of the five dimensions of the perceived service quality construct as per the guidelines mentioned in the previous section. The results re-confirmed the findings from the confirmatory factor analysis.

Table 5.20 Reliability analysis of PSQ dimensions

Table 5.21 Factor loadings and p values for PSQ dimensions



Table 5.22 Latent variable correlations among PSQ dimensions

(All the correlations significant at p<0.001)

5.6.3 Validity of Formative Constructs

5.6.3.1 Validation of “Customer Satisfaction” Construct

The following conditions were verified to establish the validity of the formative constructs:

The absence of multicollinearity was verified by checking the variance inflation factors (VIF), which were found to be less than the recommended value of 3.3 (Table 5.23).

Table 5.23 Indicator weights and VIFs of “Satisfaction” Construct


All the indicator weights had p values lower than .05 and hence merit validity in formative latent variable measurement (Table 5.23).

All the square roots of the AVE shown in the diagonal of the latent variable correlation matrix are higher than the correlations involving that latent variable, establishing discriminant validity (Table 5.24).

Table 5.24 Latent variable correlation of the “Satisfaction” Construct

(All correlations are significant at p<0.001)

5.6.3.2 Validity of other Formative Constructs

Formative indicators were used in this study for the measurement of all constructs except "Desired Expectation" and the first-order dimensions of "Perceived Service Quality". All these measures were found to be valid, as explained below:

The absence of multicollinearity and indicator weights with p values lower than .05 confirm the validity of the other formative constructs (Table 5.25).

Discriminant validity was established as under heading 5.6.2.1.

In this model, all the average variance extracted (AVE) values, which measure the amount of variance captured by a latent construct in relation to the variance due to random measurement error, were above 0.5, establishing discriminant validity of the model (Table 5.16).

Table 5.25 Indicator weights of formative constructs

5.7 ANALYSIS OF THE RELATIONSHIP OF SERVICE QUALITY DIMENSIONS TO CUSTOMER SATISFACTION AND BEHAVIORAL INTENTIONS

To find out which service quality dimensions have a significant relationship with customer satisfaction, a model was developed as illustrated in Figure 5.12 and estimated using WarpPLS 2.0. The model developed was a valid model with regard to the fit indices. The dimension with a significant direct impact on customer satisfaction at p<0.01 was the "System" dimension, and the "Human" dimension was found significant at p<0.05. With regard to the influence of the service quality dimensions on the behavioral intentions of the customer, it was observed that none of the dimensions is significant at the 0.01 level, but two dimensions, namely "Human" and "Products and Services", influence positive behavioral intentions of the customer significantly at the 0.05 level (Figure 5.13), which can be considered valid information for drawing conclusions on customer behavior in the banking context.

Figure 5.12 Model for PSQ dimensions to Satisfaction

Figure 5.13 Model for PSQ dimensions to Behavioral Intentions



5.8 SERVICE QUALITY AND CUSTOMER SATISFACTION – DEMOGRAPHIC FACTORS

The hypothesis examining the association between demographics, perceived service quality and customer satisfaction was tested by developing another model, as illustrated in Figure 5.14, and estimating it using the Warp3 PLS algorithm with a bootstrapping procedure. On verification of the model fit indices and the p values of the estimated model, it was found that all three fit criteria were met, and it can reasonably be assumed that the model fits the data well.

Figure 5.14 Model for demographic factors to PSQ and Satisfaction

On verification of the p values for the path coefficients from each parameter to the latent constructs PSQ and Satisfaction, it was found that the age of the respondents has a significant relation with perceived service quality at the 0.05 level (p<0.05), whereas none of the other demographic factors had a significant relation with perceived service quality or customer satisfaction. Thus hypothesis H8 was only partly supported.

The p values for the path coefficients from bank type to PSQ and satisfaction were found to be significant at the 0.01 level (p<0.01), whereas length of association with the bank was found to have a significant relationship with PSQ and satisfaction at the 0.05 level (p<0.05), establishing support for hypothesis H9.

5.9 DESCRIPTIVE STATISTICS OF RESPONDENTS

A summary of the demographic profile of the respondents is listed below.

75.3% of the respondents are male.

21.6% of the respondents are customers of private banks, 37.1% of public sector banks and 41.3% of new generation banks.

19.2% of the respondents are in the age group below 20, 36.1% in the age group 20-35, 32.2% between 35-50 and 12.5% above 50 years.

1.8% of the respondents are matriculates, 35.3% graduates, 31.4% post-graduates and 31.4% professionals.

10.9% of the respondents have an annual income of less than 2 lakhs, 31.4% between 2 lakh and 4 lakh, 24.9% between 4 lakh and 6 lakh and 32.7% above 6 lakh.

2.9% of the respondents have had a length of association with their bank of less than 1 year, 11.4% between 1 and 3 years, 15.6% between 3 and 5 years and 70.1% above 5 years.

5.10 ANALYSIS OF PATHS-TESTING OF HYPOTHESES

The next step of the data analysis involved examining the structural models in order to test the various hypotheses proposed for the study. All the paths in the model except two were found significant at the 0.01 level. The hypotheses found insignificant were H2a (desired expectation to service quality) and H6b (service quality to negative BI). A summary of the hypotheses tested is provided in Table 5.26. The third objective of the study, to identify the linkages among the various constructs used in the study, was thus satisfied. Except for these two, all other hypotheses proposed were found significant. The insignificant relation between desired expectation and PSQ can be explained by the fact that, in determining perceived service quality, the factors which contribute to it are more critical than the desired expectations regarding quality parameters. Also, the significant relation that desired expectations bear with customer satisfaction underlines their importance in developing satisfaction. Similarly, from the quality perceptions of the sample under study, negative behavioral intentions were not attributable to perceived service quality alone, and customer satisfaction was found to have a vital role in the development of both positive and negative behavioral intentions.

All the paths from the indicators to their corresponding constructs were found significant in this study. The measurement variables of the first-order service quality latent constructs, namely 'Image', 'Human', 'Convenience', 'Products and services' and 'System', were found significant in the confirmatory factor analysis (Tables 5.8 and 5.20 above). All the indicators, irrespective of whether they were formative or reflective, were found significant, thus confirming the content validity of the theory developed.

Table 5.26 Results of hypothesis testing
