
Ideal, Nonideal, and No Marker Variables:

The CFA Marker Technique Works When it Matters

Williams, Larry J., & O'Boyle, Ernest H.

IN PRESS: JOURNAL OF APPLIED PSYCHOLOGY


http://dx.doi.org/10.1037/a0038855
This article may not exactly replicate the final version published in the APA journal. It is
not the copy of record.

ABSTRACT
A persistent concern in the management and applied psychology literature is the effect of
common method variance on observed relations among variables. Recent work (i.e., Richardson,
H. A., Simmering, M. J., & Sturman, M. C. (2009). A tale of three perspectives examining post
hoc statistical techniques for detection and correction of common method variance.
Organizational Research Methods, 12, 762-800) evaluated three analytical approaches to
controlling for common method variance, including the CFA Marker Technique. Their findings
indicated significant problems with this technique, especially with nonideal marker variables
(those with theoretical relations with substantive variables). Based on their simulation results,
Richardson et al. concluded that not correcting for method variance provides more accurate
estimates than using the CFA Marker Technique. We reexamined the effects of using marker
variables in a simulation study and found the degree of error in estimates of a substantive factor
correlation was relatively small in most cases, and much smaller than error associated with
making no correction. Further, in instances where the error was large, the correlations between
the marker and substantive scales were higher than those found in organizational research with
marker variables. We conclude that in most practical settings the CFA Marker Technique yields
parameter estimates close to their true values, and the criticisms made by Richardson et al. are
overstated.

Over the past 25 years, organizational researchers have consistently used employee
self-reports as a data collection method. As a result, they have been forced to address the problem of
common method variance and seek "a magic bullet that will silence editors and reviewers"
(Spector & Brannick, 2010, p. 2). The problem is the possibility that the use of a common
measurement method when studying two or more variables may introduce artifactual covariance
that leads to a conclusion that a substantive relation exists when it does not (Podsakoff & Organ,
1986). Recently, Spector and Brannick (2010) have noted that the issues raised by method
variance discussions are "at the heart of our being able to draw appropriate inferences about
observed relations among our measures and what we can and cannot conclude from studies using
single or multiple methods" (p. 2). We also add that the topic of common method variance is not
without controversy, as it is not universally accepted that common method variance is a problem
(e.g., Spector, 2006).
Current research in this area has addressed questions of the extent of method variance,
causes of method variance, and analytical strategies for controlling method variance. Within this
latter category of research, the use of marker variables has become popular since originally being
proposed by Lindell and Whitney (2001). They defined a marker variable as one that is
theoretically unrelated to the substantive variables in a given study, whose expected
correlation with these substantive variables is zero, and demonstrated its use with partial
correlation analysis. Others have advocated the use of latent variable techniques to investigate
effects associated with marker variables (e.g., Podsakoff, MacKenzie, & Podsakoff, 2012). This
latent variable approach was originally presented by Williams, Hartman, and Cavazotte (2003)

and has been labeled the CFA Marker Technique (Richardson et al., 2009). A recent review by
Williams et al. (2010) identified 15 studies in organizational research that have used this
approach, but as evidence of the CFA Marker Technique's increasing popularity, the total number
of studies using this approach in management and applied psychology alone has now risen to 47
studies as of September 2013.
The current study contributes to the ongoing discussion about whether use of the CFA
Marker Technique is appropriate and leads to proper conclusions about relations among latent
variables. We present just the second investigation of the effectiveness of the Technique, while
also contributing to the broader literature on common method variance analysis. Our research
presents a detailed examination of the Technique by investigating its performance using both
analytical and Monte Carlo simulations with very specific sets of research design conditions
(instead of combining findings across groups of conditions). We also compare its performance
relative to an alternative approach of making no correction for marker based common method
variance. Then, we consider which results related to the accuracy of conclusions about relations
among substantive latent variables are most applicable to organizational and applied researchers.
Toward this end, we conduct a meta-analysis to find the typical correlation from organizational
research between a marker and substantive variable, after which we determine which specific
corresponding simulation results are most likely to apply to management and applied psychology
research. This approach will yield an assessment as to whether under typical conditions
organizational researchers should use the CFA Marker Technique, or whether they would be
better off not implementing a statistical correction procedure.
The CFA Marker Model

A graphical representation of a latent variable model associated with the CFA Marker
Technique is shown in Figure 1, and it is based on Figure 3a from Richardson et al. (2009),
while adding our own labels for key parameters. We will refer to this model as the CFA Marker
Model, and it includes two substantive latent variables (independent variable- IV, and dependent
variable- DV), and a marker latent variable (MV), each represented with four indicators and
linked to these indicators via substantive factor loadings (e.g., λ1, λ3, λ7). Also in this model, the
substantive latent variables are correlated with each other (ΦIV,DV) and with the marker latent
variable (ΦIV,M; ΦDV,M). When using the CFA Marker Technique, these latter two factor
correlations are fixed (i.e., set to zero) and as such are often omitted from graphical displays of
the technique (e.g., Figure 3a of Richardson et al.). However, these two paths are included in our
figure because they play a key role in our analysis and our focus on the differences between ideal
and nonideal marker variables. As labeled by Richardson et al., an ideal marker variable is
defined as being uncorrelated with substantive variables in the CFA model (ΦIV,M = ΦDV,M = 0).
Such an orthogonal specification is necessary for the model to be identified, and it can be traced
back to the earliest applications of latent variable models for method effects (e.g., Bagozzi, 1980;
Jöreskog, 1971, 1974, 1979; Kenny, 1979; Marsh, 1989). It also allows for relatively easy
decomposition of indicator variance into construct, method, and random error components.
Alternatively, a non-ideal marker variable is defined as being correlated with substantive latent
variables in the model (ΦIV,M, ΦDV,M ≠ 0).
The effects of marker based method variance are represented with the CFA Marker Model
through the factor loadings from the marker latent variable to the indicators of the two
substantive independent and dependent latent variables (e.g., λ5, λ8). Higher values for these
marker method factor loadings represent a higher amount of marker based method variance. By

including these method factor loadings, the CFA Marker Technique attempts to extract the
method factor variance and provide an unbiased estimate of the substantive relation between the
latent variables (ΦIV,DV). The focus most relevant to theory testing is how much the relation
between the substantive latent variables changes when common method variance is controlled by
adding the marker latent variable and its factor loadings linking it with the indicators of the
substantive latent variables.
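To make this mechanism concrete, the model-implied covariance between an IV indicator and a DV indicator is the substantive part (the two substantive loadings times the factor correlation) plus a marker part (the product of the two marker method loadings). The sketch below is our own illustration, not part of the article; the loading values echo the worked example in the Method section, and an ideal (orthogonal) marker is assumed.

```python
# Illustrative sketch (ours, not from the article): implied covariance
# between one IV indicator and one DV indicator under the CFA Marker
# Model, assuming an ideal marker (orthogonal to the substantive factors).
def implied_cov(l_iv, l_dv, phi_iv_dv, m_iv, m_dv):
    # substantive part + marker-method part shared by both indicators
    return l_iv * l_dv * phi_iv_dv + m_iv * m_dv

# Even with no true relation (phi = 0), shared marker loadings of .470
# leave a nonzero observed covariance:
print(round(implied_cov(0.384, 0.384, 0.0, 0.470, 0.470), 3))  # 0.221
```

This is the artifactual covariance the technique tries to extract: the marker part survives even when the substantive correlation is zero.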
------------------------------
Insert Figure 1 about here
------------------------------

Previous Research and the CFA Marker Technique
Relevant to the literature on marker variable analysis are Monte Carlo studies that have
investigated models in which non-marker based method factors are studied, typically within the
context of a multitrait-multimethod design. In these studies, the method factors have been
proposed to be uncorrelated with substantive factors, matching the characteristic of an ideal
marker. This research has focused on the average bias in the estimation of a factor
correlation, defined as the mean of the differences between the population factor correlation and
all of the sample estimates of the factor correlation for a given condition or combination of
conditions from the Monte Carlo analysis. Reported results show that when estimating
substantive factor correlations with correctly specified models (including orthogonal method
factors), the bias has been non-zero but very small. For example, in one of the earliest
comparisons of alternative models for multitrait-multimethod data, Marsh and Bailey (1991)
found that the average estimated factor correlation from models with method parameters was
.037 larger than the average of the known population trait correlation. In another study involving

multitrait-multimethod matrices, Conway et al. (2004) concluded there was no evidence of bias
based on finding a mean bias of -.01 for estimated trait factor correlations as compared to their
population values. Further, in a Monte Carlo investigation of assessment center construct validity
models, Lance, Woehr, and Meade (2007) obtained differences between corresponding
population values and average sample values of .001, -.002, and .002. More recently, Le,
Schmidt, and Putka (2009) reported an average simulation value of .503 when the true
correlation was .500 with a correctly specified model involving method parameters, a difference
or bias of .003. All of these values for bias are less than the .05 criterion of Hoogland and
Boomsma (1998) reported by Bandalos (2012) as a standard for judging whether estimates are
unbiased.
In summary, the simulation literature on method variance corrections with CFA has been
relatively consistent in the conclusion that properly specified models (i.e., as would be true with
ideal marker latent variables) recover parameters with near-perfect accuracy. This explains the
popularity and widespread use of CFA based models for controlling for a variety of types of
method variance, including those associated with multitrait-multimethod designs, measured
method effect variables such as social desirability and negative affectivity, and item wording
effects (e.g., Podsakoff, MacKenzie, Lee, & Podsakoff, 2003; Williams, Ford, & Nguyen, 2002).
Indeed, CFA and a latent variable approach have been recommended for marker variable analysis
as well. As noted by Podsakoff, MacKenzie, and Podsakoff (2012) in the Annual Review of
Psychology, "If the specific source of method bias is unknown or valid measures of the source of
bias are not available, then we recommend the CFA Marker Technique" (p. 564).
Richardson et al. (2009) recently contributed to the simulation literature on CFA and
method variance with a Monte Carlo investigation of analysis strategies for marker variables and

common method variance. They studied two broad sets of conditions based on whether common
method marker variance was absent or present, and for the latter conditions they included two
specific sets of cases, some of which assumed method effects were equal across variables (noncongeneric) and others which did not assume such equal effects (congeneric). For the situations
involving method variance, they considered cases in which the marker variable was ideal (truly
uncorrelated with the substantive latent variables) and also cases in which the marker variable
was non-ideal (resulting in a violation of the orthogonality assumption previously mentioned).
Finally, for all conditions Richardson et al. compared results based on use of the CFA Marker
Technique with those results obtained while making no correction. As a general finding,
Richardson et al. (2009) support the use of the CFA Marker Technique for detecting the presence
of common method variance associated with marker variables. For purposes of the present study
we will focus on other results, those for error in estimating true substantive correlations.
Specifically, Richardson et al. (2009) operationalized correction accuracy as absolute
error, or the average of the absolute differences between the true population correlation and
sample estimates of the correlation. We note that this absolute error is different
from bias mentioned earlier, in that the former is based on the absolute differences between the
population parameter and the sample estimates, while the latter is based on signed differences, so
negative and positive differences can offset each other. For cases in which an ideal marker was used and no
method variance was present, Richardson et al. found an average absolute error using the CFA
Marker Technique of .07 and .06, depending on whether the true substantive correlation was zero
(ΦIV,DV = .00) or non-zero (ΦIV,DV = .2, .4, or .6, results combined). These are relatively small
degrees of error and this is consistent with previous simulation work (e.g., Marsh & Bailey,
1991; Conway et al., 2004; Lance et al., 2007; Le et al., 2009). However, once non-congeneric

(i.e., equal) method variance was introduced, even with an ideal marker, the error obtained by
Richardson et al. was .31 and .19 (again, depending on whether the true substantive correlation
was .0 vs. .2, .4, or .6). For non-ideal markers with no method variance present they reported
error of .11 and .08 (depending on ΦIV,DV), while with non-ideal markers and method variance
present the error was .17 and .12. In contrast, using no correction when method variance was not
present the error obtained by Richardson et al. was .05 and .09, and was .32 and .14 when
making no correction when method variance was present.
The comparison of results based on the CFA Marker Technique with those obtained using
no correction is what led Richardson et al. to conclude that "in most cases across all conditions
and techniques [including the CFA Marker Technique], applying statistical correction when
CMV is absent can produce less accurate relations than applying no statistical correction" (p.
793). They added that for data with noncongeneric and congeneric CMV, "applying a statistical
correction does not necessarily produce more accurate estimates of relations than doing nothing"
(p. 793). In providing advice to reviewers they summarized that the CFA approach "does not
necessarily produce accurate corrected estimates of relations and sometimes performs
significantly worse than no correction," and that they "do not recommend using the CFA marker
technique for the purpose of producing corrected correlations" (p. 795). In short, while
Richardson et al. concluded the CFA Marker Technique may have practical value for detecting
CMV, their conclusions offered a rather pessimistic assessment of the merits of the CFA Marker
Technique for reaching valid conclusions about relations among substantive latent variables. In
fact, they suggested the remedy for CMV (CFA Marker Technique) may be worse than the
disease. This particular conclusion is important because the key focus of most organizational

research is testing theory by estimating relations among variables, in this case substantive latent
variables.
Toward a Better Understanding of the CFA Marker Technique
Need for Detailed Analysis. The present research advances understanding of the CFA
Marker Technique in three important ways. First, we extend Richardson et al. by testing the
sensitivity of violating the orthogonality assumption for the CFA Marker Model. A major
contribution of Richardson et al. is the conclusion that misspecification can occur when method
variance is present and/or the orthogonality assumption is violated. However, the large scope of
their project required significant aggregation in reporting results and in some cases the combined
information may unintentionally have masked important differences in the severity of error. A
more nuanced analysis will determine how different the specification error for substantive factor
correlations is for low levels of marker variance and less violation of the assumption of an ideal
marker as compared to higher levels of marker variance and more extreme nonideal markers.
An Alternative Approach to No Correction. Second, a key focus of Richardson et al. was
the comparison of results in latent variable specification error for the CFA Marker Technique as
compared to those obtained if no correction was made. Richardson et al. implemented the no
correction technique by computing the zero-order correlation between scales resulting from the
aggregation of the four indicators for each of the two substantive variables under the various
conditions in their design. As such, each computed scale correlation represented the combined
influence of marker common method variance and random error variance. It would be expected
that this influence represents the joint effect of inflation due to the common method variance and
an attenuation effect expected due to random measurement error variance. As such, it may not be
appropriate to consider this as making no correction for marker variance, as it is actually making

no correction for both marker variance and unreliability, whose effects would be expected to be
opposite in nature. Perhaps more importantly, it is not the choice that researchers considering the
use of the CFA Marker Technique would actually face. This is because they would already be
using latent variable models to deal with random measurement error, and as a result the more
appropriate implementation of no correction for them would be to focus on the substantive factor
correlation obtained from a model without any marker latent variable (as opposed to the scale
correlation). We will refer to this model as the CFA No Marker Model, and will use it as our
basis for comparing results for the CFA Marker Technique with those of making no correction.
Applicability of Findings. A third, and perhaps most important, advancement of the
current study is that we will determine which amount of factor correlation specification error is
most likely to occur with typical organizational and applied psychology research using the CFA
Marker Technique. It is important to study the applicability of simulation findings to research
contexts because using simulation conditions that do not reflect realistic conditions is tantamount
to navel-gazing. The issue of the match of simulation conditions to real-world data has been
discussed by Bandalos (2006), who stated that "Monte Carlo studies are dependent on the
representativeness of the conditions modeled" (p. 288); thus, one potential concern is that "the
constructed model may not reflect real-world conditions" (p. 392). If this is true, "even the most
elegantly designed study may not be informative if the conditions included are not relevant to the
type of data one typically encounters in practice" (Bandalos & Gagne, 2012, p. 96). This point
has been made in previous research on common method variance by Conway et al. (2004), who
noted that their study would be "theoretically interesting, but of little practical importance if the
parameter values in our simulation represented extreme conditions that seldom arise in empirical
research" (p. 553).

In terms of the present study, although Richardson et al. chose values for their
manipulated parameters based on a previous simulation study using single indicators (Williams
& Brown, 1984), it is not clear if these manipulated values with a multiple indicator model are
those that would lead to data and results that are representative of typical organizational research.
If the data are not representative, then the negative conclusions offered by Richardson et al.
(2009) about effects of specification error due to incorrectly assuming a marker latent variable is
ideal may not be applicable to organizational researchers. The same may be true of their
recommendations that making no correction may be better than using the CFA Marker
Technique.
To determine which simulation results would be most applicable to typical organizational
research, we conducted a meta-analysis of marker variables' relations to substantive variables
from articles that have included marker variables since Lindell and Whitney's (2001) introduction
of the technique. This step yielded information about the range of marker-substantive variable
correlations typically found in organizational and applied psychology research. Next,
correlations expected among the composite scales that would be computed using the items
representing the substantive and marker latent variables based on the CFA Marker Model shown
in Figure 1 were computed via algebraic analysis. For specific cells of our simulation design,
we then examined measures of the factor correlation error for the specific cases in which the
computed scale correlations matched those correlations actually found in our meta-analysis of
empirical research using marker variables. This process ensured that conclusions about factor
correlation specification error and recommendations about how to deal with marker variance are
based on results from specific conditions that match those found in most management studies.

METHOD
Conceptual Model. As noted earlier, the conceptual model for our research (shown in
Figure 1) is the same model used in Richardson et al. (2009) with two substantive latent
variables and a method latent variable. Four continuous indicators represented each latent
construct, and the indicators are related to their respective latent variables via substantive factor
loadings (e.g., λ1, λ3, λ7). Additionally, the marker latent variable was also linked to the eight
independent and dependent latent variable indicators (e.g., λ5, λ8). Thus, there were three factor
correlations (ΦIV,DV; ΦIV,M; ΦDV,M), 16 factor loadings, and 12 error variance parameters whose
values we manipulated. The factor loadings from the method factor to the method items (i.e., λm1-λm4) were set equal to the
substantive reliability parameters under the assumption of no marker method variance. Factor
variances were set to 1.0 to achieve identification. In creating the population covariance matrix
used in the simulations, the source of common method variance was attributable to a single
source (the marker variable), consistent with Richardson et al. (2009).
Design. To allow comparison with Richardson et al., we used their description to guide
the selection of parameter values in building our covariance matrices. Specifically, we built our
population covariance matrices by examining three levels of marker-substantive factor
correlations (ΦIV,M = ΦDV,M = .00, .20, .40), and four levels for the strength of the relation between
the substantive variables (ΦIV,DV = .00, .20, .40, .60). We also manipulated the level of reliability
for the indicators at three levels. Richardson et al. (2009) chose three levels of coefficient alpha
reliability for the independent and dependent constructs, based on internal consistency reliability
values of .70, .80, and .90. Thus, we needed to determine the item reliabilities for the indicators
that would result in composite reliabilities for the scales of .70, .80, and .90. Since each construct

has four indicators, the resulting item reliabilities for these three conditions were computed as
.368, .500, and .692 (which result in composite reliabilities of .70, .80, and .90).
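The item reliabilities above can be recovered from the target composite reliabilities with the Spearman-Brown relation for k parallel items, alpha = k*r / (1 + (k - 1)*r); solving for r gives the sketch below (our illustration, assuming parallel items).

```python
# Sketch (ours, assuming k parallel items): invert the Spearman-Brown
# formula alpha = k*r / (1 + (k - 1)*r) to obtain the item reliability r
# implied by a target composite reliability alpha.
def item_reliability(alpha, k=4):
    return alpha / (k - (k - 1) * alpha)

for alpha in (0.70, 0.80, 0.90):
    print(round(item_reliability(alpha), 3))  # 0.368, 0.5, 0.692
```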
Following Richardson et al. we also examined four levels of the ratio of true substantive
variance to common method variance, including 100:0, 80:20, 60:40, and 40:60. Based on the item
reliabilities and the ratio of substantive to marker variance, we calculated the absolute amount of
substantive and method variance in each item for the four levels of the ratio above. We then
computed substantive and method factor loadings by taking the square roots of these absolute
amounts. For example, if the item reliability = .368 and the ratio of substantive to method
variance = 40:60, the amount of substantive variance was .40 x .368 = .147 and the square root
of this was .384, which was used as the substantive factor loading. Further, the amount of
method variance with this example was .60 x .368 = .221, and the square root was .470 and this
value was used as the method factor loading. The error variance was computed as 1 − item
reliability, so in this example it would be 1 − (.147 + .221) = .632.
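The worked example above can be expressed compactly; the sketch below is ours and simply reproduces the arithmetic just described.

```python
# Sketch reproducing the arithmetic above: split item reliability into
# substantive and method variance by the given ratio, take square roots
# for the factor loadings, and set error variance to 1 - item reliability.
import math

def item_parameters(item_rel, subst_share):
    subst_var = subst_share * item_rel          # e.g., .40 x .368 = .147
    method_var = (1 - subst_share) * item_rel   # e.g., .60 x .368 = .221
    return (math.sqrt(subst_var),               # substantive loading
            math.sqrt(method_var),              # method loading
            1 - (subst_var + method_var))       # error variance

subst, method, err = item_parameters(0.368, 0.40)
print(round(subst, 3), round(method, 3), round(err, 3))  # 0.384 0.47 0.632
```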
Within each model, we assumed a noncongeneric measurement model based on the
similarity of factor correlation error in the results of Richardson et al. (2009) with noncongeneric
measures versus congeneric measures.¹ Thus, within and across the substantive latent variables
the manipulated factor loadings and error parameters for their indicators were assumed to be
equal as they varied across the cells of our design. For the marker latent variable, its relation to
its indicators and the amount of unique variance were assumed to be equal to those for the
substantive indicators for a particular reliability condition. In sum, we used a 3 x 4 x 3 x 4 design
¹ We also conducted simulations for congeneric conditions and found that the recovery of the substantive factor
correlation, ΦIV,DV, was identical to the noncongeneric results. The recovery of the method and substantive factor
loadings closely mirrored the noncongeneric results. For example, when the substantive-to-method ratios were 80:20
for the IV and 60:40 for the DV, the specification errors for substantive and method factor loadings fell within the
range of the specification errors of the 80:20 noncongeneric results and the 60:40 noncongeneric results. Given the
space that the results of 1,728 conditions would require, we omit these results, but the data are available from the
authors.

(3 levels of marker-substantive factor correlations, 4 levels of strength of substantive factor
correlations, 3 levels of item reliability, and 4 levels of the ratio of substantive to method variance),
which resulted in 144 cells or sets of conditions that we analyzed.
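For concreteness, the full design can be enumerated as a cross of the four manipulated factors; the sketch below is ours, with level values taken from the description above.

```python
# Sketch (ours): enumerate the 3 x 4 x 3 x 4 design described above.
from itertools import product

marker_corrs = (0.00, 0.20, 0.40)        # Phi(IV,M) = Phi(DV,M)
subst_corrs = (0.00, 0.20, 0.40, 0.60)   # Phi(IV,DV)
reliabilities = (0.70, 0.80, 0.90)       # composite reliability
ratios = ("100:0", "80:20", "60:40", "40:60")  # substantive : method variance

cells = list(product(marker_corrs, subst_corrs, reliabilities, ratios))
print(len(cells))  # 144
```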
CFA Models. For our analyses, we used two CFA models based on the model shown in
Figure 1, a CFA Marker Model and a CFA No Marker Model. We implemented the CFA Marker
Model first assuming that the method and substantive latent variables were orthogonal, which is
to say we assumed the use of an ideal marker variable. For the group of models based on the use
of an ideal marker variable (ΦIV,M = ΦDV,M = .00), any error in the obtained value of the substantive
factor correlation represents a basic flaw in the CFA Marker Technique, even when its key
assumption is met. With a second subset of models based on nonideal marker variables for which
the true marker-substantive factor correlations used to generate the data were nonzero (ΦIV,M = ΦDV,M
= .20, .40), the difference between the obtained factor correlation and the one used to generate
the data represents the impact of incorrectly assuming a marker variable to be ideal when it is
really nonideal.
We also wanted to implement a CFA approach involving no correction for method
variance. One logical approach to implementing the no correction strategy is to remove the
effects of marker based common method variance from the CFA Marker Model. Thus, our CFA
No Marker Model is similar to the model shown in Figure 1, only it does not contain marker
method factor loadings linking the marker latent variable to the substantive indicators x1-y4; it
also assumes the marker latent variable is orthogonal to the substantive latent variables. As
before, the difference between the true substantive factor correlation associated with each
method variance condition and the correlation obtained from the CFA No Marker Model
represents specification error, but a different type than considered up to this point. Results for the

substantive factor correlation with the CFA No Marker Model reveal the degree of error
associated with doing no correction and allow for a comparison of the differences in error
between including a marker variable when no common method variance is present and excluding
a marker variable when common method variance is present.
Analytical/Monte Carlo Simulations and Error. We conducted both analytical and Monte
Carlo simulations. Our CFA analyses were based on the 144 population covariance matrices
computed as described above based on the manipulated parameters. Research questions such as
those posed in the present study can be investigated using a population analytical simulation as
well as a Monte Carlo design in which sample parameter estimates are generated for chosen
population parameters. As noted by Bandalos (2006), population analytical simulations allow
researchers to "create data in which various assumptions have been violated to determine the
effect of such violations on study results" (p. 93). In our simulation, the assumption that was
violated in many conditions is that ΦIV,M = ΦDV,M = 0, which is to say in these conditions a
nonideal marker was used with the CFA Marker Technique that assumes an ideal marker. These
conditions were also examined with the CFA No Marker Model. Our analytical simulation used
as input the population covariance matrices associated with various levels of the manipulated
parameters that were then used subsequently with CFA models. In analyses with these population
covariance matrices we focused on the degree of error in the substantive factor correlation ΦIV,DV,
which we will refer to as population error. This type of analytical simulation approach has been
used recently to investigate differences in population parameter values with equivalent models
(Williams, 2012).
We also conducted a Monte Carlo simulation to supplement our analytical simulation and
allow direct comparison of some of our findings to those of Richardson et al. (2009). As noted by

Bandalos (2006), the Monte Carlo approach can be valuable when sampling variability and/or
stability of results is of interest, and it was the approach used by Richardson et al. (2009). For
our Monte Carlo analyses, we examined three measures of error using parameter estimates to
supplement the assessment of population factor correlation error mentioned above. Our first
measure of error, bias, has been discussed by Bandalos (2012, eq. 6.1) and as noted earlier is the
mean of the differences between the population factor correlation error and all of the sample
estimates of the error for a given cell or combination of cells from the Monte Carlo analysis. We
also examined efficiency or the variability of the sample parameter estimates using the mean
squared error (MSE), calculated as the average of the squared deviations of the sample factor
correlation error estimates from their population values (e.g., Bandalos, 2012, eq. 6.2). Our
values for bias and MSE were obtained from output provided by Mplus 7.1 (Muthén & Muthén,
1998-2013). Finally, to obtain results that we could directly compare to the findings reported
by Richardson et al. (2009), we also examined absolute error following their calculation as the
average absolute difference between the population factor correlation error and the sample
estimates. Thus, we calculated the absolute values of the differences between the population
factor correlation error value and the sample estimates of this error. Detailed information on the
comparability of analytical simulation and Monte Carlo results for studying specification error is
presented in Appendix A. In general, given adequate sample size and/or an adequate number of
trials, a Monte Carlo simulation with a correctly specified CFA model will yield mean values of
parameters that converge to their population values used in an analytical simulation.
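These three indices reduce to a few lines of code. The sketch below is a minimal illustration in Python; the function name and the toy estimates are ours, not values from the study:

```python
import numpy as np

def error_measures(sample_estimates, population_value):
    """Compute bias, mean squared error (MSE), and absolute error (AE)
    for a set of sample estimates of a single population parameter."""
    est = np.asarray(sample_estimates, dtype=float)
    deviations = est - population_value
    bias = deviations.mean()          # mean signed deviation (cf. Bandalos, 2012, eq. 6.1)
    mse = (deviations ** 2).mean()    # mean squared deviation (cf. eq. 6.2)
    ae = np.abs(deviations).mean()    # mean absolute deviation (cf. Richardson et al., 2009)
    return bias, mse, ae

# Toy example: four replications estimating a population value of .40
bias, mse, ae = error_measures([.38, .44, .41, .37], .40)
```

Note that bias can be near zero while MSE and AE remain non-zero, which is why the three indices are reported separately.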
Finally, using results from our analytical simulation we also report the absolute error due
to model misspecification in other model parameters, including substantive and method factor
loadings for the substantive indicators, and the unique variance of the substantive indicators.
These parameters are important for judging the quality of the indicators, and if non-zero, these
errors represent additional effects of using nonideal markers. In discussing results of individual
cells, we will not use the qualifier absolute so we can discuss errors whose values are both
positive and negative.
Procedure
As a first step, to create the population covariance matrix for a given cell in our design,
parameters were fixed to the values associated with each set of conditions (calculated as
previously described) and then used as input with LISREL 8.80 (Jöreskog & Sörbom, 2006).
This step resulted in a fitted covariance matrix, computed by LISREL as Σ = ΛXΦΛX′ + Θδ.
The values in this matrix represent those obtained through implementation of Equations 1a-1e in
Appendix B, with LISREL providing the computations. This approach to generating the
population covariance matrix based on fixed parameter values has been discussed by Bandalos
(2006, p. 405). We then used the population covariance matrix as input with a program file based
on the same model that estimated the parameters and found that they were recovered perfectly.
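The data-generation step can be reproduced with basic matrix algebra. The sketch below builds a model-implied covariance matrix Σ = ΛXΦΛX′ + Θδ for a hypothetical two-factor model with two indicators per factor; the parameter values are invented for illustration and are not those used in the study:

```python
import numpy as np

# Hypothetical values: two factors, two indicators each
lam = np.array([[.7, .0],
                [.7, .0],
                [.0, .7],
                [.0, .7]])             # Lambda-X: factor loadings
phi = np.array([[1.0, .4],
                [.4, 1.0]])            # Phi: factor correlation matrix
theta = np.diag([.51, .51, .51, .51])  # Theta-delta: unique variances

# Model-implied population covariance matrix
sigma = lam @ phi @ lam.T + theta
```

Feeding sigma back into a correctly specified CFA should recover the fixed parameter values exactly, which is the verification step described above.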
In a second step for each cell, we used the fitted covariance matrix with the CFA Marker
Model that includes an ideal marker latent variable. In this latter step all parameters shown in
Figure 1 were freely computed by LISREL, with the exception of the two factor correlations
between the substantive factors and the method factor (φIV,M = φDV,M = .00). As noted before,
constraining these two correlations to zero was necessary for the CFA Marker Model to be
identified. Within each of the 144 cells of our design, the extent to which error is created by
fixing these two correlations to zero when they are non-zero in the population (i.e., the fitted
covariance matrix) is reflected in the difference between (a) the value of the φIV,DV parameter used to create the population covariance matrix and (b) the value obtained by implementing the ideal marker variable model. The extent of error in other parameters due to incorrectly assuming an
ideal marker variable was similarly computed.
In a third step for each cell of our design, we used the 144 population covariance matrices
as input with the CFA No Marker Model, which as mentioned earlier had an orthogonal marker
method factor but no marker factor loadings on substantive indicators. Examination of results
from this model revealed the impact of ignoring potential marker method variance, which is to
say these results reveal the consequences of applying no correction technique.
Our fourth step in the research design involved the Monte Carlo analyses. For each
unique value of the population factor correlation error obtained with various cells of our design,
a set of 200 sample values were generated assuming a multivariate normal distribution, and this
was done based on three levels of sample size (n=100, 300,1000, following Richardson et al.).
These sample error values were then used to obtain our three additional indices reflecting the
amount of error in estimating the substantive factor correlation with correctly and incorrectly
specified CFA models.
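Assuming multivariate normality, the replication step can be sketched as follows (Python; sigma here is a small invented population covariance matrix standing in for the ones generated in the design):

```python
import numpy as np

rng = np.random.default_rng(seed=42)
sigma = np.array([[1.0, .3],
                  [.3, 1.0]])   # hypothetical population covariance matrix
mu = np.zeros(sigma.shape[0])   # means are irrelevant to covariance structure

samples = []
for n in (100, 300, 1000):      # the three sample sizes used
    for _ in range(200):        # 200 replications per condition
        data = rng.multivariate_normal(mu, sigma, size=n)
        samples.append(np.cov(data, rowvar=False))  # sample covariance matrix
```

Each sample covariance matrix would then be analyzed with the correctly or incorrectly specified CFA model, and the resulting factor correlation estimates fed into the error indices above.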
Next, we used covariance algebra to calculate the scale correlations (rIVs,MVs) that would be obtained between the marker and independent variable scales across the various values of the manipulated parameters; we did not need a separate calculation for the correlation between the marker and dependent variable scales, since its value is the same given our assumption that φIV,M and φDV,M always
had equal values. This final step of computing population scale correlations with the simulated
data was taken so we could determine which error values were most representative of typical
organizational research. Formulas linking the model parameters to indicator variances and
covariances are presented in Appendix B and are based on the approach to using path diagrams
to determine covariances and variances of indicators in a path model as described by Loehlin
(2004). Once the scale correlations were computed, we focused our attention on the specific
population scale correlations that closely matched our meta-analysis results of marker
correlations from management research. For our population scale correlations that matched the
meta-analysis results, the population error and three Monte Carlo based errors (bias, mean
squared error, absolute error) were examined to determine the effectiveness of the CFA Marker
Technique in typical organizational research.
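The covariance-algebra step for scale correlations amounts to computing the correlation between two unit-weighted composites from the implied indicator covariance matrix. A sketch, with an invented four-indicator matrix in which the first two indicators form one scale and the last two the other:

```python
import numpy as np

def composite_correlation(sigma, items_a, items_b):
    """Correlation between two unit-weighted sums of indicators,
    given the (population) indicator covariance matrix sigma."""
    a = np.zeros(sigma.shape[0]); a[list(items_a)] = 1.0
    b = np.zeros(sigma.shape[0]); b[list(items_b)] = 1.0
    cov_ab = a @ sigma @ b                       # Cov(sum A, sum B)
    var_a, var_b = a @ sigma @ a, b @ sigma @ b  # Var(sum A), Var(sum B)
    return cov_ab / np.sqrt(var_a * var_b)

# Hypothetical implied covariance matrix for four indicators
sigma = np.array([[1.0, .49, .20, .20],
                  [.49, 1.0, .20, .20],
                  [.20, .20, 1.0, .49],
                  [.20, .20, .49, 1.0]])
r_scales = composite_correlation(sigma, (0, 1), (2, 3))
```

Because the composites are unit-weighted, the result depends only on the implied variances and covariances, which is why the population scale correlations can be computed without generating any raw data.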
Meta-analysis of marker variables from organizational research
Search parameters. We used various combinations of the following keywords in Google
Scholar, PsycInfo, and ProQuest Dissertations & Theses: marker variable, Lindell and Whitney,
common method variance, and common method bias. In addition, we conducted a legacy search
of all articles that have cited either Lindell and Whitney (2001) or Williams et al. (2010). Our
search yielded 1091 unique sources.
Inclusion criteria. In order to be included in the analysis an article needed to have used a
marker variable and needed to have reported the relations between the marker and substantive
variables. This eliminated 307 studies. Because our interest was restricted to management and
applied psychology research, we eliminated 618 studies from the marketing, accounting, basic
psychology, and information systems literatures. We also eliminated 10 review pieces and
simulation studies. Finally, we were unable to locate 36 citations, most of which were
unpublished. Our final usable sample was 120 studies. The complete dataset is included in
Appendix E and the citations are denoted with asterisks in the Reference section.
Coding of studies. For each study, we coded the mean absolute value of the relations
between the marker variable and the substantive variables. In addition, we coded whether the
choice to test for CMV using a marker was a priori or post hoc, whether the partial correlation

strategy, CFA marker approach, or some variant (e.g., the statistical significance of the marker-substantive relations) was used, whether the marker variable had the same scaling as the
substantive variables, and finally whether the author found evidence of CMV. For studies by the
same author, we used Wood's (2008) detection heuristics to determine whether the sample
appeared in multiple publications. Fifteen (15) studies used the smallest or second smallest
correlation as a proxy for CMV, and did not report the remaining relations with the other
substantive variables. In these cases, we coded the reported correlations. The results of these
fifteen studies were not systematically larger or smaller than those that reported all correlations
between marker and substantive variables (Q = .031, p = .58), therefore they were included in the
final sample. For the five studies that only reported the range of correlations between the marker
and substantive variables, we took the midpoint, and for the two studies that only reported beta coefficients, we used the Peterson and Brown (2005) β-to-r transformation. Results with and
without these studies were nearly identical.
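The Peterson and Brown (2005) transformation imputes r from a standardized regression coefficient; a sketch of the formula as we understand it (Python):

```python
def beta_to_r(beta):
    """Approximate r from a standardized beta coefficient
    (Peterson & Brown, 2005): r = .98*beta + .05*lam, where
    lam = 1 if beta is nonnegative and 0 otherwise."""
    lam = 1.0 if beta >= 0 else 0.0
    return .98 * beta + .05 * lam
```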
Analytic procedure. We used Hunter and Schmidt's (2004) equations for all analyses. We
report the weighted mean estimate, the estimate corrected for unreliability at the three
aforementioned levels of .70, .80, and .90, the 95% confidence interval, observed variance,
percentage of variance due to sampling error, and the 99% credibility interval. The credibility
interval speaks to the degree that an effect will generalize across various subpopulations. As
stated above, a simulation only provides value to the extent that its assumptions and parameters
are typical of that found in practice. We chose such a conservative credibility interval to
demonstrate the widest range of marker variable parameters in order to assess the reasonableness
of simulation parameters. The use of such a wide credibility interval was also chosen to
compensate for any potential outcome-reporting bias that may occur, since disclosing a large amount of method variance in a manuscript would likely negatively affect the reviewers' evaluation.
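Two of the Hunter and Schmidt (2004) quantities reported below, the sample-size-weighted mean correlation and its correction for unreliability, can be sketched as follows (Python; the study values are invented for illustration):

```python
import numpy as np

def weighted_mean_r(rs, ns):
    """Sample-size-weighted mean correlation (Hunter & Schmidt, 2004)."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    return (ns * rs).sum() / ns.sum()

def correct_for_unreliability(r, rxx, ryy):
    """Disattenuate r for measurement error in both variables."""
    return r / np.sqrt(rxx * ryy)

# Invented example: three studies with correlations .05, .08, .06
r_bar = weighted_mean_r([.05, .08, .06], [100, 200, 300])
r_c70 = correct_for_unreliability(r_bar, .70, .70)  # corrected at the .70 level
```

With both reliabilities set to .70, the correction divides the mean correlation by .70 (the square root of .49), so a weighted mean of .065 rises to roughly .093.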
RESULTS
Substantive factor correlation. Table 1 summarizes results on the error for the Independent Variable-Dependent Variable factor correlation, φIV,DV. The information from our study presented in Table 1 is obtained from a full table of results that is provided in Appendix C. We first note that the population error in φIV,DV did not depend on item reliabilities or the ratio of substantive to method variance; as can be seen in Appendix C, it varied based only on the correlation between the two substantive latent variables (φIV,DV) and the correlations between the marker and substantive latent variables (φIV,M, φDV,M). For example, in Appendix C it can be seen that the error in recovering the substantive factor correlation when φIV,M = φDV,M = .20 and φIV,DV = .00 is .042 regardless of the ratio of substantive to method variance or the reliability of the indicators. As a result, relative to Appendix C, Table 1 eliminates redundant information, and each value presented represents the same value obtained across cells where the item reliabilities and the ratio of substantive to method variance varied. For purposes of the comparisons we will make in our discussion, we also present in Table 1 relevant comparable findings from Richardson et al. (2009), as well as our Monte Carlo results.
Returning to Table 1, the first set of population error values for the CFA Marker Model shows that when an ideal marker variable (φIV,M = φDV,M = .00) was used in generating the data but no common method variance was present, the φIV,DV factor correlation was recovered perfectly, and there was no error for any conditions across all four values of φIV,DV (.0, .2, .4, .6). For this population error of .00, the corresponding Monte Carlo estimates combined across the three sample sizes were .001 (bias), .004 (MSE), and .048 (AE). The second set of results for the CFA Marker Model in Table 1 shows that when an ideal marker was assumed with the CFA Marker Technique and CMV was present, there was also no error across all four conditions (φIV,DV = .0, .2, .4, .6). Given the lack of error, the Monte Carlo results for the first set of results also apply to the second set.
The third and fourth sets of error values focus on the use of nonideal markers. The third set involved no common method variance. Using the CFA Marker Model with nonideal markers and a zero substantive relation (φIV,DV = .0), the error across both levels of being nonideal (φIV,M = .2, .4) was .12, with much less error with the lower substantive-marker correlation of .2 (error = .04) than the higher substantive-marker correlation of .4 (error = .19). Combining across the two nonideal values of φIV,M, the Monte Carlo results indicated bias = .121, MSE = .019, and AE = .124. A similar pattern with nonideal markers emerged for conditions where the substantive factor correlation was non-zero, with an overall error of .07 but a much lower value of .03 when φIV,M = .20 and a higher error value of .11 when φIV,M = .40. The three corresponding Monte Carlo error values were .071, .009, and .081 (bias, MSE, and AE, respectively). With our last set of findings involving the CFA Marker Model, the results for the fourth set, based on the CFA Marker Model with nonideal markers and CMV present, show that the same error values were obtained as when no CMV was present.
Table 1 also presents the comparable results associated with use of the CFA No Marker Model (all 144 conditions are reported in Appendix D), which as noted earlier includes a marker latent variable but no marker method loadings on substantive indicators. These results are summarized in the fifth set of Table 1 and show that when the CFA No Marker Model is used and no marker common method variance is present, no specification error results. As a result, the Monte Carlo results associated with Condition Sets 1 and 2 apply. In contrast, the sixth set of results indicates that when common method variance is present but the CFA No Marker Model is used, so that no correction for common method variance is applied, there is substantial specification error in the φIV,DV factor correlation. Specifically, the results show that when φIV,DV = 0, the overall population error was .48, with values of .40, .49, and .56, depending on whether the φIV,M correlation was .00, .20, or .40. Based on the combined population error of .48, the Monte Carlo error values were .480 (bias), .232 (MSE), and .480 (AE). Alternatively, when φIV,DV was not zero (values of .20, .40, or .60), the average error value was .29, with values of .24, .29, and .33, depending on the degree to which the marker variable was ideal or nonideal. For this population error of .29, the Monte Carlo error values were .290, .088, and .290 (bias, MSE, and AE, respectively).
Measurement parameters. Table 2 presents the true parameter values for the substantive (λs) and marker (λm) factor loadings, as well as the error in these parameters across the three values of φIV,M and φDV,M.² To see the impact of incorrectly assuming an ideal marker when implementing the CFA Marker Model, note how the errors in the substantive and marker factor loadings increase as φIV,M and φDV,M increase from .00 to .20 and .40. For example, when the item reliability was .368 and the ratio of substantive to method variance was 80:20, the error for the substantive factor loading increased from .000 to .026 to .072. These positive values indicate that the true value was greater than the value associated with misspecification, which is to say the substantive factor loadings were less than their true values (i.e., substantive factor loadings were underestimated). Across all cells, the average absolute error for the substantive factor loadings
was .056. Using the same example as above, the error in the method factor loadings increased from .000 to -.098 to -.191, indicating these factor loadings had values greater than their true values. Across all cells, the average absolute error for the method factor loadings was .146. The results of the specification errors for the CFA No Marker Model conditions are also presented in Table 2. Because the method factor loadings were not estimated in this model, we only discuss the specification errors in the substantive factor loadings, and these did not depend on the value of φIV,M. Across the 12 sets of conditions, the specification errors ranged from .000 to -.306, with an average absolute error of .125.

² These errors did not depend on the substantive factor correlation (φIV,DV). The same is true of the computed scale correlations (rIVs,MVs). As a result, Table 3 eliminates redundant information, and each value presented represents the same value obtained across the four cells where φIV,DV varied. For a complete list of results across all conditions, see Appendices C and D.
IV-marker variable scale correlations in simulation. As noted in the introduction, a key aspect of any simulation concerns the degree to which realistic conditions are examined, and information is needed about the amount of factor correlation specification error associated with marker variable use in typical management research. Toward this end, we first calculated the scale correlations rIVs,MVs across the various cells of our design. These correlations are shown in Table 2. An examination of the condition where φIV,M (and φDV,M) = 0 shows that even in the absence of a relation between the independent and marker latent variables (i.e., an ideal marker), these scale correlations range from .000 in the no-CMV condition to .632 in the 40:60 CMV condition. For the first nonideal marker condition, φIV,M = .200, scale correlations ranged from .203 to .694, and for the second nonideal marker condition, φIV,M = .400, scale correlations ranged from .406 to .752.
Meta-analysis: IV-marker variable scale correlations in practice. The results of our meta-analyses are presented in Table 3 and a complete list of the studies is provided in Appendix E. The weighted mean correlation between the marker variable and substantive variables, robs, was .065 (95% CI .051, .078). Correcting for unreliability at the .90, .80, and .70 levels for both the marker and substantive measures yielded estimates of .072, .081, and .093, respectively. We note
that while these relatively low values may suggest that common method variance is not a large
problem, it is equally likely that any given marker variable may not capture the full range of
factors and processes that create common method variance. The percentage of variance
attributable to sampling error was only 31.4%, thus there is evidence of potential moderation.
Based on an examination of the confidence intervals, one can see that researchers using the CFA
marker technique reported larger marker variable relations than those using the partial correlation
technique (.116 vs. .057). In addition, marker variables with the same scaling as the substantive
variables showed larger relations than those with different scaling (.092 versus .029). Despite
evidence of differences based on marker variable technique and scaling, it is important to note that all of the subsample effect sizes are small to non-existent by Cohen's (1980) standards (range .03-.12). However, in certain situations (e.g., a substantive relation near the .05 threshold of statistical significance), even small differences in the magnitude of marker variable relations can have large effects on the conclusions drawn by the researcher. Further, we note that even after disaggregating the data by type, technique, and scaling, the percentage of variance attributable to sampling error was in many cases still low, indicating potential moderators. As with the choice of
analytic technique, the choice of marker variables can have an important impact on findings and
conclusions.
Generalizability of results: which conditions are relevant? The central purpose of the
meta-analysis was to determine the conditions, in both our simulation as well as those in
Richardson et al. that best typify what is observed in practice. To understand whether the
simulation parameters yield data that are an accurate reflection of typical management and
applied psychology research, we rely on the credibility interval. As described above, we use the
very conservative 99% credibility interval of the overall estimate. The interval ranged from -.094
to .223, and of the 36 conditions, only six fell within this range, in that our computed scale
correlations were less than .223. These six conditions are bolded in Table 2. Even if we build the
credibility interval around the overall estimate that assumes the lowest reliability (rc.70 = .114),
the 99% credibility interval ranges from -.134 to .318, and even if we include those nearing this
upper bound (i.e., r < .35), this only encompasses five more for a total of 11 conditions. These
conditions are underlined in Table 2. Therefore, using the most conservative standards based on
the meta-analytic results, these 11 conditions deserved the greatest attention, as it is only these
eleven conditions that are even remotely plausible in practice. This is further evidenced by the
fact that only 3 of the 120 studies included in the meta-analysis contained absolute mean marker
variable relations in excess of .30. Finally, for all sets of conditions for the most nonideal marker variable (φIV,M = .40), where specification error was greatest, no computed scale
correlations fell within the range of what is typically seen in management and applied
psychology research using marker variables. Specifically, for these conditions the scale
correlations ranged from .41 to .75. This latter finding reveals that the conditions where the nonideal marker created the greatest specification errors in our simulation are extremely unlikely to
be encountered in practice.
Specification error for 11 conditions typical to management research. We next focused on
the 11 conditions representative of management research and examined Appendix C to determine
the population factor correlation error associated with each condition. This search revealed five
population error values, which are shown in Table 4, along with their corresponding Monte Carlo
error results. First we note that the population factor correlation error values ranged from .000
to .042, with the value of .000 occurring in seven of the 11 conditions. The average of the
remaining four non-zero values (.017, .025, .033, .042) was .029. Second, we draw attention to
the importance of sample size, as the values in Table 4 show how the three Monte Carlo based
estimates of error were larger with sample sizes of 100 vs. 300 and 1000. For example, with the largest population error value of .042, the three estimates of bias dropped from .051 to .039 and .040 as the sample size increased from 100 to 300 and 1000. Finally, we note that only in one case did
the estimate of bias exceed the previously mentioned threshold value of .05 offered by Hoogland
and Boomsma (1998). Specifically, for the population error of .042 when evaluated with sample
sizes of 100, the bias was .051, barely exceeding the threshold value. In sum, these results show
the CFA Marker Technique works very well under nearly all conditions likely to be encountered
by organizational researchers.
DISCUSSION
There has been a longstanding interest in the biasing effects of common method variance,
and a variety of techniques for avoiding and statistically controlling for such bias have been
advocated and implemented. Most recently, use of marker variables that are substantively or
theoretically unrelated to constructs in a given study has been suggested and incorporated into
statistical procedures implemented with confirmatory factor analysis, including the CFA Marker
Technique. Richardson et al. (2009) evaluated the CFA Marker Technique. While supporting the
use of the technique for detecting the presence of CMV, their concern regarding its use for
obtaining acceptable estimates of substantive latent variable relations has been widely noted by
researchers from management and other disciplines. As we consider our results, we will frame
our comments around conditions that differ on whether common method variance is present or
not and whether marker variables are ideal or nonideal, and then we compare these results to
those of the No Marker Model. We will also place special emphasis in our discussion on where
and why our results differ from those of Richardson et al. Most importantly, we will also focus
on our results that match data from typical organizational research contexts.
CFA Marker Technique and Factor Correlation Error. We recognize that at the beginning
of a researcher's use of the CFA Marker Technique in any given study, they do not know whether marker CMV is present. In fact, this is why they might use the Technique: so that if it is present they might statistically control for it to best estimate the substantive relation and obtain the best test of the underlying theory. We first consider the outcomes that will occur in such a situation if
no marker based common method variance is present. The results from Set 1 shown in Table 1
indicate that if a researcher uses the CFA Marker Model with an ideal marker, but no CMV is
present, there is little error in the obtained value of φIV,DV. And the results show this finding
holds regardless of the true substantive factor correlation (.00, .20, .40, .60). From our Monte
Carlo results combining across our three sample sizes, our absolute error was .048. Richardson et
al. reported error values of .07 and .06 for this set of conditions.
Alternatively, if in such a situation a marker is correlated with substantive variables (i.e.,
nonideal) but does not contribute to shared common method variance (Set 3), the overall error
we obtained was .12 if the two substantive variables were uncorrelated and .07 if they were
correlated at either .2, .4, or .6 levels. Our corresponding Monte Carlo absolute error values were
.124 and .081, respectively. Once again, these values are very similar to the Richardson et al.
values of .11 and .08. Thus, our results for Sets 1 and 3 are fairly consistent with prior simulation
work. Our bias findings match those of previous Monte Carlo method variance studies, and our
absolute error values are fairly close to those of Richardson et al.
We now move to conditions where common method variance associated with a marker
variable is present and the CFA Marker Technique is used. One of our most important findings is
shown in Set 2. Specifically, if the marker is ideal, the CFA Marker Technique works for the
researcher and there is no population error, regardless of the true substantive correlation. Our
associated Monte Carlo results include a value for absolute error of .048 when combining data
from the three sample sizes. However, Table 1 shows that both our analytical simulation and
Monte Carlo simulation results for Set 2 diverge considerably from those reported by Richardson
et al., who reported absolute errors of .31 and .19. It is not clear why our results diverge to this
extent, but we emphasize that in this situation a correctly specified model is being examined. Put
differently, the specified model is the same as the population model (i.e., what is being fitted is
correct in the population). We believe that since a correctly specified model is being examined,
the error would be expected to be near zero, and this is corroborated by past research using other
method variance models that found error very near to zero. It may be recalled that findings from
this past research obtained bias values ranging from .003 to .010 (Marsh & Bailey, 1991;
Conway et al., 2004; Lance et al., 2007; Le et al., 2009).
For Set 4, our results based on the presence of a nonideal marker variable are the same as
Set 3 and show that if the true substantive correlation is zero, error identical to that where no
common method variance is present would be obtained by the researcher. The fact that the error
with nonideal markers is the same with and without common method variance is due to the error originating from the constraint of the non-zero factor correlations to zero in the implementation of the CFA Marker Technique; the added method variance is accounted for by the Technique and no additional error occurs. When the substantive factor correlation was zero, the error was .12, and when it was non-zero the error was .07. Our associated Monte Carlo absolute error values were .124 and .081. However, in contrast to Set 3, where our results were similar to those of Richardson et al., for the present condition these values (.124, .081) were somewhat lower than those
reported by Richardson et al. of .17 and .12. Thus, for sets 2 and 4 involving common method
variance, our results are very different than those of Richardson et al. (2009).
These differences warrant consideration of possible reasons our findings for conditions
involving the presence of CMV do not match Richardson et al. (2009). Unfortunately, as
described below we are not able to determine specifically why our findings in conditions where
method variance is present are so different. Our desire was to replicate their study in all ways
except the manner in which our Monte Carlo analyses were implemented (we used MPlus). One
possibility for our different findings involves the specific values chosen for the parameters that
were manipulated but needed to be calculated (substantive and marker factor loadings,
uniquenesses). In our case, we report these specific values as well as how they were calculated
(as shown in Appendices B and C), and this practice was also followed in the four simulation
studies mentioned above (Marsh & Bailey, 1991; Conway et al. 2004; Lance et al., 2007; Le et
al., 2009). Richardson et al., on the other hand, did not report these values. Although they
described the various levels for reliability and method variance included in their design, they did
not present the specific parameter values for the substantive and method factor loadings or the
uniqueness values associated with the reliability and method variance conditions. Thus, maybe
differences in actual parameter values used to generate the data resulted in our different findings.
Another possible source of difference in our findings involves how the values for the
factor correlations between the two substantive variables and between the marker and substantive
IV/DV latent variables were manipulated. In our study, the various values for φIV,M and φDV,M, as well as φIV,DV, were implemented by including them in the Φ matrix used to generate the predicted covariance matrix for each model, computed as Σ = ΛXΦΛX′ + Θδ. This approach
follows that used in a variety of simulation studies of various types of confirmatory factor
models and is a well-accepted practice. Alternatively, in implementing common method variance as shared variance among indicators, Richardson et al. state that the random variable was "added to all the relevant observed scores" (p. 777), and for noncongeneric common method variance it was "added equally to the items of each construct to help yield each observed score" (p. 778).
While this describes how they manipulated the amount of common method variance, it does not
explain how they built their data so that the method latent variable was correlated (or
uncorrelated with) the substantive latent variables. Thus, differences in implementing the
marker-substantive latent variable correlation may explain our divergent results.
Finally, it may also be helpful to compare the approach we used to verify our data
generation with that used by Richardson et al. (2009). As described in our procedure section, we
built our covariance matrices by manipulating fixed values for three types of parameters (ΛX, Φ, Θδ), and we then used these matrices as input into analyses with correctly specified models that
estimated parameters that had previously been fixed. These parameter estimates matched those
used to generate the data and our bias results matched what would be expected given our
population error, indicating our success at producing the desired data. Richardson et al. used a
different strategy. They checked the accuracy of their simulation by looking at "the distribution of errors between the observed and the expected correlation derived mathematically" and tested whether "the means of these distributions were significantly different from zero" (p. 786). They
did this based on the level of alpha and the amount of CMV, and noted that the confidence
interval around the mean error score always included zero. However, the fact that the
distributions matched those expected mathematically does not necessarily mean that the data
generated in method variance conditions matched those expected by the latent variable model
shown in Figure 1.

CFA No Marker Model and Factor Correlation Error. Richardson et al. concluded that,
even in the presence of method variance, use of the CFA Marker Technique was worse than
applying no statistical correction. They based their conclusion on the fact that sometimes the
technique performed significantly worse than no correction. Once again, we will start with where
our results and those of Richardson et al. converge. In cases with no common method variance
(set 5), our absolute error value of .048 was not that different from the error values of .059 and
.090 obtained by Richardson et al. Thus, our results combined with those of Richardson et al.
suggest that when method variance is not present, the No Marker Model performs rather well.
This has important implications for management and applied psychology because if, as some have
suggested (e.g., Spector, 2006), the presence of method variance has been overstated, then not
applying a correction is the best course of action.
However, our findings in Set 6 contrast with those of Richardson et al. Specifically, our
findings in Set 6 show that if the researcher ignores common method variance that is present by
not making any correction (via use of the CFA No Marker Model, which does not include marker
method factor loadings), very large error would result. Specifically, for conditions with no true
substantive factor correlation, the error was .48, and with non-zero substantive factor
correlations, the error was .29. Our Monte Carlo results were consistent with these values, with
absolute error of .48 and .29. In both cases, these values were considerably larger than the .32
and .14 values that Richardson et al. obtained. For situations common to organizational
researchers (ΦIV,M and ΦDV,M = .0 or .2), such a strategy would result in errors of .40 or .49 when the
substantive factors were uncorrelated, depending on whether the marker was ideal or not. Errors
of .24 and .29 would be expected when the substantive factors were correlated.

In considering these differences, it may be recalled that part of our justification for the
use of the CFA No Marker Model as compared to examining scale correlations as done by
Richardson et al. was that in the latter case potential inflating effects of CMV would be partially
or completely offset by the unaccounted-for attenuating effects of unreliability. This suspicion
was confirmed by the fact that with the CFA No Marker Model, which via the use of latent
variables and indicators accounts for random measurement error, our error values were much
greater. On this point, we diverge from Richardson et al., but we speculate that the difference in
conclusion is largely due to whether or not unreliability is accounted for by the specific analytic
technique. Assuming that it is desirable to control for unreliability, our recommendation is that if
method variance is a concern in a latent variable context (e.g., CFA, SEM), use of the CFA
Marker Technique will yield far more accurate estimates than applying no correction.
Factor Correlation Error in Typical Organizational Research. Perhaps most
importantly, we claimed in our introduction that it is critical when evaluating simulation
results to make sure conditions match the type of data used in those contexts. In the
present case, this means that the results that should be given the most emphasis would be
those obtained in scenarios where the scale correlations of marker variables match those
reported in typical organizational research. We remind the reader that scale correlations
must be used for this purpose: substantive-marker factor correlations are usually not
reported, since they are not commonly estimated in implementations of the CFA Marker
Technique, whereas scale correlations typically are.
There are a couple of ways to identify the value of a typical marker variable correlation,
so that it can be used to determine which estimates of error in the substantive factor correlation
are most relevant. In discussing null hypothesis tests, Edwards and Berry (2010) discussed
"empirical relations among variables that are independent from a conceptual standpoint" (p.
671). They reported values from Lykken (1968) indicating that theoretically unrelated
psychological variables share about 4% to 5% common variance, "which translates into expected
correlations ranging from .20 to .22" (p. 671). Edwards and Berry also present results from a
review by Meehl (1990) of various substantive domains in which measures designed to be
unrelated exhibited correlations "around .20 to .30" (p. 671). In both cases, the key point is that a correlation in
the range of .20 to .30 can be possible, even in cases where there is not an expected theoretical
relation, cases that match those with an ideal marker that is chosen because it is theoretically
unrelated to substantive variables in the study.
However, a better approach for obtaining a representative marker variable correlation is
through meta-analysis. We found from our meta-analysis of marker studies that the vast majority
of the correlations (117 of 120) between marker scales and substantive scales fell well below the
computed values associated with the conditions in Richardson et al. that are replicated in our
study. This was also reflected in the 99% credibility interval of our most liberal estimate of the
relation between marker and substantive variables, which showed that across all subpopulations
of marker variables, 99.5% will have a population estimate less than .318. As shown in Table 4,
in the 11 cases that match the meta-analysis values the bias is minimal and does not meet the .05
level of severity. Thus, we can conclude that under realistic conditions the Marker Variable
Model recovers estimates with either no error (ideal marker) or relatively minor error (nonideal
marker). Perhaps most importantly, in both cases the error is much less than that if no correction
was attempted.
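The computation behind such a one-sided credibility bound is straightforward: the upper limit of a 99% credibility interval is the point below which 99.5% of subpopulation correlations are expected to fall. The sketch below uses hypothetical meta-analytic summary values (not the estimates reported here) purely to illustrate the mechanics:

```python
from statistics import NormalDist

# Hypothetical meta-analytic summary values (NOT this study's estimates),
# shown only to illustrate how the one-sided bound is obtained.
mean_rho = 0.10   # mean marker-substantive correlation across studies
sd_rho = 0.08     # estimated SD of the population correlations

# Upper limit of the 99% credibility interval: 99.5% of subpopulation
# correlations are expected to fall below this value.
z = NormalDist().inv_cdf(0.995)        # ~2.576
upper = mean_rho + z * sd_rho
print(round(upper, 3))                 # 0.306 with these hypothetical inputs
```

With these illustrative inputs the bound lands near the .318 figure discussed above, but the actual value depends entirely on the meta-analytic mean and SD used.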
Error in measurement parameters. As noted earlier, we also extend previous work on this
topic by examining the error in estimating other parameters of the CFA Marker Model with the use

of ideal and nonideal marker variables. Our summary of results in Table 2 indicates that with a
CFA Marker Model and an ideal marker, substantive and marker method variance factor loadings
were accurately recovered and there was no error in estimation. Alternatively, with nonideal
markers the error in estimating substantive factor loadings (λs) ranged from .008 to .017 when
ΦIV,M and ΦDV,M = .20, and from .032 to .070 when ΦIV,M and ΦDV,M = .40. For the method factor
loadings (λm), the error ranged from -.077 to -.166 when ΦIV,M and ΦDV,M = .20, and -.154 to -.298
when ΦIV,M and ΦDV,M = .40.
Although some of these values may seem extreme, it should be remembered that by
squaring them we can obtain estimates of variance accounted for by substantive and method
factors. Across all of our conditions, the conclusion about the amount of substantive variance in
an indicator and the resulting percentage accounted for would be off between .384² - .352² or
2.3%, and .692² - .622² or 8.8%, with an average value of 5.8%. For the 11 conditions commonly
faced by organizational researchers, the average value was only .006 for the substantive factor
loadings and only -.048 for the marker variable factor loadings. For the No Marker Technique
with nonideal markers the error in estimating substantive factor loadings (λs) ranged from .000 in
the no-CMV conditions to .306 when CMV was greatest (i.e., the 40:60 conditions). In the eleven
conditions typically seen in management and applied psychology research, the average error in
the substantive factor loading was -.039. One positive finding with the CFA No Marker Model is
that the error variances of the factor loadings were recovered perfectly in all conditions.
To explain the error in estimation of the various parameters under conditions of the
different values for ΦIV,M and ΦDV,M, we examine values for the covariances among the indicators
of the model shown in Figure 1, using the equations provided in Appendix B. For example, for
the case in which ΦIV,DV = .20, the item reliability was assumed to be .50, and the ratio of

substantive to method variance was 80:20. Using the values for the factor loading and
uniqueness parameters with Equation 1d resulted in an x1y1 covariance of .180 when ΦIV,M and
ΦDV,M = 0. Alternatively, when ΦIV,M and ΦDV,M = .20 the covariance was .259, and when ΦIV,M and
ΦDV,M = .40 it was .339. These values indicate how the covariance among indicators of the
independent and dependent latent variables increases as the marker latent variable becomes more
correlated with the independent and dependent latent variables. However, because this shared variance
is not accounted for by the CFA Marker Model with its assumed orthogonality, the estimated
value for ΦIV,DV is less than its predicted value and underestimation error occurs.
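These computations can be reproduced with the loading values implied by the stated conditions (λs = √.40 ≈ .632 substantive, λm = √.10 ≈ .316 method). The sketch below encodes the cross-indicator covariance as we read Equation 1d (the article's Appendix B gives the exact form), and matches the .180, .259, and .339 values up to rounding of the loadings:

```python
import math

# Loadings implied by the stated conditions: item reliability .50 with an
# 80:20 substantive-to-method variance split.
l_s = math.sqrt(0.40)   # substantive loading, ~.632
l_m = math.sqrt(0.10)   # method factor loading, ~.316
phi_iv_dv = 0.20        # substantive factor correlation

def cov_x1y1(phi_m):
    """COV(x1, y1): substantive path + shared method factor + the two
    marker-substantive correlation terms (phi_m = Phi_IV,M = Phi_DV,M)."""
    return l_s * phi_iv_dv * l_s + l_m * l_m + 2 * l_s * phi_m * l_m

for phi_m in (0.0, 0.2, 0.4):
    print(round(cov_x1y1(phi_m), 3))   # .18, .26, .34 (cf. .180, .259, .339)
```

The third term is what grows as the marker becomes nonideal, and it is exactly the shared variance that the orthogonality constraint leaves unaccounted for.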
We also can apply the process above to understand the positive error and underestimation
that occurs with the λs substantive factor loadings. Applying the same values above, Equation 1c
shows how the covariances among the indicators of the independent latent variable (e.g., x1x2)
are impacted by different values for ΦIV,M and ΦDV,M. These calculations resulted in values of .498,
.578, and .658 for the x1x2 covariance as ΦIV,M and ΦDV,M increased from .00 to .20 and .40. Since
the substantive factor loadings are based on the within-latent-variable indicator covariances, as
these covariances increase so do the factor loadings. When this shared variance due to nonzero
values of ΦIV,M and ΦDV,M is not accounted for by the CFA Marker Technique model, the estimate
is less than the population value and positive error results.
Finally, we extend the process above to understand the overestimation of the
method factor loadings λm when ΦIV,M and ΦDV,M are different from zero in the population
but assumed to be zero with the CFA Marker Technique. We used Equation 1b to examine
the covariance, x1m1, between indicators of the independent and marker latent variables
under the same conditions above (ΦIV,DV = .00, item reliability = .50, substantive-method
variance ratio of 80:20), using the marker indicator loading λ1 (as described in the Methods
section). Based on these values, when ΦIV,M and ΦDV,M = .00 the x1m1 covariance was .223,
and as ΦIV,M and ΦDV,M increased to .20 and .40 the covariance increased to .312 and .402.
So, as with the independent-dependent latent variables and the substantive factor
loadings, shared variance among indicators increased as the marker latent variable became
more nonideal. However, as this shared variance increased, values for λ5 and the other λm are
overestimated rather than underestimated under the orthogonality constraint of the CFA
Marker Technique. This can be understood by examining Equation 1b (COV(x1m1) = λ1λ5
+ λ1ΦIV,Mλ3). When ΦIV,M is incorrectly assumed to be zero, the increased shared variance
between x1 and m1 that exists due to the marker variable actually being nonideal is absorbed into
the value for λ5, since the value for λ1 is driven by the covariances among the marker
variable indicators and is not impacted by the assumption of orthogonality. As a result, in
the estimation process the value of λ5 is inflated relative to its true value, and
overestimation occurs.
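This absorption mechanism can be illustrated numerically under the same stated conditions (reliability .50, 80:20 variance ratio). The final step below, solving COV(x1m1)/λ1 for an implied λ5, is only a back-of-envelope heuristic for the direction of the bias, not the actual ML estimate, which depends on the full model fit:

```python
import math

# Conditions as stated in the text: reliability .50, 80:20 variance ratio.
l1 = math.sqrt(0.50)   # marker indicator loading, ~.707
l3 = math.sqrt(0.40)   # substantive loading of x1, ~.632
l5 = math.sqrt(0.10)   # method factor loading of x1, ~.316

def cov_x1m1(phi_iv_m):
    # Equation 1b as reconstructed: COV(x1, m1) = l1*l5 + l1 * Phi_IV,M * l3
    return l1 * l5 + l1 * phi_iv_m * l3

for phi in (0.0, 0.2, 0.4):
    print(round(cov_x1m1(phi), 3))       # .224, .313, .402 (cf. .223, .312, .402)

# With Phi_IV,M wrongly fixed at zero, the whole covariance must be carried by
# l1 * l5_hat alone; solving for the implied l5_hat shows the inflation
# (a heuristic sketch of the absorption, not the full ML estimate):
for phi in (0.0, 0.2, 0.4):
    print(round(cov_x1m1(phi) / l1, 3))  # .316, .443, .569
```

Because λ1 is pinned down by the covariances among the marker indicators themselves, the second loop shows how everything the omitted ΦIV,M term contributes to COV(x1m1) must flow into λ5, inflating it above its population value of .316.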
CFA Marker Technique: Assumptions, Strengths, and Weaknesses. We would like to
emphasize that success with the CFA Marker technique depends upon a variety of assumptions
being met. Of course, since CFA is being used, data distribution assumptions like multivariate
normality must be met, and having an adequate sample size is important (e.g., Williams,
Hartman, & Cavazotte, 2010). Also, it should be noted that as used thus far, this technique requires
the use of reflective indicators, in which arrows from the substantive latent variables go to the
indicators (and not in the opposite direction). It is also important that marker variables are
defined as "capturing or tapping into one or more of the sources of bias that can occur in the
measurement response process" (Williams et al., 2010, p. 507). The CFA Marker Technique also is
based on a particular representation of effects of method processes, in which their impact is

modeled via method factor loadings linked to substantive indicators. Antonakis, Bendahan,
Jacquart, and Lalive (2010) have shown that if instead method factors are presumed to operate as
higher-order factors influencing lower level latent variables (rather than indicators), then
approaches like the CFA Marker Technique may be less effective. This technique also requires
the assumption that the substantive and method latent variables do not share a common cause,
which would compromise its effectiveness. Finally, we note that the CFA Marker approach
assumes unidimensional method effects, in that a single latent variable is used to represent the
effects. Current research with a similar technique, the Uncorrelated Latent Method Construct
(ULMC) approach (e.g., Richardson et al., 2009), shows that if a single latent method factor is
used when method effects are multidimensional, the full effects of the method variance will not
be captured and undercorrection of factor correlation errors can occur (Williams, 2014).
To the degree that these assumptions are met, the CFA Marker Technique offers several
advantages relative to the partial-correlation approach originally proposed by Lindell and
Whitney (2001). As discussed by Williams et al. (2010), it provides statistical tests for the
presence and impact of marker based method variance, while controlling for measurement error.
It also allows for investigation of method variance at the item level, since items can be used as
indicators. Among its disadvantages are a lack of certainty as to what is actually being controlled
for, as compared to multi-method designs or research using variables like social
desirability or negative affectivity (which directly measure the source of method variance rather
than measure it indirectly, as marker variables do). And of course the technique requires the
assumption of an orthogonal or ideal marker, although our results show that under most typical
conditions this assumption may be relatively less important.
CONCLUSION

Richardson et al. (2009) are to be commended for taking an empirical approach to


investigating marker variables. They concluded the CFA Marker Technique "appears to have
some, albeit limited, practical value" (p. 795) for use in detecting the presence of CMV, but they
added that it does not necessarily produce accurate corrected estimates of relations. More
directly, they stated, "we do not recommend using the CFA marker technique for the purpose of
producing corrected correlations." Our results show these conclusions may be premature and
perhaps too extreme. Under conditions normally found in organizational research, our findings
show the specification error as a result of using non-ideal marker variables with the CFA Marker
Model to be much less than that reported by Richardson et al. (2009) and in the range where the
advantages associated with use of marker variables likely outweigh disadvantages. Further, in
conditions where method variance was present, results with the CFA No Marker Model showed
that not attempting to correct for method variance resulted in significant specification errors for
both factor loadings and factor correlations that were greater than those found under any other
circumstances. These findings suggest reconsideration of the recommendations of Richardson et
al. that not attempting to correct for marker method variance is better than using the CFA Marker
Technique.
We close with one final important point. As noted earlier, Podsakoff, MacKenzie, and
Podsakoff (2012) have claimed, "If the specific source of method bias is unknown or valid
measures of the source of bias are not available, then we recommend the CFA Marker
Technique" (p. 564). We share this view of the relative role of marker variables and the CFA
Marker Technique. It is much preferable for researchers to not need a magic bullet and instead
avoid common method variance problems through design and data collection methods, and if
there is concern about potential CMV bias it is much better to use valid measures of the source of

bias, such as social desirability or negative affectivity, and include such measures in a CFA
analysis. However, given our findings, it is also important that researchers who, because of
limitations of their research designs, use marker variables can believe the conclusions they reach
about the impact of CMV on estimated substantive relations. They do so knowing that even in
the worst case, where nonideal markers are used, the error in such estimations will be very small
and less than that resulting from making no correction.

REFERENCES
(studies marked with an * are included in the meta-analysis)
*Afsar, B., & Saeed, B. B. (2010). Subordinate's trust in the supervisor and its impact on
organizational effectiveness. Romanian Economic Journal, 13(38), 3-25.
*Agarwal, U. A., Datta, S., Blake-Beard, S., & Bhargava, S. (2012). Linking LMX, innovative work
behaviour and turnover intentions: the mediating role of work engagement. Career
Development International, 17(3), 208-230.
*Agarwal, J., Osiyevskyy, O., & Feldman, P. M. (2014). Corporate reputation measurement:
Alternative factor structures, nomological validity, and organizational outcomes. Journal of
Business Ethics.
*Alegre, J., Sengupta, K., & Lapiedra, R. (2013). Knowledge management and innovation
performance in a high-tech SMEs industry. International Small Business Journal, 31(4), 454-470.
*Alge, B. J., Ballinger, G. A., Tangirala, S., & Oakley, J. L. (2006). Information privacy in
organizations: empowering creative and extrarole performance. Journal of Applied
Psychology, 91(1), 221-232.
*Allam, H., Bliemel, M., Blustein, J., Spiteri, L., & Watters, C. (2010, June). A conceptual model for
dimensions impacting employees' participation in enterprise social tagging. In Proceedings
of the International Workshop on Modeling Social Media.
*Ashill, N. J., & Jobber, D. (2010). Measuring state, effect, and response uncertainty: Theoretical
construct development and empirical validation. Journal of Management, 36(5), 1278-1308.
Bagozzi, R. P. (1980). Causal Models in Marketing. New York: Wiley.
*Bal, P. M., Chiaburu, D. S., & Diaz, I. (2011). Does psychological contract breach decrease
proactive behaviors? The moderating effect of emotion regulation. Group & Organization
Management, 36(6), 722-758.
*Bal, P. M., De Jong, S. B., Jansen, P. G., & Bakker, A. B. (2012). Motivating Employees to Work
Beyond Retirement: A Multi-Level Study of the Role of I-Deals and Unit Climate. Journal
of Management Studies, 49(2), 306-331.

*Bammens, Y., Notelaers, G., & Van Gils, A. (2014). Implications of family business employment
for employees' innovative work involvement. Family Business Review.
Bandalos, D. L. & Gagne, P. (2012). Simulation methods in structural equation modeling. In Hoyle,
R. H. (Ed.) Handbook of Structural Equation Modeling (pp. 92-108). New York: Guilford
Press.
Bandalos, D. L. (2006). The use of Monte Carlo studies in structural equation modeling research. In
G. R. Hancock & R. O. Mueller (Eds.), Structural equation modeling: A second course (p.
385-426). Greenwich (Conn.): IAP.
*Bernerth, J. B., Armenakis, A. A., Feild, H. S., & Walker, H. J. (2007). Justice, cynicism, and
commitment a study of important organizational change variables. The Journal of Applied
Behavioral Science, 43(3), 303-326.
*Bernerth, J. B., Armenakis, A. A., Feild, H. S., Giles, W. F., & Walker, H. J. (2007). Is personality
associated with perceptions of LMX? An empirical study. Leadership & Organization
Development Journal, 28(7), 613-631.
*Bhattacharya, C., & Swain, E. S. D. (2011). Corporate social responsibility, customer orientation,
and the job performance of frontline employees. Working paper.
*Birken, S. A. (2011). Where the rubber meets the road: A mixed-method study of middle managers'
role in innovation implementation in health care organizations. (Unpublished dissertation,
University of North Carolina at Chapel Hill).
*Bock, A. J., Opsahl, T., George, G., & Gann, D. M. (2012). The effects of culture and structure on
strategic flexibility during business model innovation. Journal of Management Studies,
49(2), 279-305.
*Boichuk, J. (2010). When job dissatisfaction leads to customer-oriented citizenship behaviors.
(Doctoral dissertation, Brock University).
*Booth, J. E., Park, K. W., & Glomb, T. M. (2009). Employer-supported volunteering benefits: Gift
exchange among employers, employees, and volunteer organizations. Human Resource
Management, 48(2), 227-249.

*Boso, N., Cadogan, J. W., & Story, V. M. (2012). Complementary effect of entrepreneurial and
market orientations on export new product success under differing levels of competitive
intensity and financial capital. International Business Review, 21(4), 667-681.
*Boso, N., Cadogan, J. W., & Story, V. M. (2013). Entrepreneurial orientation and market orientation
as drivers of product innovation success: A study of exporters from a developing economy.
International Small Business Journal, 31(1), 57-81.
*Bowling, N. A. (2010). Effects of job satisfaction and conscientiousness on extra-role behaviors.
Journal of Business and Psychology, 25(1), 119-130.
*Brees, J. R. (2012). The relation between subordinates' individual differences and their perceptions
of abusive supervision. (Unpublished dissertation, Florida State University).
Burton-Jones, A. (2009). Minimizing method bias through programmatic research. MIS Quarterly
33, 445-471.
*Bustamante, J. (2010). Investigating the effects of racioethnic diversity on organizational outcomes:
The mediating role of social capital and the moderating role of diversity climate
(Unpublished dissertation, Columbia University).
*Cameron, A. F., & Webster, J. (2011). Relational outcomes of multicommunicating: Integrating
incivility and social exchange perspectives. Organization Science, 22(3), 754-771.
*Cardon, M. S., Gregoire, D. A., Stevens, C. E., & Patel, P. C. (2013). Measuring entrepreneurial
passion: Conceptual foundations and scale validation. Journal of Business Venturing, 28, 373-396.
*Carver, J. R. (2009). CMO: Chief marketing officer or chief "marginalized" officer. (Unpublished
dissertation, University of Arizona).
*Chang, J. J., Hung, K. P., & Lin, M. J. J. (2014). Knowledge creation and new product
performance: the role of creativity. R&D Management, 44(2), 107-123.
*Che-Ha, N., Mavondo, F. T., & Mohd-Said, S. (in press). Performance or learning goal orientation:
Implications for business performance. Journal of Business Research.
*Chiaburu, D. S., & Carpenter, N. C. (2013). Employees' motivation for personal initiative: The
joint influence of status and communion striving. Journal of Personnel Psychology, 12(2),
97-103.

*Choi, S. (2010). Task and Relation Conflict in Subordinates' and Supervisors' Relations: Interaction
Effects of Justice Perceptions and Emotion Management (Doctoral dissertation, Texas A&M
University).
*Ciabuschi, F., & Martín, O. M. (2012). Knowledge ambiguity, innovation and subsidiary
performance. Baltic Journal of Management, 7(2), 143-166.
*Ciabuschi, F., Dellestrand, H., & Martín, O. M. (2011). Internal embeddedness, headquarters
involvement, and innovation importance in multinational enterprises. Journal of
Management Studies, 48(7), 1612-1639.
*Ciabuschi, F., Forsgren, M., & Martín Martín, O. (2012). Headquarters involvement and efficiency
of innovation development and transfer in multinationals: A matter of sheer ignorance?
International Business Review, 21(2), 130-144.
Conway, J. M., Lievens, F., Scullen, S., & Lance. C. (2004). Bias in the correlated uniqueness model
for MTMM data. Structural Equation Modeling, 11, 535-559.
Conway, J.M. & Lance, C.E., (2010). What reviewers should expect from authors regarding common
method bias in organizational research. Journal of Business and Psychology 25, 325-334.
*Crespo, C. F., Griffith, D. A., & Lages, L. F. (2014). The performance effects of vertical and
horizontal subsidiary knowledge outflows in multinational corporations. International
Business Review.
*Crossley, C. D., Cooper, C. D., & Wernsing, T. S. (2013). Making things happen through
challenging goals: Leader proactivity, trust, and business-unit performance. Journal of
Applied Psychology, 98(3), 540-549.
*De Clercq, D., Bouckenooghe, D., Raja, U., & Matsyborska, G. (2014). Servant leadership and
work engagement: The contingency effects of leaderfollower social capital. Human
Resource Development Quarterly, 25(2), 183-212.
*Delgado-García, J. B., Rodríguez-Escudero, A. I., & Martín-Cruz, N. (2012). Influence of Affective
Traits on Entrepreneur's Goals and Satisfaction. Journal of Small Business Management,
50(3), 408-428.

*Densten, I. L., & Sarros, J. C. (2012). The impact of organizational culture and social desirability
on Australian CEO leadership. Leadership & Organization Development Journal, 33(4), 342-368.
*Dunham, A. H. (2010). Knowledge management in the context of an ageing workforce:
Organizational memory and mentoring. (Doctoral dissertation, University of Canterbury).
Edwards, J., & Berry, J. (2010). The presence of something or the absence of nothing: Increasing
theoretical precision in management research. Organizational Research Methods, 13, 668-689.
*Engelen, A., Gupta, V., Strenger, L., & Brettel, M. (in press). Entrepreneurial orientation, firm
performance, and the moderating role of transformational leadership behaviors. Journal of
Management.
*Frenkel, S., Sanders, K., & Bednall, T. (2013). Employee perceptions of management relations as
influences on job satisfaction and quit intentions. Asia Pacific Journal of Management,
30(1), 7-29.
*Goffin, R. D., & Anderson, D. W. (2007). The self-rater's personality and self-other disagreement in
multi-source performance ratings: Is disagreement healthy? Journal of Managerial
Psychology, 22(3), 271-289.
*Goldberg, C. B., Perry, E. L., Finkelstein, L. M., & Shull, A. (2013). Antecedents and outcomes of
targeting older applicants in recruitment. European Journal of Work and Organizational
Psychology, 22(3), 265-278.
*Gray, J. H., & Densten, I. L. (2007). How leaders woo followers in the romance of leadership.
Applied Psychology, 56(4), 558-581.
*Hallin, C., & Holmström Lind, C. (2012). Revisiting the external impact of MNCs: An empirical
study of the mechanisms behind knowledge spillovers from MNC subsidiaries. International
Business Review, 21(2), 167-179.
*Harris, K. J., Wheeler, A. R., & Kacmar, K. M. (2011). The mediating role of organizational job
embeddedness in the LMX-outcomes relations. The Leadership Quarterly, 22(2), 271-281.

*Hau, L. N., Evangelista, F., & Thuy, P. N. (2013). Does it pay for firms in Asia's emerging markets
to be market oriented? Evidence from Vietnam. Journal of Business Research 66, 2412-2417.
*Haynie, J. J. (2013). A combined model of uncertainty management theory and the group
engagement model of identity (Doctoral dissertation, Auburn University).
*Hong, J., Song, T. H., & Yoo, S. (2013). Paths to Success: How Do Market Orientation and
Entrepreneurship Orientation Produce New Product Success? Journal of Product Innovation
Management, 30(1), 44-55.
*Heidenreich, S., Landsperger, J., & Spieth, P. (2014). Are innovation networks in need of a
conductor? Examining the contribution of network managers in low and high complexity
settings. Long Range Planning.
*Homburg, C., Allmann, J., & Klarmann, M. (2014). Internal and external price search in industrial
buying: The moderating role of customer satisfaction. Journal of Business Research, 67(8),
1581-1588.
*Housel, T., Dimoka, A., & Pavlou, P. A. (2006). Leveraging competence in the use of leveraging
collaborative tools competence: facilitating an Open Architecture approach to acquiring
integrated warfare systems. Working paper.
Hunter, J. E., & Schmidt, F. L. (Eds.). (2004). Methods of meta-analysis: Correcting error and bias
in research findings. Sage.
*Jaramillo, F., Mulki, J. P., & Boles, J. S. (2011). Workplace Stressors, Job Attitude, and Job
Behaviors: Is Interpersonal Conflict the Missing Link? Journal of Personal Selling and Sales
Management, 31(3), 339-356.
*Jiang, J. Y., Law, K. S., & Sun, J. J. (2013). LeaderMember Relation and Burnout: The Moderating
Role of Leader Integrity. Management and Organization Review.
*Jimmieson, N. L., & White, K. M. (2011). Predicting employee intentions to support organizational
change: An examination of identification processes during a rebrand. British Journal of
Social Psychology, 50(2), 331-341.

*Jimmieson, N. L., Peach, M., & White, K. M. (2008). Utilizing the theory of planned behavior to
inform change management an investigation of employee intentions to support organizational
change. The Journal of Applied Behavioral Science, 44(2), 237-262.
*Johnson, R. E., Rosen, C. C., & Djurdjevic, E. (2011). Assessing the impact of common method
variance on higher order multidimensional constructs. Journal of Applied Psychology, 96(4),
744-761.
Jöreskog, K. G. (1974). Analyzing psychological data by structural analysis of covariance matrices.
In R. C. Atkinson, D. H. Krantz, R. D. Luce, & P. Suppes (Eds.), Contemporary
developments in mathematical psychology (Vol. 2, pp. 1-56). San Francisco: W. H. Freeman.
Jöreskog, K. G. (1971). Statistical analysis of sets of congeneric tests. Psychometrika, 36, 109-133.
Jreskog, K. G. (1979). Basic ideas of factor and component analysis. In K. G. Jreskog & D.
Srbom (Eds.), Advances in factor analysis and structural equation models (pp. 5-20).
Cambridge, MA: Abt Books.
Jreskog, K.G., & Srbom, D. (2006). LISREL 8.80 for Windows [Computer software].
Lincolnwood, IL: Scientific Software International.
*Ju, M., Murray, J. Y., Kotabe, M., & Gao, G. Y. (2011). Reducing distributor opportunism in the
export market: Effects of monitoring mechanisms, norm-based information exchange, and
market orientation. Journal of World Business, 46(4), 487-496.
*Kane, R. E., Magnusen, M. J., & Perrewé, P. L. (2012). Differential effects of identification on
extra-role behavior. Career Development International, 17(1), 25-42.
*Katsikea, E., Theodosiou, M., & Morgan, R. E. (2014). Why people quit: Explaining employee
turnover intentions among export sales managers. International Business Review.
*Kavanagh, P., Benson, J., & Brown, M. (2007). Understanding performance appraisal fairness. Asia
Pacific Journal of Human Resources, 45(2), 132-150.
Kenny, D. A. (1979). Correlation and Causality. New York: Wiley.
*Kim, D., Basu, C., Naidu, G. M., & Cavusgil, E. (2011). The innovativeness of Born-Globals and
customer orientation: learning from Indian born-globals. Journal of Business Research,
64(8), 879-886.

*Kim, T. Y., Bateman, T. S., Gilbreath, B., & Andersson, L. M. (2009). Top management credibility
and employee cynicism: A comprehensive model. Human Relations, 62(10), 1435-1458.
*Kim, A., Kim, Y., Han, K., Jackson, S. E., & Ployhart, R. E. (2014). Multilevel influences on
voluntary workplace green behavior individual differences, leader behavior, and coworker
advocacy. Journal of Management.
*Klijn, E. H., Ysa, T., Sierra, V., Berman, E., Edelenbos, J., & Chen, D. Y. (2013). Governance
networks and trust: Exploring the relation between trust and network performance in Taiwan,
Spain and The Netherlands. Paper presented at the 11th Public Management Research
Conference, Madison, Wisconsin, June 20-22, 2013.
*Knoll, D. L., & Gill, H. (2011). Antecedents of trust in supervisors, subordinates, and peers.
Journal of Managerial Psychology, 26(4), 313-330.
*Kotabe, M., Jiang, C. X., & Murray, J. Y. (2014). Examining the complementary effect of political
networking capability with absorptive capacity on the innovative performance of
emerging-market firms. Journal of Management.
*Kovjanic, S., Schuh, S. C., Jonas, K., Quaquebeke, N. V., & Dick, R. (2012). How do
transformational leaders foster positive employee outcomes? A self-determination-based
analysis of employees' needs as mediating links. Journal of Organizational Behavior, 33(8),
1031-1052.
*Krishnaveni, R., & Deepa, R. (2013). Controlling common method variance while measuring the
impact of emotional intelligence on well-being. VIKALPA, 38(1), 41-47.
*Krush, M. T., Agnihotri, R., Trainor, K. J., & Nowlin, E. L. (2013). Enhancing organizational
sensemaking: An examination of the interactive effects of sales capabilities and marketing
dashboards. Industrial Marketing Management.
*Lages, C. R., & Piercy, N. F. (2012). Key drivers of frontline employee generation of ideas for
customer service improvement. Journal of Service Research, 15(2), 215-230.
Lance, C. E., Dawson, B., Birkelbach, D., & Hoffman, B. J. (2010). Method effects, measurement
error, and substantive conclusions. Organizational Research Methods, 13, 435-455.

Lance, C., Woehr, D., & Meade, A. (2007). A Monte Carlo investigation of assessment center
construct validity models. Organizational Research Methods, 10, 430-448.
*Lazar, B. (2005). Occupational and organizational commitment and turnover intention of
employees (Unpublished dissertation, University of Phoenix).
Le, H., Schmidt, F., & Putka, D. (2009). The multifaceted nature of measurement artifacts and its
implication for estimating construct level relations. Organizational Research Methods, 12,
165-200.
*Li, C. (2013). Ethical leadership in firms: antecedents and consequences (Doctoral dissertation,
University of Alabama Tuscaloosa).
Li, J., Loerbroks, A., Jarczok, M. N., Schöllgen, I., Bosch, J. A., Mauss, D., Siegrist, J., & Fischer, J.
E. (2012). Psychometric properties and differential explanation of a short measure of
effort-reward imbalance at work: A study of industrial workers in Germany. American Journal of
Industrial Medicine, 55, 808-815.
*Lin, B. C., Kain, J. M., & Fritz, C. (2013). Don't interrupt me! An examination of the relation
between intrusions at work and employee strain. International Journal of Stress
Management, 20(2), 77.
Lindell, M.K. & Whitney, D.J. (2001). Accounting for common method variance in cross-sectional
research designs. Journal of Applied Psychology, 86, 114-121.
Loehlin, J. C. (2004). Latent variable models (4th ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
*Love, J. H., Roper, S., & Vahter, P. (in press). Learning from openness: The dynamics of breadth in
external innovation linkages. Strategic Management Journal.
*Lui, S. S., & Ngo, H. Y. (2012). Drivers and Outcomes of Long-term Orientation in Cooperative
Relations. British Journal of Management, 23(1), 80-95.
Lykken, D. (1968). Statistical significance in psychological research. Psychological Bulletin, 70,
151-159.
Marsh, H. (1989). Confirmatory factor analyses of multitrait-multimethod data: Many problems and
a few solutions. Applied Psychological Measurement, 13, 335-361.
Marsh, H., & Bailey, M. (1991). Confirmatory factor analyses of multitrait-multimethod data: A
comparison of alternative models. Applied Psychological Measurement, 15, 47-70.

Meehl, P. (1990). Appraising and amending theories: The strategy of Lakatosian defense and two
principles that warrant it. Psychological Inquiry, 1, 108-141.
*Mehta, A. (2009). Examining the role of personal, social exchange, and contextual fit variables in
employee work outcomes under continuous change: A field investigation. (Doctoral
dissertation, Auburn University).
*Meng, J., Fulk, J., & Yuan, Y. C. (in press). The Roles and Interplay of Intragroup Conflict and
Team Emotion Management on Information Seeking Behaviors in Team Contexts.
Communication Research.
*Moorman, R. H., Darnold, T. C., & Priesemuth, M. (2013). Perceived leader integrity: Supporting
the construct validity and utility of a multi-dimensional measure in two samples. The
Leadership Quarterly 24, 427-444.
Muthén, L. K., & Muthén, B. O. (1998-2013). Mplus user's guide (7th ed.). Los Angeles, CA:
Muthén & Muthén.
*Ngo, L. V., & O'Cass, A. (2012). Innovation and business success: The mediating role of customer
participation. Journal of Business Research.
*Niedle, L. (2012). A comparison of LMX, communication, and demographic differences in remote
and co-located supervisor-subordinate dyads. (Doctoral dissertation, DePaul University).
*O'Cass, A., & Ngo, L. V. (2011). Examining the firm's value creation process: a managerial
perspective of the firm's value offering strategy and performance. British Journal of
Management, 22(4), 646-671.
*O'Cass, A., & Sok, P. (in press). The role of intellectual resources, product innovation capability,
reputational resources and marketing capability combinations in SME growth. International
Small Business Journal.
Pace, V. L. (2010). Method variance from the perspectives of reviewers: Poorly understood problem
or overemphasized complaint? Organizational Research Methods, 13, 421-434.
*Panagopoulos, N., Rapp, A., & Vlachos, P. A. (2011). Corporate social performance and
employees: construed perceptions, attributions and behavioral outcomes. Working paper.

*Parker, S. L., Jimmieson, N. L., & Amiot, C. E. (2010). Self-determination as a moderator of
demands and control: Implications for employee strain and engagement. Journal of
Vocational Behavior, 76(1), 52-67.
*Pérez-Nordtvedt, L., Khavul, S., Harrison, D. A., & McGee, J. E. (in press). Adaptation to
Temporal Shocks: Influences of Strategic Interpretation And Spatial Distance. Journal of
Management Studies.
Peterson, R. A., & Brown, S. P. (2005). On the use of beta coefficients in meta-analysis. Journal of
Applied Psychology, 90, 175-181.
Podsakoff, P. M., & Organ, D. W. (1986). Self-reports in organizational research: Problems and
prospects. Journal of Management, 12, 531-544.
Podsakoff, P. M., MacKenzie, S. B., & Podsakoff, N. P. (2012). Sources of method bias in social
science research and recommendations on how to control it. Annual Review of Psychology,
63, 539-569.
*Poon, J. M. (2006). Trust-in-supervisor and helping coworkers: moderating effect of perceived
politics. Journal of Managerial Psychology, 21(6), 518-532.
*Poon, J. M., Rahid, M. R., & Othman, A. S. (2006). Trust-in-supervisor: antecedents and effect on
affective organizational commitment. Asian Academy of Management Journal, 11(2), 57-72.
*Rafferty, A. E., & Griffin, M. A. (2004). Dimensions of transformational leadership: Conceptual
and empirical extensions. The Leadership Quarterly, 15(3), 329-354.
*Rafferty, A. E., & Griffin, M. A. (2006). Refining individualized consideration: Distinguishing
developmental leadership and supportive leadership. Journal of Occupational and
Organizational Psychology, 79(1), 37-61.
*Rafferty, A. E., & Jimmieson, N. L. (2010). Team change climate: A group-level analysis of the
relations among change information and change participation, role stressors, and well-being.
European Journal of Work and Organizational Psychology, 19(5), 551-586.
*Raffiee, J., Feng, J., & Coff, R. (2013). Perceptions of Firm-Specific Human Capital: Untenured
and Uncommitted. Working paper.

Richardson, H. A., Simmering, M. J., & Sturman, M. C. (2009). A tale of three perspectives
examining post hoc statistical techniques for detection and correction of common method
variance. Organizational Research Methods, 12, 762-800.
*Rutten, J. (2012). The influence of different leadership roles on contextual ambidexterity. (Doctoral
dissertation, University of Amsterdam).
*Salter, A., ter Wal, A., Criscuolo, P., & Alexy, O. (2012). Open for Ideation: Individual-level
Openness and Idea Generation in R&D. Paper presented at DRUID, Copenhagen,
Denmark, June 19-21, 2012.
*Schneider, M., & Engelen, A. (2014). Enemy or friend? The cultural impact of cross-functional
behavior on the EO-performance link. Journal of World Business.
*Seong, J. Y., Hong, D. S., & Park, W. W. (2012). Work status, gender, and organizational
commitment among Korean workers: The mediating role of person-organization fit. Asia
Pacific Journal of Management, 29(4), 1105-1129.
*Sexton, J. C. (2012). The Creation Of New Knowledge Through The Transfer Of Existing
Knowledge: Examining The Conundrum Of Creation And Control In Innovation.
(Unpublished dissertation, Florida State University).
Sharma, R., Yetton, P., & Crawford, J. (2009). Estimating the effect of common method variance: the
method-method pair technique with an illustration from TAM research. MIS Quarterly, 33,
473-490.
*Shipton, H., Armstrong, C., West, M., & Dawson, J. (2008). The impact of leadership and quality
climate on hospital performance. International Journal for Quality in Health Care, 20(6),
439-445.
*Shou, Z., Chen, J., Zhu, W., & Yang, L. (in press). Firm capability and performance in China: The
moderating role of guanxi and institutional forces in domestic and foreign contexts.
Journal of Business Research.
Siemsen, E., Roth, A., & Oliveira, P. (2010). Common method bias in regression models with
linear, quadratic, and interaction effects. Organizational Research Methods, 13, 456-476.

*Sirén, C. A., Kohtamäki, M., & Kuckertz, A. (2012). Exploration and exploitation strategies, profit
performance, and the mediating role of strategic learning: Escaping the exploitation trap.
Strategic Entrepreneurship Journal, 6(1), 18-41.
Spector, P. E., & Brannick, M. T. (2010). Common method issues: An introduction to the feature
topic in Organizational Research Methods. Organizational Research Methods, 13, 403-406.
*Strauss, K., Griffin, M. A., & Rafferty, A. E. (2009). Proactivity Directed Toward the Team and
Organization: The Role of Leadership, Commitment and Role-breadth Self-efficacy. British
Journal of Management, 20(3), 279-291.
*Strobel, M., Tumasjan, A., & Welpe, I. (2010). Do business ethics pay off?: The influence of ethical
leadership on organizational attractiveness. Zeitschrift für Psychologie/Journal of
Psychology, 218(4), 213-224.
*Tayfur, O., Bayhan Karapinar, P., & Metin Camgoz, S. (2013). The mediating effects of emotional
exhaustion cynicism and learned helplessness on organizational justice-turnover intentions
linkage. International Journal of Stress Management, 20(3), 193.
Teo, T. (2011). Considering common method variance in educational technology research. British
Journal of Educational Technology, 42, 94-96.
*Tillman, C. J. (2011). Character, Conditions, and Cognitions: The Role of Personality, Climate,
Intensity, and Moral Disengagement in the Unethical Decision-Making Process (Doctoral
dissertation, The University of Alabama Tuscaloosa).
*Tiwana, A. (2008). Does technological modularity substitute for control? A study of alliance
performance in software outsourcing. Strategic Management Journal, 29(7), 769-780.
*Vlachos, P. A., Panagopoulos, N. G., & Rapp, A. A. (in press). Feeling Good by Doing Good:
Employee CSR-Induced Attributions, Job Satisfaction, and the Role of Charismatic
Leadership. Journal of Business Ethics, 1-12.
*Waldman, D. A., Javidan, M., & Varella, P. (2004). Charismatic leadership at the strategic level: A
new application of upper echelons theory. The Leadership Quarterly, 15(3), 355-380.

*Walsh, G., Bartikowski, B., & Beatty, S. E. (in press). Impact of customer-based corporate
reputation on nonmonetary and monetary outcomes: The roles of commitment and service
context risk. British Journal of Management.
*Wang, G., Harms, P. D., & Mackey, J. D. (2014). Does it take two to Tangle? Subordinates'
Perceptions of and Reactions to Abusive Supervision. Journal of Business Ethics.
*Wang, Y. D., & Hsieh, H. H. (2014). Employees' reactions to psychological contract breach: A
moderated mediation analysis. Journal of Vocational Behavior, 85(1), 57-66.
*Webster, J. R., Adams, G. A., & Beehr, T. A. (2014). Core work evaluation: The viability of a
higher-order work attitude construct. Journal of Vocational Behavior, 85(1), 27-38.
*Whiteley, P., Sy, T., & Johnson, S. K. (2012). Leaders' conceptions of followers: Implications for
naturally occurring Pygmalion effects. The Leadership Quarterly, 23(5), 822-834.
Williams, L. J. (2012). Equivalent models: Concepts, problems, and alternatives. In R. Hoyle (Ed.),
The Handbook of Structural Equation Modeling (pp. 247-260). Guilford.
Williams, L. J., Hartman, N., & Cavazotte, F. (2010). Method variance and marker variables: A
review and comprehensive CFA marker technique. Organizational Research Methods, 13,
477-514.
Wood, J. (2008). Methodology for dealing with duplicate study effects in a meta-analysis.
Organizational Research Methods, 11, 79-95.
*Wong, C. W. (2013). Leveraging Environmental Information Integration to Enable Environmental
Management Capability and Performance. Journal of Supply Chain Management, 49(2),
114-136.
*Wong, C. W., Lai, K. H., Shang, K. C., Lu, C. S., & Leung, T. K. P. (2012). Green operations and
the moderating role of environmental management capability of suppliers on manufacturing
firm performance. International Journal of Production Economics, 140(1), 283-294.
*Wong, C. W., Wong, C. Y., & Boon-itt, S. (2013). Green service practices: Performance
implications and the role of environmental management systems. Service Science, 5(1),
69-84.

*Yang, J., & Mossholder, K. W. (2010). Examining the effects of trust in leaders: A bases-and-foci
approach. The Leadership Quarterly, 21(1), 50-63.
*Yang, J., Mossholder, K. W., & Peng, T. K. (2009). Supervisory procedural justice effects: The
mediating roles of cognitive and affective trust. The Leadership Quarterly, 20(2), 143-154.
*Zhang, J., & Wu, W. P. (2012). Social capital and new product development outcomes: The
mediating role of sensing capability in Chinese high-tech firms. Journal of World Business
48, 539-548.
*Zhang, Q., & Zhou, K. Z. (in press). Governing interfirm knowledge transfer in the Chinese
market: The interplay of formal and informal mechanisms. Industrial Marketing
Management.

Figure 1. Example Latent Variable Marker Variable Model

[Path diagram: the Independent Variable (IV) with indicators x1-x4, the Dependent Variable (DV)
with indicators y1-y4, and the Marker Variable (MV) with indicators m1-m4. The latent variables
are connected by the factor correlations φIV,DV, φIV,M, and φDV,M, and the marker (method) factor
also loads on the substantive indicators.]

Note: Uniqueness terms (θ) as well as factor loading notations (λ) for those not included in
the covariance algebra used to calculate the scale correlations are omitted from Figure 1 for clarity.

Table 1. Error in values for φIV,DV and selected results from Richardson et al.

Condition Sets                                                  Pop.   Bias   Mean      Abs.   95% CI of    RSS
                                                                Err.          Sq. Err   Err.   Abs. Err.

1. CFA Marker Model: Ideal marker and no CMV
   φIV,DV = .00                                                 .00    .001   .004      .048   .043; .053   .07
   φIV,DV = .20, .40, .60                                       .00    .001   .004      .048   .043; .053   .06

2. CFA Marker Model: Ideal marker CMV present
   φIV,DV = .00                                                 .00    .001   .004      .048   .043; .053   .31
   φIV,DV = .20, .40, .60                                       .00    .001   .004      .048   .043; .053   .19

3. CFA Marker Model: Nonideal marker and no CMV
   φIV,M = .20 when φIV,DV = .00                                .04
   φIV,M = .40 when φIV,DV = .00                                .19
   φIV,M = .20 & .40 combined when φIV,DV = .00                 .12    .121   .019      .124   .113; .130   .11
   φIV,M = .20 when φIV,DV = .20, .40, .60                      .03
   φIV,M = .40 when φIV,DV = .20, .40, .60                      .11
   φIV,M = .20 & .40 combined when φIV,DV = .20, .40, .60       .07    .071   .009      .081   .063; .080   .08

4. CFA Marker Model: Nonideal marker CMV present
   φIV,M = .20 when φIV,DV = .00                                .04
   φIV,M = .40 when φIV,DV = .00                                .19
   φIV,M = .20 & .40 combined when φIV,DV = .00                 .12    .121   .019      .124   .113; .130   .17
   φIV,M = .20 when φIV,DV = .20, .40, .60                      .03
   φIV,M = .40 when φIV,DV = .20, .40, .60                      .11
   φIV,M = .20 & .40 combined when φIV,DV = .20, .40, .60       .07    .071   .009      .081   .063; .080   .12

5. CFA No Marker Model and no CMV
   φIV,DV = .00                                                 .00    .001   .004      .048   .043; .053   .05
   φIV,DV = .20, .40, .60                                       .00    .001   .004      .048   .043; .053   .09

6. CFA No Marker Model CMV present
   Ideal marker φIV,M = .00 when φIV,DV = .00                   .40
   Nonideal marker φIV,M = .20 when φIV,DV = .00                .49
   Nonideal marker φIV,M = .40 when φIV,DV = .00                .56
   φIV,M = .00, .20 & .40 combined when φIV,DV = .00            .48    .480   .232      .480   .474; .486   .32
   Ideal marker φIV,M = .00 when φIV,DV = .20, .40, .60         .24
   Nonideal marker φIV,M = .20 when φIV,DV = .20, .40, .60      .29
   Nonideal marker φIV,M = .40 when φIV,DV = .20, .40, .60      .33
   φIV,M = .00, .20 & .40 combined when φIV,DV = .20, .40, .60  .29    .290   .088      .290   .283; .298   .14

Note: Pop. Err. = error in substantive factor correlation φIV,DV; Bias = mean difference between population factor
correlation error and all sample estimates of error from the Monte Carlo analysis; Mean Sq. Err = average squared
deviation of the sample factor correlation error estimates from their population values; Abs. Err. = average absolute
difference between the population factor correlation error and the sample estimates; RSS = Richardson et al. (2009)
estimates; φIV,M = factor correlation between substantive (independent) latent variable and method latent variable; φIV,DV
= factor correlation between substantive latent variables.
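The error statistics defined in the note above are straightforward to compute from a set of Monte Carlo estimates. A minimal sketch (using hypothetical sample values, not the study's output):

```python
import numpy as np

def error_stats(sample_errors, population_error):
    """Bias, mean squared error, and mean absolute error of Monte Carlo
    error estimates relative to the population error, as defined in the
    note to Table 1."""
    diffs = np.asarray(sample_errors, dtype=float) - population_error
    bias = diffs.mean()                # mean deviation from the population error
    mean_sq_err = (diffs ** 2).mean()  # average squared deviation
    abs_err = np.abs(diffs).mean()     # average absolute deviation
    return bias, mean_sq_err, abs_err

# Hypothetical per-sample error estimates scattered around a population error of .04
rng = np.random.default_rng(0)
sample_errors = 0.04 + rng.normal(0.0, 0.05, size=500)
bias, mse, abs_err = error_stats(sample_errors, 0.04)
```

With many samples the bias shrinks toward zero while the absolute error reflects sampling variability, which is why the two statistics are reported separately in the table.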

Table 2. Selected population values and specification error for substantive and method factor loadings (λs, λm), and computed values for
the scale correlation between the substantive scale and method scale (rIVsMs)

For each φIV,M condition, the columns give the CFA Marker Model errors² (λs, λm), the CFA No Marker Model error (λs; its λm is n/a
because that model estimates no method loadings), and the scale correlation (Scale³).

item    S/M      True values       φIV,M = .000                      φIV,M = .200                      φIV,M = .400
rel.¹   ratio    λs      λm        λs²     λm      λs(NM)  Scale³    λs      λm      λs(NM)  Scale³    λs      λm      λs(NM)  Scale³
.368    100:00   .607    .000      .000    .000    .000    .000      .012    -.121   .000    .203      .051    -.243   .000    .406
        80:20    .543    .271      .000    .000    -.064   .184      .026    -.098   -.064   .324      .072    -.191   -.064   .459
        60:40    .470    .384      .000    .000    -.137   .338      .025    -.078   -.137   .429      .067    -.151   -.137   .515
        40:60    .384    .470      .000    .000    -.223   .473      .021    -.058   -.223   .522      .055    -.113   -.223   .570
.500    100:00   .707    .000      .000    .000    .000    .000      .014    -.141   .000    .253      .059    -.282   .000    .506
        80:20    .632    .316      .000    .000    -.075   .222      .036    -.110   -.075   .390      .094    -.212   -.075   .551
        60:40    .548    .447      .000    .000    -.159   .400      .036    -.084   -.159   .505      .089    -.162   -.159   .605
        40:60    .447    .548      .000    .000    -.260   .551      .029    -.060   -.260   .606      .073    -.117   -.260   .660
.692    100:00   .832    .000      .000    .000    .000    .000      .017    -.166   .000    .316      .070    -.333   .000    .632
        80:20    .744    .374      .000    .000    -.088   .268      .015    -.122   -.088   .467      .128    -.233   -.088   .657
        60:40    .645    .526      .000    .000    -.187   .467      .013    -.088   -.187   .588      .121    -.169   -.187   .702
        40:60    .526    .645      .000    .000    -.306   .632      .011    -.059   -.306   .694      .099    -.113   -.306   .752

¹ Item reliabilities of .368, .500, and .692 correspond to scale reliabilities of .70, .80, and .90, respectively, and error variances (θ) of .632, .500,
and .308, respectively.
² Positive values indicate a downward bias with the recovered estimate being less than the population parameter, and negative values indicate an
upward bias with the recovered estimate being more than the population parameter.
³ Bolded and underlined values in the published table indicate that the scale correlation from that condition is consistent with marker variable
correlations reported in the literature.
Note: φIV,M = factor correlation between substantive (independent) latent variable and method latent variable; item rel. = reliability of substantive
latent variables' indicators; S/M ratio = ratio of substantive to method variance used; λs = substantive factor loading; λm = method factor loading;
rIVsMVs = correlation between the independent variable scale and method variable scale.

Table 3. Meta-analytic results of marker variable usage

                  k     n       robs   rc.90  rc.80  rc.70  95% CI        99% CV         Var    VarSE   %VarSE   |Range|
Overall           120   69308   .065   .072   .081   .093   .051; .078    -.094; .223    .006   .002    31.4%    .00-.53
MV-construct Type
  a priori        89    56907   .061   .068   .076   .087   .046; .076    -.095; .217    .005   .002    30.0%    .00-.53
  post hoc        31    12401   .074   .082   .093   .106   .053; .110    -.081; .243    .006   .003    38.6%    .00-.42
Technique
  partial r       87    53934   .057   .063   .071   .081   .043; .072    -.092; .207    .005   .002    32.4%    .00-.50
  CFA             17    6805    .116   .129   .145   .166   .081; .150    -.020; .252    .005   .002    46.8%    .00-.53
Scaling
  same            91    37260   .092   .102   .115   .131   .074; .109    -.088; .271    .007   .002    33.3%    .00-.53
  different       24    27081   .029   .032   .036   .041   .013; .045    -.041; .098    .002   .001    55.2%    .00-.33

NOTE: k = number of independent samples; n = total sample size; robs = weighted mean correlation; rc = correlation corrected for
unreliability assuming Cronbach's alphas of .90, .80, and .70 for both marker and substantive variables; 95% CI = confidence interval;
99% CV = credibility interval; Var = observed variance in effect sizes; VarSE = variance attributable to sampling error; %VarSE =
percent of variance attributable to sampling error; |Range| = range of absolute correlations between marker scales and substantive
scales, excluding one outlier of .78.
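The rc columns apply the standard correction for attenuation; when the same alpha is assumed for both the marker and substantive scales, the corrected value reduces to robs divided by that alpha. A quick sketch reproducing the first row (robs = .065):

```python
import math

def correct_for_unreliability(r_obs, alpha_x, alpha_y):
    # Classic attenuation correction: the observed correlation divided by
    # the square root of the product of the two scale reliabilities.
    return r_obs / math.sqrt(alpha_x * alpha_y)

# First row of Table 3: robs = .065 corrected assuming alphas of .90, .80, .70
corrected = [correct_for_unreliability(0.065, a, a) for a in (0.90, 0.80, 0.70)]
# rounds to .072, .081, .093, matching the rc.90, rc.80, rc.70 columns above
```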

Table 4. Error results for conditions representative of organizational research

Pop.    Conditions                      Mean      Abs.    95% CI of
Err.    represented   N       Bias      Sq. Err   Err.    Abs. Err.
.000    7             100     .009      .008      .073    .066; .081
                      300     -.003     .003      .044    .039; .049
                      1000    -.002     .001      .026    .024; .029
.017                  100     .026      .009      .076    .068; .083
                      300     .014      .003      .046    .041; .051
                      1000    .015      .001      .029    .026; .032
.025                  100     .034      .009      .078    .070; .086
                      300     .022      .004      .048    .043; .053
                      1000    .023      .002      .033    .030; .036
.033                  100     .043      .010      .081    .073; .089
                      300     .030      .004      .052    .047; .057
                      1000    .031      .002      .038    .034; .041
.042                  100     .051      .011      .084    .076; .093
                      300     .039      .005      .056    .051; .062
                      1000    .040      .003      .044    .040; .048

Note: Conditions represented = number of conditions within range of meta-analytically derived
marker-substantive scale relations estimates; Pop. Err. = error in substantive factor correlation
φIV,DV; Bias = mean difference between population factor correlation error and all sample estimates
of error from the Monte Carlo analysis; Mean Sq. Err = average squared deviation of the sample
factor correlation error estimates from their population values; Abs. Err. = average absolute
difference between the population factor correlation error and the sample estimates.

Appendix A: Comparison of Analytical and Monte Carlo Simulation Methods

                                 Ideal Marker          Nonideal Marker
                                 (True Model)          (Misspecified Model)
Analytical Sim. (population)     (1)                   (3)
Monte Carlo (samples)            (2)                   (4)

Definition of terms:
(cell 1) population substantive IV-DV factor correlation from True Model
(cell 2) average of sample estimates of true substantive factor correlation from Monte Carlo design
(cell 3) population substantive IV-DV factor correlation from Misspecified Model
(cell 4) average of sample estimates of true substantive factor correlation from Misspecified Model
from Monte Carlo design

Comparison:
Cell 2 can differ from cell 1 in a Monte Carlo design, but the difference gets smaller as the number
of samples and the size of each sample get larger; the difference approaches zero asymptotically. Thus, values
for cell 2 will converge to those of cell 1. The same is true of misspecified models: cell 4
can differ from cell 3 in a Monte Carlo design, but the difference gets smaller as the number of
samples and the size of each sample get larger. Thus, values for cell 4 converge to those of cell 3.
An analytical simulation compares cell 1 vs. cell 3 to investigate the effects of assuming an ideal (orthogonal)
marker when it is really nonideal, resulting in error due to model misspecification. A Monte Carlo
simulation compares cell 1 vs. cell 4 to examine the consequences of specification error and sampling error.
If cell 4 values approach cell 3 values (as explained above), then the results of the comparison of cell 1 vs.
cell 3 will approach the results of the comparison of cell 1 vs. cell 4, provided the number of
samples and sample sizes of the Monte Carlo design are adequate. Thus, the analytical simulation and the
Monte Carlo approaches converge to yield comparable findings regarding the impact of specification error
on model parameters. Said differently, the analytical simulation comparison of cells 1 and 3 should yield
conclusions comparable to the Monte Carlo-based comparison of cells 1 and 4.

Appendix B. Formulas to compute scale correlations (rXM) based on model parameters

Equation 1: relates four types of item covariances: (a) among the marker indicators,
(b) between the marker and independent variable indicators, (c) between the independent
variable indicators x1 and x2, and (d) between the independent and dependent variable
indicators. Equation 1e shows the rule for computing the variance of the first substantive
indicator.
(a) COV(m1m2) = λ1λ2
(b) COV(x1m1) = λ1λ5 + λ1φIV,Mλ3
(c) COV(x1x2) = λ3λ4 + λ5λ6 + λ3φIV,Mλ6 + λ4φIV,Mλ5
(d) COV(x1y1) = λ5λ8 + λ3φIV,DVλ7 + λ3φIV,Mλ8 + λ5φDV,Mλ7
(e) V(x1) = λ3² + λ5² + 2λ3φIV,Mλ5 + θ1
Equation 2: used to compute the covariance between the independent variable scale (IVs) and
marker variable scale (Ms) as the sum of the 16 covariances computed following the
examples from Equation 1.
COV(IVsMs) = COV(x1m1) + COV(x1m2) + COV(x1m3) + COV(x1m4) + COV(x2m1) +
COV(x2m2) + COV(x2m3) + COV(x2m4) + COV(x3m1) + COV(x3m2) + COV(x3m3) +
COV(x3m4) + COV(x4m1) + COV(x4m2) + COV(x4m3) + COV(x4m4)
Equation 3: used to compute (a) the variance of the independent variable scale (IVs) using the
variances and covariances of its four substantive indicators, and (b) the variance of the marker
variable scale (Ms) using the variances and covariances of its four indicators.
(a)
V(IVs) = V(x1) + V(x2) + V(x3) + V(x4) + 2[COV(x1x2) + COV(x1x3) + COV(x1x4) +
COV(x2x3) + COV(x2x4) + COV(x3x4)]
(b)
V(Ms) = V(m1) + V(m2) + V(m3) + V(m4) + 2[COV(m1m2) + COV(m1m3) +
COV(m1m4) + COV(m2m3) + COV(m2m4) + COV(m3m4)]
Equation 4: used to compute the correlation between the IV and MV scales from their
covariance and the variances of the two scales.
rIVsMVs = COV(IVsMVs) / [V(IVs)^(1/2) × V(MVs)^(1/2)]
Note: IVs = independent variable scale; Ms = four-item method variable scale; x1-x4 =
individual items for IVs; m1-m4 = individual items for Ms; λs = substantive factor loading for
x1 through x4; λm = method factor loading for x1 through x4; θ = uniqueness/error variance
of individual items; φIV,M = factor correlation between method latent variable and substantive
latent variable; rIVsMVs = scale correlation; V = variance; COV = covariance.

Appendix C. Population values and specification error for all parameters for CFA Marker Model

Error in the substantive factor correlation (φIV,DV error)¹ is determined by φIV,M and φIV,DV and is
identical across item reliabilities and S/M ratios:

φIV,DV     φIV,M = .000    φIV,M = .200    φIV,M = .400
.000       .000            .042            .190
.200       .000            .033            .152
.400       .000            .025            .114
.600       .000            .017            .076

Errors in the substantive factor loading (λs), substantive error variance (θ), and marker variable
factor loading (λm) depend on item reliability, S/M ratio, and φIV,M but not on φIV,DV. When
φIV,M = .000 (ideal marker) all of these errors are .000; for the nonideal marker conditions:

item              φIV,M = .200                φIV,M = .400
reliab.   S/M     λs       θ       λm         λs       θ       λm
.368      100/0   .012     .000    -.121      .051     .000    -.243
          80/20   .026     .035    -.098      .072     .067    -.191
          60/40   .025     .043    -.078      .067     .080    -.151
          40/60   .021     .043    -.058      .055     .080    -.113
.500      100/0   .014     .000    -.141      .059     .000    -.282
          80/20   .036     .037    -.110      .094     .069    -.212
          60/40   .036     .045    -.084      .089     .082    -.162
          40/60   .029     .045    -.060      .073     .082    -.117
.692      100/0   .017     .000    -.166      .070     .000    -.333
          80/20   .053     .031    -.122      .128     .056    -.233
          60/40   .052     .037    -.088      .121     .066    -.169
          40/60   .043     .037    -.059      .099     .066    -.113

¹ Positive values indicate a downward bias with the recovered estimate being less than the population parameter and negative values
indicate an upward bias with the recovered estimate being more than the population parameter.

Note: pop. = population; reliab. = reliability; S/M ratio = substantive to method variance; λs = substantive factor loading; θ =
substantive error variance; λm = marker variable factor loading; φIV,DV = factor correlation between substantive latent variables; rIVsMs =
scale correlation between marker variable and substantive variable.

Appendix D. Population values and specification error for all parameters for CFA No Marker Model
True parameter values cross three item-reliability levels, the substantive-to-method variance ratio (100/0, 80/20, 60/40, 40/60), and φIV,DV (.000, .200, .400, .600), with φIV,M set to .000, .200, or .400. For each cell the table reports the specification error in φIV,DV, λs, and θ. In the 100/0 (no method variance) conditions all errors are .000; all nonzero errors are negative, indicating an upward bias that grows in magnitude as the method-variance share and φIV,M increase.

[Cell-level values are not reproduced here: the column-by-column extraction did not preserve row alignment.]

1 Positive values indicate a downward bias (the recovered estimate is less than the population parameter); negative values indicate an upward bias (the recovered estimate is greater than the population parameter).

Note: pop. = population, reliab. = reliability, S/M ratio = substantive to method variance, λs = substantive factor loading, θ =
substantive error variance, λm = marker variable factor loading, φIV,DV = factor correlation between substantive latent variables.
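The footnote's sign convention, specification error = population value minus recovered estimate, can be stated in two lines; the helper below is an illustrative sketch, not code from the study:

```python
def specification_error(population: float, estimate: float) -> float:
    """Specification error as defined in these appendices:
    positive  -> downward bias (estimate below the population value),
    negative  -> upward bias  (estimate above the population value)."""
    return population - estimate

# A no-marker model that inflates a .400 factor correlation to .640
# yields a negative specification error, i.e. an upward bias:
err = specification_error(0.400, 0.640)  # approx. -.240
```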

Appendix E. Effect Sizes of Included Studies


The table reports, for each study: the marker variable used; the analysis strategy (partial r, CFA, stat. sign., or control); the timing of marker selection (a priori vs. post hoc); whether the marker used the same or a different (diff.) response scaling as the substantive variables; n; k; and the minimum, maximum, and mean (rmean) marker-substantive correlations.

Studies included (in order): Afsar & Saeed; Agarwal et al (a); Agarwal et al (b); Alegre et al; Alge et al; Allam; Ashill & Jobber; Bal et al (a); Bal et al (b); Bammens et al; Bernerth et al (a); Bernerth et al (b); Bhattacharya & Swain; Birkin; Bock et al; Boichuk; Booth et al; Boso et al; Boso et al; Bowling; Brees; Bustamante; Cameron & Webster; Cardon et al; Carver; Chang et al; Che-Ha et al; Chiaburu & Carpenter; Choi; Ciabuschi & Martin; Ciabuschi et al (a); Ciabuschi et al (b); Crespo et al; Crossley et al; De Clercq et al; Delgado-García et al; Densten & Sarros; Dunham; Engelen et al; Frenkel et al; Goffin & Anderson; Goldberg et al; Gray & Densten; Hallin et al; Harris et al; Hau et al; Haynie; Heidenreich et al; Hoegele et al; Homburg et al; Hong et al; Jaramillo et al; Jiang et al; Jimmieson & White; Jimmieson et al; Johnson et al; Johnson et al; Ju et al; Kane et al; Katsikea et al; Kavanagh et al; Kim et al; Kim et al; Kim et al; Klijn et al; Knoll & Gill; Kotabe et al; Kovjanic et al (study 1); Kovjanic et al (study 2); Krishnaveni & Deepa; Lages & Piercy; Lazer; Li; Lin et al; Love et al; Lui & Ngo; Mehta; Meng et al; Moorman et al; Ngo & O'Cass; Niedle; O'Cass & Ngo; O'Cass & Sok; Panagopoulos et al; Parker et al; Pérez-Nordtvedt et al; Poon; Poon et al; Rafferty & Griffin; Rafferty & Griffin; Rafferty & Jimmieson; Raffiee et al; Rutten; Salter et al; Schneider et al; Seong et al; Sexton; Shipton et al; Shou et al; Sirén et al; Strauss et al; Strobel et al; Tayfur et al; Tillman; Tiwana; Vlachos et al; Waldman et al; Walsh & Bartikowski; Wang & Hseih; Wang et al; Webster et al; Whiteley et al; Wong; Wong et al; Wong et al; Yang & Mossholder; Yang et al; Yu; Zhang & Wu; Zhang & Zhou.

[Per-study cell values are not reproduced here: the column-by-column extraction of the source table did not preserve row alignment.]

Note: partial r = study used the partial correlation marker technique described in Lindell and Whitney
(2001), CFA = study used the CFA Marker Technique, stat. sign. = study used the statistical significance (or
lack thereof) of the correlations between the marker variable and substantive variables as a test for CMV, NR = not reported, n
= sample size, rmean = average correlation between marker and substantive variables.
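For reference, the two quantities behind the "partial r" and rmean columns can be sketched as follows. The adjustment formula is the one given by Lindell and Whitney (2001); the function names, and the use of absolute values in the mean, are our illustrative assumptions:

```python
def cmv_adjusted_r(r_xy: float, r_marker: float) -> float:
    """Lindell & Whitney (2001) partial-correlation adjustment:
    treat r_marker (the marker's correlation with the substantive
    variables, typically its smallest one) as an estimate of common
    method variance and partial it out of the observed correlation."""
    return (r_xy - r_marker) / (1.0 - r_marker)

def r_mean(marker_rs) -> float:
    """rmean as defined in the table note: the average correlation
    between the marker and the substantive variables (taken here as
    the mean of absolute values -- an assumption of this sketch)."""
    return sum(abs(r) for r in marker_rs) / len(marker_rs)

# Example: an observed r of .30 with a marker correlation of .05
adjusted = cmv_adjusted_r(0.30, 0.05)  # approx. .263
```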
