
Apr-17

Introduction to Meta-Analysis

Data Analysis II

Ahmet H. Kirca, Ph.D.


Associate Professor
Michigan State University

Mean Effect Size Calculation (for Correlation Coefficients) and Transformations

ES_Zr = .5 × ln[(1 + r) / (1 − r)]

Correlation    Reliability of   Reliability of   Sample   ES Zr
coefficient    Measure 1        Measure 2        Size
 0.100         999              0.85              62       0.100
 0.150         0.86             0.90              62       0.151
-0.028         999              999              213      -0.028
 0.120         0.75             0.90             115       0.121
 0.500         999              999              213       0.549
 0.350         999              999              156       0.365
 0.240         999              999              156       0.245

(999 indicates a missing reliability value.)
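The Fisher z transformation above can be sketched in a few lines of Python (a minimal illustration; the function name is mine). Note how small correlations are nearly unchanged while larger ones shift upward, matching the table:

```python
import math

def fisher_z(r):
    """Fisher's z transformation: ES_Zr = 0.5 * ln((1 + r) / (1 - r))."""
    return 0.5 * math.log((1 + r) / (1 - r))

# Values from the table above
print(f"{fisher_z(0.100):.3f}")  # 0.100
print(f"{fisher_z(0.500):.3f}")  # 0.549
print(f"{fisher_z(0.350):.3f}")  # 0.365
```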


Adjustments (for Measurement Unreliability)

Reliability of   Reliability of   Sample   ES Zr    Reliability-adjusted
Measure 1        Measure 2        Size              ES Zr
999              0.85              62      0.100     0.109
0.86             0.90              62      0.151     0.172
999              999              213     -0.028    -0.028
0.75             0.90             115      0.121     0.147
999              999              213      0.549     0.549
999              999              156      0.365     0.365
999              999              156      0.245     0.245
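One way to reproduce the adjusted values is to correct each correlation for attenuation (dividing by the square root of the product of the two reliabilities) before applying Fisher's z. This is a sketch under the assumption, based on the table, that 999 codes a missing reliability and triggers no correction; the function names are mine:

```python
import math

def fisher_z(r):
    """Fisher's z transformation of a correlation r."""
    return 0.5 * math.log((1 + r) / (1 - r))

def adjusted_es(r, rel1, rel2, missing=999):
    """Correct r for attenuation by dividing by sqrt(rel1 * rel2),
    then z-transform. A reliability coded 999 is treated as missing,
    i.e., no correction is applied for that measure (assumption)."""
    r1 = 1.0 if rel1 == missing else rel1
    r2 = 1.0 if rel2 == missing else rel2
    return fisher_z(r / math.sqrt(r1 * r2))

# Rows from the table above
print(f"{adjusted_es(0.100, 999, 0.85):.3f}")  # 0.109
print(f"{adjusted_es(0.150, 0.86, 0.90):.3f}")  # 0.172
print(f"{adjusted_es(0.120, 0.75, 0.90):.3f}")  # 0.147
```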

The Weighted Mean Effect Size (for Correlation Coefficients)

Reliability-adjusted
ES Zr      n-3     se      w       w*ES
 0.109      59    0.130     59      6.42
 0.172      59    0.130     59     10.14
-0.028     210    0.069    210     -5.88
 0.147     112    0.094    112     16.44
 0.549     210    0.069    210    115.35
 0.365     153    0.081    153     55.91
 0.245     153    0.081    153     37.45

N = 977    Sum w*ES = 235.83    Sum w = 956    Average ES = 0.2467

se = 1 / √(n − 3)

w = 1 / se² = n − 3

Mean ES = Σ(w × ES) / Σw
Source: Lipsey and Wilson (2001), Practical Meta-Analysis by Sage
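The weighted mean computation above can be sketched directly from the table (the rounding of the inputs means the result matches the slide's 0.2467 only approximately):

```python
# Reliability-adjusted effect sizes and sample sizes from the table above
es = [0.109, 0.172, -0.028, 0.147, 0.549, 0.365, 0.245]
n  = [62, 62, 213, 115, 213, 156, 156]

w = [ni - 3 for ni in n]  # inverse-variance weight: w = 1/se^2 = n - 3
mean_es = sum(wi * esi for wi, esi in zip(w, es)) / sum(w)

print(sum(w))              # 956
print(f"{mean_es:.3f}")    # 0.247 (the slide's 0.2467, up to input rounding)
```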


The Standard Error of the Mean ES

Reliability-adjusted
ES Zr      n-3     se      w       w*ES
 0.109      59    0.130     59      6.42
 0.172      59    0.130     59     10.14
-0.028     210    0.069    210     -5.88
 0.147     112    0.094    112     16.44
 0.549     210    0.069    210    115.35
 0.365     153    0.081    153     55.91
 0.245     153    0.081    153     37.45

N = 977    Sum w*ES = 235.83    Sum w = 956    Average ES = 0.2467    se of ES = 0.03234

se_ES = √(1 / Σw) = √(1 / 956) = 0.032

The standard error of the mean is the square root of 1 divided by the sum of the weights.
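This step is a one-liner; using the summed weight from the table:

```python
import math

sum_w = 956  # sum of the weights from the table above
se_mean = math.sqrt(1 / sum_w)
print(f"{se_mean:.5f}")  # 0.03234
```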

Significance of the Mean ES and Confidence Intervals

Average   se of     z-test for     95% CI        95% CI
ES        ES        the Mean ES    Lower bound   Upper bound
0.2467    0.03234   7.6273         0.183         0.310

Z-test for the Mean ES:

Z = ES / se_ES = 0.2467 / 0.03234 = 7.63

95% Confidence Interval:

Lower = ES − 1.96(se_ES) = 0.247 − 1.96(.032) = 0.183
Upper = ES + 1.96(se_ES) = 0.247 + 1.96(.032) = 0.310

Source: Lipsey and Wilson (2001), Practical Meta-Analysis by Sage
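The z-test and confidence interval reduce to simple arithmetic on the two summary numbers:

```python
mean_es = 0.2467   # weighted mean effect size from the table
se = 0.03234       # standard error of the mean ES

z = mean_es / se                 # z-test for the mean ES
lower = mean_es - 1.96 * se      # 95% CI lower bound
upper = mean_es + 1.96 * se      # 95% CI upper bound

print(f"{z:.2f}")      # 7.63
print(f"{lower:.3f}")  # 0.183
print(f"{upper:.3f}")  # 0.310
```

Because the interval excludes zero (and |Z| > 1.96), the mean effect size is significant at the .05 level.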


Q - The Homogeneity Statistic

Q-value
 1.1213
 0.3309
15.8457
 1.1182
19.2317
 2.1579
 0.0006
Total Q = 39.8062

The homogeneity test is based on the Q statistic, which is distributed as a chi-square with k − 1 degrees of freedom, where k is the number of effect sizes (Hedges & Olkin, 1985).

Source: Lipsey and Wilson (2001), Practical Meta-Analysis by Sage
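Each study's contribution to Q is its weight times the squared deviation of its effect size from the weighted mean. A minimal sketch using the running example (the rounded inputs reproduce the slide's total of 39.81 only approximately):

```python
# Q = sum of w_i * (ES_i - mean_ES)^2, compared against a chi-square
# distribution with k - 1 degrees of freedom
es = [0.109, 0.172, -0.028, 0.147, 0.549, 0.365, 0.245]
w  = [59, 59, 210, 112, 210, 153, 153]

mean_es = sum(wi * e for wi, e in zip(w, es)) / sum(w)
q = sum(wi * (e - mean_es) ** 2 for wi, e in zip(w, es))

print(f"{q:.2f}")  # close to the slide's total Q of 39.81 (inputs are rounded)
```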

Homogeneity Analysis

Homogeneity analysis tests whether it is reasonable to assume that all of the effect sizes estimate the same population mean.

If homogeneity is rejected, the distribution of effect sizes is assumed to be heterogeneous:
- A single mean ES is not a good descriptor of the distribution.
- There are real between-study differences; that is, studies estimate different population mean effect sizes.

Two options:
- model between-study differences
- fit a random-effects model

Source: Lipsey and Wilson (2001), Practical Meta-Analysis by Sage


Heterogeneous Distributions

Analyze excess between-study (ES) variability:
- categorical variables with the analog to the one-way ANOVA
- continuous variables and/or multiple variables with weighted multiple regression

Or assume the variability is random and fit a random-effects model.

Source: Lipsey and Wilson (2001), Practical Meta-Analysis by Sage

Analyzing Heterogeneous Distributions: The Analog to the One-Way ANOVA (for Mean Differences)

Calculate the three sums for each subgroup of effect sizes, defined by a grouping variable (e.g., random vs. nonrandom):

Study   Grp    ES       w       w*ES    w*ES^2
1       1     -0.33    11.91   -3.93    1.30
2       1      0.32    28.57    9.14    2.93
3       1      0.39    58.82   22.94    8.95
4       1      0.31    29.41    9.12    2.83
5       1      0.17    13.89    2.36    0.40
6       1      0.64     8.55    5.47    3.50
Group 1 sums:         151.15   45.10   19.90

7       2     -0.33     9.80   -3.24    1.07
8       2      0.15    10.75    1.61    0.24
9       2     -0.02    83.33   -1.67    0.03
10      2      0.00    14.93    0.00    0.00
Group 2 sums:         118.82   -3.29    1.34

Source: Lipsey and Wilson (2001), Practical Meta-Analysis by Sage


Analyzing Heterogeneous Distributions: The Analog to the ANOVA (for Mean Differences)

Calculate a separate Q for each group:

Q_GROUP_1 = 19.90 − (45.10)² / 151.15 = 6.44

Q_GROUP_2 = 1.34 − (−3.29)² / 118.82 = 1.25

Source: Lipsey and Wilson (2001), Practical Meta-Analysis by Sage
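Each group's Q follows the same computational formula, using only the three subgroup sums (the function name is mine):

```python
def group_q(sum_w_es2, sum_w_es, sum_w):
    """Q for one subgroup from its three sums:
    sum(w*ES^2) - (sum(w*ES))^2 / sum(w)."""
    return sum_w_es2 - sum_w_es ** 2 / sum_w

# Subgroup sums from the analog-to-ANOVA table
print(f"{group_q(19.90, 45.10, 151.15):.2f}")  # 6.44
print(f"{group_q(1.34, -3.29, 118.82):.2f}")   # 1.25
```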

Analyzing Heterogeneous Distributions: The Analog to the ANOVA (for Mean Differences)

The sum of the individual group Qs is the Q within:

Q_W = Q_GROUP_1 + Q_GROUP_2 = 6.44 + 1.25 = 7.69

df = k − j = 10 − 2 = 8, where k is the number of effect sizes and j is the number of groups.

The difference between the Q total and the Q within is the Q between:

Q_B = Q_T − Q_W = 14.76 − 7.69 = 7.07

df = j − 1 = 2 − 1 = 1, where j is the number of groups.

Source: Lipsey and Wilson (2001), Practical Meta-Analysis by Sage
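The partitioning of Q is just addition and subtraction on the quantities computed so far:

```python
q_group_1, q_group_2 = 6.44, 1.25   # group Qs from the previous step
q_total = 14.76                     # total Q for this example
k, j = 10, 2                        # k effect sizes, j groups

q_within = q_group_1 + q_group_2    # Q within, df = k - j
q_between = q_total - q_within      # Q between, df = j - 1

print(f"{q_within:.2f}", k - j)     # 7.69 8
print(f"{q_between:.2f}", j - 1)    # 7.07 1
```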


Analyzing Heterogeneous Distributions: The Analog to the ANOVA (for Mean Differences)

All we did was partition the overall Q into two pieces: a within-groups Q and a between-groups Q.

Q_B = 7.07     df_B = 1    Q_CV_.05(1) = 3.84     p_B < .05
Q_W = 7.69     df_W = 8    Q_CV_.05(8) = 15.51    p_W > .05
Q_T = 14.76    df_T = 9    Q_CV_.05(9) = 16.92    p_T > .05

The grouping variable accounts for significant variability in effect sizes.

Source: Lipsey and Wilson (2001), Practical Meta-Analysis by Sage

Mean ES for each Group (for Mean Differences)

The mean ES, standard error, and confidence intervals can be calculated for each group:

ES_GROUP_1 = Σ(w × ES) / Σw = 45.10 / 151.15 = 0.30

ES_GROUP_2 = Σ(w × ES) / Σw = −3.29 / 118.82 = −0.03

Source: Lipsey and Wilson (2001), Practical Meta-Analysis by Sage
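The group means come straight from the subgroup sums already computed:

```python
# Mean ES per group = sum(w*ES) / sum(w), using the subgroup sums above
mean_g1 = 45.10 / 151.15
mean_g2 = -3.29 / 118.82

print(f"{mean_g1:.2f}")  # 0.30
print(f"{mean_g2:.2f}")  # -0.03
```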


Mean ES for each Group (for Correlation Coefficients)

- Same as before: calculate the mean ES, standard error, and confidence intervals for each group.
- Compare confidence intervals to see whether there is overlap.
- Calculate Q statistics for each group.
- Remember, these are multiple comparisons and bivariate assessments.
- See Joshi and Roh (2009), Tables 1, 2, and 3.

Source: Lipsey and Wilson (2001), Practical Meta-Analysis by Sage

Analyzing Heterogeneous Distributions: Multiple Regression Analysis

The analog to the ANOVA is restricted to a single categorical between-studies variable. What if you are interested in a continuous variable, or in multiple between-study variables? Use multiple regression analysis:
- You can use canned programs (e.g., SPSS, SAS): the parameter estimates are correct (R-squared, B weights, etc.), but the F-tests, t-tests, and associated probabilities are incorrect.
- You can use the Wilson/Lipsey SPSS macros, which give correct parameter estimates and probability values.

Source: Lipsey and Wilson (2001), Practical Meta-Analysis by Sage
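A minimal sketch of the weighted-regression idea, with hypothetical data (the effect sizes, moderator codes, and weights below are invented for illustration): each study's effect size is regressed on a moderator using inverse-variance weights. The parameter estimates here match what a canned WLS routine would return; as the slide notes, the standard errors and p-values from such routines are not correct for meta-analysis and need the rescaling that the Wilson/Lipsey macros perform.

```python
# Weighted regression of effect sizes on a single dummy-coded moderator,
# via the closed-form weighted least-squares formulas (hypothetical data)
es = [0.10, 0.25, 0.40, 0.15, 0.35]   # hypothetical effect sizes
x  = [0, 1, 1, 0, 1]                  # hypothetical moderator (dummy-coded)
w  = [59, 112, 153, 97, 210]          # inverse-variance weights (n - 3)

# Weighted means of x and the effect sizes
xbar = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
ybar = sum(wi * yi for wi, yi in zip(w, es)) / sum(w)

# Weighted slope and intercept
b1 = (sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, es))
      / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x)))
b0 = ybar - b1 * xbar

print(f"intercept={b0:.3f} slope={b1:.3f}")
```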


Multiple Regression Analysis

Bivariate vs. OLS vs. WLS vs. HLM (see Sagie and Koslowsky 1993, Personnel Psychology)

Both methods analyze the effects of moderators by performing multiple regression analysis in which the observed effect sizes are the dependent variable and the levels of all moderators for each study are the independent variables:

Z = β0 + β1X1 + β2X2 + β3X3 + … + εi

where Z is the z-transformed value of the sample-size-weighted correlation between multinationality and performance, the βs are parameter estimates, and the Xi are the moderators (dummy-coded categorical variables and continuous variables).

Random Effects Models

Meta-analytic techniques are evolving. Three reasons to use a random-effects model:
- The total Q is significant, and you assume that the excess variability across effect sizes derives from random differences across studies (sources you cannot identify or measure).
- The Q within from an analog to the ANOVA is significant.
- The Q residual from a weighted multiple regression analysis is significant.
Source: Lipsey and Wilson (2001), Practical Meta-Analysis by Sage
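One common way to fit a random-effects model (not spelled out in these slides, so treat this as an illustrative assumption) is the DerSimonian-Laird method-of-moments estimate of the between-study variance, tau². Sketched here with the running correlation example: when Q exceeds its degrees of freedom, the excess is converted into tau², which is added to each study's variance before re-weighting.

```python
# DerSimonian-Laird estimate of the between-study variance (tau^2),
# using the fixed-effect weights and total Q from the correlation example
w = [59, 59, 210, 112, 210, 153, 153]   # fixed-effect weights (n - 3)
q, k = 39.81, 7                          # total Q and number of effect sizes

c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (k - 1)) / c)       # zero if Q is below its df

# Random-effects weights: each study's variance grows by tau^2,
# so every weight shrinks relative to the fixed-effect weight
w_star = [1 / (1 / wi + tau2) for wi in w]
print(f"{tau2:.4f}")
```

Because the new weights are smaller and more nearly equal, the resulting confidence interval is wider, which is the conservatism discussed below.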


Random vs. Fixed Effects Models

The fixed-effects model assumes that all of the variability between effect sizes is due to sampling error. (If the differences between studies that lead to differences in effects are not regarded as random, i.e., if they are regarded as consequences of purposeful design decisions, then fixed-effects methods are appropriate for the analysis.)

The random-effects model assumes that the variability between effect sizes is due to sampling error plus variability in the population of effects. (The differences in effects are regarded as consequences of a process that cannot be predicted in advance, i.e., the sources of influence on the outcome are both numerous and unidentifiable.) The true effect sizes under study are sampled from a larger population of effect sizes.

Comparison of Random-Effects with Fixed-Effects Results

The biggest difference is in the significance levels and confidence intervals:
- Confidence intervals will get bigger.
- Effects that were significant under a fixed-effects model may no longer be significant.
- Random-effects models are therefore more conservative.

Source: Lipsey and Wilson (2001), Practical Meta-Analysis by Sage


Random Effects with HLM

Source: Krasnikov and Jayachandran (2008, JM)

[Figure: HLM model specification, with dummies for moderators at Level 2]


Recommendations for the Data Analysis Stage

- Conduct univariate analyses of mean effect sizes and report the findings in a table (i.e., a descriptive statistics table).
- Search for moderators using bivariate and multivariate analyses of effect sizes.
- Use random-effects models unless you have a strong reason to believe that fixed-effects models are more appropriate for your analyses.
- Despite its limitations, try to analyze the meta-analytic correlation matrix for theory-testing purposes, when appropriate.

Source: Lipsey and Wilson (2001), Practical Meta-Analysis by Sage

Discussion
