from Rudner, Glass, Evartt, & Emery (2002). *A user's guide to the meta-analysis of research studies*. College Park, MD: ERIC Clearinghouse on Assessment and Evaluation.

9. Technical Information

This chapter outlines various equations and procedures used in **Meta-Stat**. The equations are based on generally accepted statistical principles and on methods described in Glass, McGaw, and Smith (1981), Hunter and Schmidt (1990), Cohen (1977), Veldman (1967), Hedges and Olkin (1985), Draper and Smith (1966), and SPSS (1991). The reader is referred to these excellent materials for a fuller development of the technical approaches used in **Meta-Stat**.

Conversion Formulas

Meta-analysis is based on placing the results from different research studies on a common metric. **Meta-Stat** allows the user to use either effect size or correlation as that common metric. To calculate effect size from group means and standard deviations:

$ES = \dfrac{\bar{X}_E - \bar{X}_C}{s_C}$

where $\bar{X}_E$ and $\bar{X}_C$ are the experimental and control group means and $s_C$ is the control-group standard deviation (Glass, McGaw, & Smith, 1981).

The following formula is used to convert from an **r** to effect size:

$ES = \dfrac{2r}{\sqrt{1 - r^2}}$

To derive **r** from a *t*-statistic:

$r = \sqrt{\dfrac{t^2}{t^2 + df}}$

To derive **r** from an *F* ratio:

$r = \sqrt{\dfrac{F}{F + df}}$

Here *df* refers to the denominator degrees of freedom. The equation is only appropriate when there is 1 degree of freedom in the numerator.

To derive **r** from a chi-square statistic with 1 degree of freedom:

$r = \sqrt{\dfrac{\chi^2}{n}}$

To derive **r** from a probability level (*p*), the probability is first converted to its standard normal deviate *z*, and then:

$r = \dfrac{z}{\sqrt{n}}$

To compute effect size as parametric gain scores:

To compute effect size from a paired comparison t-test:

where $\bar{d}$ is the mean difference between the observed pairs.

This last equation is from Gibbons, Hedeker, and Davis (1993).
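The conversion formulas in this section are straightforward to verify numerically. The following is a minimal Python sketch; the function names are ours for illustration and are not Meta-Stat code:

```python
import math

def d_from_r(r):
    """Effect size from a correlation: d = 2r / sqrt(1 - r^2)."""
    return 2.0 * r / math.sqrt(1.0 - r ** 2)

def r_from_d(d):
    """Correlation from an effect size: r = d / sqrt(d^2 + 4)."""
    return d / math.sqrt(d ** 2 + 4.0)

def r_from_t(t, df):
    """Correlation from a t-statistic: r = sqrt(t^2 / (t^2 + df))."""
    return math.sqrt(t ** 2 / (t ** 2 + df))

def r_from_f(f, df_denom):
    """Correlation from an F ratio with 1 numerator df: r = sqrt(F / (F + df))."""
    return math.sqrt(f / (f + df_denom))

def r_from_chi2(chi2, n):
    """Correlation from a 1-df chi-square: r = sqrt(chi2 / n)."""
    return math.sqrt(chi2 / n)
```

Note that the *t* and *F* conversions agree when F = t², which is a useful internal check.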

Pre-Coded Variables

There are precoded equations in both the effect size and correlational meta-analysis modules. Here, we discuss these equations and outline their use.

Effect Size Meta-Analysis

Two pre-coded variables are used in the effect size meta-analysis module: UNBIASED and WEIGHT. Both are based on the work of Hedges and Olkin (1985).

The unbiased effect size, *d*, is used in several homogeneity calculations and as the basis for the effect size plot. It should be noted that the unbiased *d* usually differs only slightly from the biased estimate *g*, especially as N gets large (see Hedges and Olkin, 1985, pp. 78-79). Taylor and White (1993) found that *d* and *g* produce the same conclusions.

The variable WEIGHT is the inverse of the variance of the effect size; the variance itself is used in the effect size plot. The optimal weight when analyzing *d* and estimating the true effect size is the inverse of the variance, so the most precise studies are given the greatest weight (see Hedges and Olkin, 1985, pp. 302-304).
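A sketch of these two pre-coded variables, assuming the standard Hedges and Olkin small-sample correction and large-sample variance for *d* (here `n_e` and `n_c` denote the experimental and control group sizes; the function names are ours):

```python
import math

def unbiased_d(g, n_e, n_c):
    """Hedges & Olkin small-sample correction of the biased estimate g:
    d = g * (1 - 3 / (4*m - 1)), where m = n_e + n_c - 2."""
    m = n_e + n_c - 2
    return g * (1.0 - 3.0 / (4.0 * m - 1.0))

def variance_d(d, n_e, n_c):
    """Large-sample variance of d (Hedges & Olkin, 1985)."""
    n = n_e + n_c
    return n / (n_e * n_c) + d ** 2 / (2.0 * n)

def weight(d, n_e, n_c):
    """WEIGHT: the inverse of the variance of d."""
    return 1.0 / variance_d(d, n_e, n_c)
```

As the text notes, the correction factor approaches 1 as N grows, so `unbiased_d` barely changes large-sample estimates.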

Correlational Meta-Analysis

The correlational meta-analysis modules have numerous precoded variables, including adjusted correlation, weights, and attenuation formulas.

Adjusted Correlation and Weights

The adjusted correlation is the unadjusted correlation divided by the product of the correction factors. This is what the correlation would be were it not for artifacts in the data.

Two suggested weights are provided based on discussions by Hunter and Schmidt (1990, pages 145-150). WEIGHTB is the optimal weight presented by Hedges and Olkin (1985):

In the case of multiple independent variables, *W _{b}* is multiplied by

Attenuation Formulas

Attenuation formulas are used in correlational meta-analysis to adjust for characteristics of the data (**artifacts**, to use the terminology of Hunter and Schmidt). They provide a theoretical estimate of the attenuation in the true correlation due to data characteristics.

In the formulas below, **r_{o}** is the observed correlation and **r_{c}** is the corrected correlation.

To correct for unreliability in either the dependent or independent variable:

$r_c = \dfrac{r_o}{\sqrt{r_{xx}}}$

where $r_{xx}$ is the reliability of the variable being corrected.

To correct for variable dichotomization (i.e., to assess the correlation for a continuous variable given a point biserial correlation):

To correct for the use of a proxy dependent or independent variable:

To unpartial a partial correlation:

To correct for restriction in range for the independent variable:

$r_c = \dfrac{r_o / u}{\sqrt{1 + r_o^2\,(1/u^2 - 1)}}$

where $u$ is the ratio of the restricted to the unrestricted standard deviation.
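Two of the better-known corrections can be sketched as follows. These are illustrative helpers written from the standard Hunter and Schmidt formulas, not Meta-Stat code; `u` is the ratio of the restricted to the unrestricted standard deviation:

```python
import math

def correct_unreliability(r_o, reliability):
    """Correct an observed r for unreliability in one variable:
    r_c = r_o / sqrt(r_xx)."""
    return r_o / math.sqrt(reliability)

def correct_range_restriction(r_o, u):
    """Correct an observed r for direct range restriction:
    r_c = (r_o / u) / sqrt(1 + r_o**2 * (1/u**2 - 1))."""
    return (r_o / u) / math.sqrt(1.0 + r_o ** 2 * (1.0 / u ** 2 - 1.0))
```

When `u` is 1 (no restriction) the range-restriction correction leaves the correlation unchanged, which is a quick sanity check.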

Probability

Probabilities for various statistics are based on Kendall's (1955) normalizing transformation of the F distribution:

where A is the df for the numerator of the F ratio, and B is the df for the denominator.

If B < 4, then **Z** is transformed using Kelley's (1947) correction.

The procedure is used to provide the exact probability of other statistics:

**z**: since $z^2$ is distributed as *F* with 1 and $\infty$ degrees of freedom.

**χ²**: since $\chi^2/df$ is distributed as *F* with $df$ and $\infty$ degrees of freedom.

**t**: since $t^2$ is distributed as *F* with 1 and $df$ degrees of freedom.
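Meta-Stat's exact coefficients are not reproduced here, but the general approach can be illustrated with the widely used cube-root normalizing approximation to the F distribution (A numerator and B denominator degrees of freedom), followed by the normal upper-tail probability. The function names are ours:

```python
import math

def f_to_z(f, df_num, df_den):
    """Cube-root normalizing approximation: map an F ratio to an
    approximate standard normal deviate."""
    a, b = df_num, df_den
    num = (1.0 - 2.0 / (9.0 * b)) * f ** (1.0 / 3.0) - (1.0 - 2.0 / (9.0 * a))
    den = math.sqrt((2.0 / (9.0 * b)) * f ** (2.0 / 3.0) + 2.0 / (9.0 * a))
    return num / den

def upper_tail_p(z):
    """One-sided p-value for a standard normal deviate."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))
```

For example, F = 1 with equal numerator and denominator df maps to z = 0 and hence p = 0.5, and larger F ratios map to larger z values.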

Statistical Analyses

The following identifies the equations used in the analysis module of *Meta-Stat*.

Descriptive Statistics

This procedure provides the weighted and unweighted mean, median, variance, standard deviation, minimum value, maximum value, and range for a specified variable and an optional weighting variable.

The computational formulas used to derive the weighted mean and variance are:

$\bar{x} = \dfrac{\sum w_i x_i}{\sum w_i} \qquad s^2 = \dfrac{\sum w_i (x_i - \bar{x})^2}{\sum w_i}$

For the unweighted statistics, each $w_i$ is 1.

The standard deviation is simply the positive square root of the variance. This is the standard deviation of the presented data. To treat these data as a sample and obtain an unbiased estimate of the population variance, multiply the sample variance by **n/(n-1)**.
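The weighted mean and variance can be sketched directly from their definitions (illustrative code, not Meta-Stat's routines):

```python
def weighted_mean(x, w=None):
    """Weighted mean: sum(w*x) / sum(w); unweighted when w is None."""
    if w is None:
        w = [1.0] * len(x)
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

def weighted_variance(x, w=None):
    """Weighted variance of the presented data (divides by the sum of
    weights). Multiply by n/(n-1) for an unbiased sample estimate."""
    if w is None:
        w = [1.0] * len(x)
    m = weighted_mean(x, w)
    return sum(wi * (xi - m) ** 2 for wi, xi in zip(w, x)) / sum(w)
```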

If the dependent variable is the unbiased effect size then a variety of additional statistics are available. Hedges's Q_{T} can be
used to test the homogeneity of the adjusted effect sizes.

The computational formula used by **Meta-Stat** is

$Q_T = \sum w_i d_i^2 - \dfrac{\left(\sum w_i d_i\right)^2}{\sum w_i}$

which is tested as a chi-square statistic with k − 1 degrees of freedom, where k is the number of studies.

Mean ES, or mean effect size, is the mathematical average described above; it is repeated on this screen for reference. To compute Fisher's Z, each effect size is first converted to an *r* using the reverse of the formula above:

$r = \dfrac{d}{\sqrt{d^2 + 4}}$

Fisher's Z is then:

$z = \tfrac{1}{2}\ln\!\left(\dfrac{1 + r}{1 - r}\right)$

The sampling error variance is discussed by Hunter and Schmidt (1990, pages 281-338).
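The total homogeneity statistic and Fisher's Z transformation can be sketched as follows (function names are ours; `d` and `w` are lists of effect sizes and weights):

```python
import math

def q_total(d, w):
    """Hedges's total homogeneity statistic:
    Q_T = sum(w*d^2) - (sum(w*d))^2 / sum(w),
    tested as chi-square with k - 1 degrees of freedom."""
    swd = sum(wi * di for wi, di in zip(w, d))
    swd2 = sum(wi * di ** 2 for wi, di in zip(w, d))
    return swd2 - swd ** 2 / sum(w)

def fisher_z(r):
    """Fisher's Z transformation: z = 0.5 * ln((1 + r) / (1 - r))."""
    return 0.5 * math.log((1.0 + r) / (1.0 - r))
```

When every study shares the same effect size, Q_T is exactly zero, as the homogeneity hypothesis implies.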

Regression

**Meta-Stat** uses the iterative stepwise multiple regression approach developed by Greenberger and Ward (1956) and described in Veldman (1967, pages 294-307). The iteration process begins by selecting the variable with the highest correlation available from the set of predictor variables. In subsequent iterations, the predictor that most improves the prediction is added to the equation.

The key advantage of this iterative approach is that it allows the computation of key regression statistics (R² and beta weights) without having to invert the correlation matrix. Thus, the non-multicollinearity restriction is not needed for computing the multiple correlation, and accuracy is improved. The standard errors for the beta weights, and hence the corresponding t-statistics, are nevertheless based on the inverted correlation matrix. The Gauss-Jordan procedure is used to do the inverting.

Some relevant equations are:

Adjusted R-Square:

$R^2_{adj} = 1 - (1 - R^2)\dfrac{N - 1}{N - p - 1}$

where N is the number of observations and p is the number of included predictor variables.

Residual Sum of Squares:

$SS_{res} = \sum (y_i - \hat{y}_i)^2$

Regression Sum of Squares:

$SS_{reg} = \sum (\hat{y}_i - \bar{y})^2$

Standard error of the unstandardized regression weight:

$s_{b_k} = \sqrt{MSE \cdot c_{kk}}$

where MSE is the residual mean square and $c_{kk}$ is the kth diagonal element of the inverse of the predictor cross-products matrix.

t-Statistic for testing $H_0\!: \beta_k = 0$:

$t = \dfrac{b_k}{s_{b_k}}$

with N − p − 1 degrees of freedom.

If the criterion variable is UNBIASED and the weighting variable is WEIGHT, then we follow the recommendations outlined on page 174 of Hedges and Olkin (1985). The weighted sum of squares about the regression is the chi-square statistic Q_{E}, and the weighted sum of squares due to the regression is Q_{R}. The corresponding probabilities are reported. The standard errors of the regression weights are corrected by dividing by the square root of the mean square error.
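The regression statistics above can be illustrated for the simplest one-predictor case. This sketch uses ordinary textbook least squares, not the Greenberger and Ward iterative procedure, and the names are ours:

```python
import math

def simple_regression(x, y):
    """OLS fit of y = a + b*x, returning the statistics discussed
    above for the one-predictor case (p = 1)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    yhat = [a + b * xi for xi in x]
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_reg = sum((yh - my) ** 2 for yh in yhat)
    r2 = ss_reg / (ss_res + ss_reg)
    p = 1  # one predictor
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
    mse = ss_res / (n - p - 1)
    se_b = math.sqrt(mse / sxx)
    t = b / se_b if se_b > 0 else float("inf")
    return {"a": a, "b": b, "r2": r2, "adj_r2": adj_r2,
            "ss_res": ss_res, "ss_reg": ss_reg, "se_b": se_b, "t": t}
```

A perfectly linear data set yields a residual sum of squares of zero and R² of 1, which is a convenient check on the formulas.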

Group Means

This module provides means, standard deviations, and confidence intervals by group for the selected criterion variable. Means and standard deviations are computed using the routines from the descriptive statistics discussed above. The 95% confidence intervals are computed using

$\bar{x} \pm 1.96 \dfrac{s}{\sqrt{n}}$

If the criterion variable is the UNBIASED effect size, ** Meta-Stat** also computes Hedges's within, between and total
homogeneity statistics (Hedges and Olkin, 1985, pp 153-165). The total homogeneity statistic tests whether the studies,
regardless of the grouping variable, share the same effect size and is discussed above under descriptive statistics.

Hedges's between homogeneity test is analogous to the analysis-of-variance F-test examining whether the group means are the same. The test statistic is:

$Q_B = \sum_{j} w_{+j}\,(\bar{d}_{+j} - \bar{d}_{++})^2$

where $\bar{d}_{+j}$ is the weighted mean effect size in group j, $w_{+j}$ is the sum of the weights in group j, and $\bar{d}_{++}$ is the weighted grand mean.

Q_{B} is tested with p − 1 degrees of freedom, where p is the number of groups.

Hedges's within homogeneity test examines the hypothesis that the effect sizes are homogeneous within each group. The test statistic is

$Q_W = \sum_{j} \sum_{i} w_{ij}\,(d_{ij} - \bar{d}_{+j})^2$

which is tested with k − p degrees of freedom, where k is the total number of studies.

Hedges and Olkin (1985, p. 156) note that if the sample sizes are at least 10 per group and the effect sizes are not too large, the actual significance levels of these test statistics will be sufficiently close to the nominal values obtained from the large-sample distribution.
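The between- and within-group statistics can be sketched as follows, assuming each group is supplied as a pair of effect-size and weight lists (illustrative code, not Meta-Stat's):

```python
def group_homogeneity(groups):
    """Hedges's between- (Q_B) and within-group (Q_W) homogeneity
    statistics. `groups` is a list of (d_list, w_list) pairs.
    Q_B + Q_W equals the total homogeneity statistic Q_T."""
    def wmean(d, w):
        return sum(wi * di for wi, di in zip(w, d)) / sum(w)
    all_d = [di for d, w in groups for di in d]
    all_w = [wi for d, w in groups for wi in w]
    grand = wmean(all_d, all_w)
    # within: weighted squared deviations from each group's weighted mean
    q_w = sum(wi * (di - wmean(d, w)) ** 2
              for d, w in groups for di, wi in zip(d, w))
    # between: group weight sums times squared deviations from grand mean
    q_b = sum(sum(w) * (wmean(d, w) - grand) ** 2 for d, w in groups)
    return q_b, q_w
```

For two internally homogeneous groups, Q_W is zero and all of the heterogeneity appears in Q_B.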

Graphical Analysis

Descriptions of the various graphs can be found in Chapter 8. The following outlines some of the equations used in the program.

Effect Size Plots

The approximate 95% confidence interval for the unbiased effect size is given by

$d_i \pm 1.96\sqrt{v_i}$

where $v_i$ is the estimated variance of the effect size.

Mean Plots

The approximate 95% confidence interval about the mean is given by

$\bar{x} \pm 1.96 \dfrac{s}{\sqrt{n}}$
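Both plot intervals follow the same normal-approximation pattern and can be sketched together (function names are ours):

```python
import math

def effect_size_ci(d, variance):
    """Approximate 95% CI for an unbiased effect size: d +/- 1.96*sqrt(v)."""
    half = 1.96 * math.sqrt(variance)
    return d - half, d + half

def mean_ci(mean, sd, n):
    """Approximate 95% CI about a mean: mean +/- 1.96 * sd / sqrt(n)."""
    half = 1.96 * sd / math.sqrt(n)
    return mean - half, mean + half
```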