In:
Educational and Psychological Measurement, SAGE Publications, Vol. 62, No. 2 (2002-04), pp. 241-253
Abstract:
Some authors debate whether effect sizes should be reported (a) for all null hypothesis tests, even non–statistically significant ones, or (b) only after a finding is first determined to be statistically significant. The decision to report and interpret small effects may partially depend on the amount of bias in the effect size measure used. Based on the recognition that variance-accounted-for effect statistics are positively biased and that standardized difference effect sizes such as Cohen’s d can be converted into r² metrics and vice versa, the authors considered that d may also be biased. The authors therefore explored the amount of bias in Cohen’s d across a series of simulated study conditions. Results from their simulations indicated essentially no bias (close to zero) in Cohen’s d across all study conditions.
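The abstract notes that Cohen’s d can be converted into an r² metric and back. As a minimal sketch, the standard equal-group-size conversion formulas (not given in this record, so the function names and the assumption of equal n are illustrative) can be written as:

```python
import math

def d_to_r(d: float) -> float:
    """Convert Cohen's d to a point-biserial r, assuming equal group sizes."""
    return d / math.sqrt(d ** 2 + 4)

def r_to_d(r: float) -> float:
    """Convert r back to Cohen's d, again assuming equal group sizes."""
    return 2 * r / math.sqrt(1 - r ** 2)

d = 0.8                      # a "large" effect by conventional benchmarks
r = d_to_r(d)
print(round(r ** 2, 4))      # the variance-accounted-for (r-squared) metric
print(round(r_to_d(r), 4))   # round-trip recovers the original d
```

With unequal group sizes a correction factor enters the first formula; the version above is the simplest equal-n case.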
Type of Medium:
Online Resource
ISSN:
0013-1644, 1552-3888
DOI:
10.1177/0013164402062002003
Language:
English
Publisher:
SAGE Publications
Publication Date:
2002