Reliability is a key psychometric property of a test in psychological measurement. Traditionally, Cronbach's alpha coefficient (α) is the most commonly employed internal consistency estimate of the reliability of a multi-item test (Cronbach, 1951). This method, however, assumes tau-equivalent items, and α serves only as a lower-bound (i.e., biased) estimate of reliability when the items are congeneric. In this talk, we therefore propose a bootstrap bias-corrected alpha coefficient (α*) as an alternative, which we believe is more accurate in (1) estimating the true test reliability and (2) testing the equality of test reliabilities in two independent samples. Through a Monte Carlo experiment, we compared the empirical performance of α and α* under different model conditions. First, simulation results indicated that when the test items are congeneric, α* is more accurate than α in estimating the true reliability. Second, when testing the equality of scale reliabilities in two independent samples, the test based on α over-rejects the correct null hypothesis of equal reliability when the items are tau-equivalent in one sample and congeneric in the other. Third, when the two groups have unequal reliabilities, the power of the test based on α depends critically on the form of the items in the two groups. Finally, the test based on α* performs satisfactorily across all simulation conditions, in terms of both Type I error rate and statistical power.
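The abstract does not spell out how α* is computed. As a minimal sketch, the following Python code computes the sample Cronbach's alpha and then applies the standard bootstrap bias correction, α* = 2·α̂ − mean(α̂ᵇ), where α̂ᵇ are alpha estimates from bootstrap resamples of the subjects. The function names, the resample count, and the congeneric-data simulation in the usage example are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def cronbach_alpha(X):
    """Sample Cronbach's alpha for an (n_subjects, k_items) data matrix."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)          # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)      # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def bootstrap_bias_corrected_alpha(X, n_boot=1000, seed=0):
    """Bootstrap bias-corrected alpha: 2 * alpha_hat - mean of bootstrap alphas.

    This is the standard bootstrap bias correction; the talk's exact
    estimator may differ in detail.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    alpha_hat = cronbach_alpha(X)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)       # resample subjects with replacement
        boot[b] = cronbach_alpha(X[idx])
    return 2 * alpha_hat - boot.mean()

# Usage: simulate congeneric items (unequal loadings on one common factor),
# the condition under which the abstract says alpha is downward biased.
rng = np.random.default_rng(42)
eta = rng.normal(size=(200, 1))                          # common true score
loadings = np.array([0.4, 0.5, 0.6, 0.7, 0.8])           # unequal -> congeneric
X = eta * loadings + rng.normal(scale=0.5, size=(200, 5))
alpha = cronbach_alpha(X)
alpha_star = bootstrap_bias_corrected_alpha(X, n_boot=200)
```

Resampling whole subjects (rows) preserves the inter-item covariance structure within each bootstrap sample, which is what the correction relies on.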
Publication status: Published - Jul 2014