Item non-response (INR), such as “Don’t Know”, “Refusal”, “Hard to Say”, and “No Opinion”, occurs when a respondent does not give a substantive answer to a particular question. Treating INR as missing at random is common practice, but it can yield biased parameter estimates when the non-response is in fact non-ignorable. In this study we classified responding processes into a hierarchy and proposed a new Item Response Theory (IRT) model for INR, in which additional latent traits are introduced to account for the hierarchical structure of responding processes. Simulation studies were conducted to evaluate parameter recovery when INR was ignorable or non-ignorable. The results showed that ignoring non-ignorable INR by fitting standard IRT models yielded severely biased parameter estimates, especially when the latent traits were highly correlated, whereas the new model yielded unbiased estimates regardless of whether the INR was ignorable. The new model was then fit to real data from a citizenship survey on democratic politics. The results demonstrated the superiority and feasibility of the new model for handling INR in Likert-type scales.
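To make the mechanism described above concrete, the sketch below simulates Likert-type data under a non-ignorable INR process of the general kind the abstract describes: one latent trait drives the substantive response and a second, correlated latent trait drives the propensity to respond. This is a minimal illustrative data-generating process, not the paper's actual model; all parameter names and values (`rho`, `a_miss`, `b_miss`, item parameters) are assumptions chosen for illustration.

```python
# Hypothetical sketch of a non-ignorable INR data-generating process.
# theta: substantive latent trait; xi: response-propensity trait.
# Their correlation (rho) is what makes the missingness non-ignorable.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items, n_cats = 500, 10, 5

# Correlated latent traits
rho = 0.7
traits = rng.multivariate_normal([0.0, 0.0],
                                 [[1.0, rho], [rho, 1.0]],
                                 size=n_persons)
theta, xi = traits[:, 0], traits[:, 1]

# Graded-response-style item parameters (illustrative values)
a = rng.uniform(1.0, 2.0, n_items)                       # discriminations
b = np.sort(rng.normal(0, 1, (n_items, n_cats - 1)), 1)  # ordered thresholds

# P(response >= k) for each person/item/threshold, then inverse-CDF draw
logits = a[None, :, None] * (theta[:, None, None] - b[None, :, :])
p_ge = 1.0 / (1.0 + np.exp(-logits))          # persons x items x (n_cats-1)
u = rng.uniform(size=(n_persons, n_items, 1))
responses = (u < p_ge).sum(axis=2)            # ordinal categories 0..n_cats-1

# Response-propensity model: low xi produces INR (coded -1)
a_miss, b_miss = 1.5, -0.5
p_respond = 1.0 / (1.0 + np.exp(-a_miss * (xi[:, None] - b_miss)))
observed = np.where(rng.uniform(size=(n_persons, n_items)) < p_respond,
                    responses, -1)

print("INR rate:", (observed == -1).mean())
```

Because `theta` and `xi` are correlated, respondents who skip items differ systematically in the trait being measured, which is why a standard IRT model that drops the `-1` entries (treating them as missing at random) would recover biased parameters in this setting.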
Publication status: Published - Jul 2015