Nonresponse error, measurement error, and mode of data collection: Tradeoffs in a multi-mode survey of sensitive and non-sensitive items

Although some researchers have suggested that a tradeoff exists between nonresponse and measurement error, to date the evidence for this connection has been relatively sparse. We examine data from an alumni survey to explore potential links between nonresponse and measurement error. Records data were available for some of the survey items, allowing us to check the accuracy of the answers. The survey included relatively sensitive questions about the respondents' academic performance and compared three methods of data collection – computer-assisted telephone interviewing (CATI), interactive voice response (IVR), and an Internet survey. We test the hypothesis that the two modes of computerized self-administration reduce measurement error but increase nonresponse error, in particular the nonresponse error associated with dropping out of the survey during the switch from the initial telephone contact to the IVR or Internet mode. We find evidence for relatively large errors due to the mode switch; in some cases, these mode switch biases offset the advantages of self-administration for reducing measurement error. We find less evidence for a possible second link between nonresponse and measurement error, based on a relationship between the level of effort needed to obtain the data and the accuracy of the data that are ultimately obtained. We also compare nonresponse and measurement errors across different types of sensitive items. In general, measurement error tended to be the largest source of error for estimates of socially undesirable characteristics, whereas nonresponse error tended to be the largest source of error for estimates involving socially desirable or neutral characteristics. © The Author 2011.