December 14, 2016
Couper and Zhang Evaluate Techniques for Improving Web Survey Responses
The first experimental study was conducted in two opt-in web panels in the U.S. Participants from the two panels were randomized into one of the three design conditions mentioned above. Couper and Zhang found significantly higher break-off rates, significantly higher missing data rates, and longer response times on the prescription drug use question for respondents assigned to the two alternative options. However, they also found evidence of a learning curve: response times in the alternative options decreased significantly with each additional drug mentioned. Furthermore, more of the standard text box responses were either machine-matchable or codable by human coders during post-processing, provided that manual coding is feasible.
Given the unexpected negative results from the first study, the authors ran a second experimental study using two additional web panels and improved the designs of the alternative options. Specifically, the drug names used in the alternative options were no longer in all caps, and duplicate drug names were trimmed from the master database, reducing the number of response options. The authors found evidence of an improved user experience in the alternative conditions, with fewer break-offs, less item-missing data, and generally shorter response times. However, the alternative conditions still produced longer response times for the first drug entered. Machine matching and/or coding rates were higher in the drop-down condition, but not by enough to offset the longer response times. Importantly, respondents in the alternative conditions were not given an option to enter the name of a drug that they could not find on the list.
Ultimately, the authors concluded that their results point to an important trade-off. After the selected design changes were implemented, the alternative tools appeared to yield more matchable or codable responses (saving post-processing time), but they continued to take respondents longer than the standard text box approach (which is also easier to program). These results have important implications for web surveys, given that respondents consistently showed evidence of learning with more frequent use of the alternative tools. If immediate coding is needed, one of the alternative options would probably be best. However, if the web survey is collecting data to be coded and analyzed later, the standard text box option may be best for minimizing respondent burden.