Increasingly, people are communicating with one another using new media such as texting on mobile devices. At the same time, survey response rates continue to drop. These phenomena are related to the extent that respondents use only mobile devices (21% of US households no longer have a landline phone) and rely on modes other than voice, in particular text (this is certainly the norm among certain subgroups in the US and among the entire population in countries such as China). Yet we know little about the impact of multimodal mobile devices on survey participation, completion, data quality, and respondent satisfaction. The proposed research would explore these issues in two experiments that will collect survey data on iPhones in four modes defined by whether the interviewing agent is a live human or a computer, and whether the medium of communication is voice or text. The resulting modes are telephone interviews, instant message (IM) interviews, speech interactive voice response (IVR), and automated IM. This way of defining modes enables us to isolate the effects of the agent and the medium. The first experiment explores the effect of the four modes on participation, completion, data quality, and satisfaction; the second explores the impact on the same four measures of allowing participants to choose the response mode.
The two experiments are designed to allow us to answer questions such as: How does the effort required to interact with a particular medium affect respondents' behavior and experience? Some people find it easier to speak and listen than to type and read; for others this is reversed. Greater effort could lead to cognitive shortcuts (e.g., failure to attend to all the details of the instructions that accompany a question) that degrade data quality. How do the fleeting character of speech and the permanent character of text lead to different outcomes on the fundamental measures? Might respondents using text be more honest when answering sensitive questions because the visible record of their responses increases their sense of accountability for what they say? How does the physical environment interact with mode? For example, will respondents' performance suffer if they engage in an IM interview while walking or pushing a shopping cart, or if they engage in a voice interview in a loud restaurant? Will interviewers increase socially desirable responding to the same extent in voice and text interviews? Text may communicate fewer social cues than speech, and so may reduce the impact of interviewers on socially desirable responding relative to voice. Will iPhone users participate in greater numbers if allowed to select the response mode, and will they be more conscientious? Will respondents choose the mode that researchers would want them to? It may be that they do not realize they are more likely to shade the truth when reporting embarrassing facts to an interviewer than to a computer, or it may be that they give more weight to other considerations, such as the amount of effort required to respond.
The proposed research will help fill at least two serious gaps in the methodological literature, namely the literatures on mixed-mode surveys and mobile data collection. It will provide initial data about the intersection of those areas, multimodal mobile surveys, which certainly seem to be on the horizon. The project is proposed as collaborative research between the University of Michigan (Institute for Social Research) and the New School for Social Research, with collaborators from AT&T Research and Yellow Pages Research. The team combines cognitive psychologists, psycholinguists, survey methodologists, and computer scientists. The work will support one full-time graduate student at each university.