How Should the Researcher Handle “Don’t Know” Responses?
The standard argument rests on two assumptions. First, there are only two types of respondents: (a) respondents who hold an opinion on a given issue and are aware of holding it, and (b) respondents who hold no opinion on the issue and are equally aware of that, i.e., respondents with a fair amount of self-knowledge. Second, respondents are presumed to act rationally. Under these assumptions, respondents of the first type will report their opinions whether or not a “don’t know / no opinion” (DK/NO) option is included, while respondents of the second type will choose the DK/NO option whenever it is offered. As discussed earlier, respondents of the second type are likely to fabricate an opinion in order to appear opinionated when no DK/NO option is available; to avoid such fabrication, several survey researchers advocate adding one. There is, however, a fundamental flaw in these assumptions: they imply that a DK/NO option attracts only those respondents who truly do not know the answer or hold no opinion on the surveyed topic. Yet one can never be certain that an opinionated respondent will not also opt for DK/NO. Consequently, the initial assumptions do not hold, and it is argued that a researcher should not automatically presume that a respondent who chooses DK/NO is unable to report a meaningful response.
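The two-type model above can be made concrete with a small simulation. This is a minimal sketch under illustrative assumptions not stated in the text (a 1–5 rating scale, 30% unopinionated respondents, uniform fabricated and genuine ratings); it only shows the mechanics of the argument, not a calibrated model of real respondents.

```python
import random

def simulate(n=1000, p_unopinionated=0.3, dk_offered=True, seed=42):
    """Simulate responses under the two-type assumptions described above.

    Type (a) respondents hold an opinion and always report it.
    Type (b) respondents hold no opinion: they pick DK/NO when it is
    offered and fabricate a rating when it is not.
    """
    rng = random.Random(seed)
    responses = []
    for _ in range(n):
        if rng.random() < p_unopinionated:
            if dk_offered:
                responses.append("DK")               # honestly selects DK/NO
            else:
                responses.append(rng.randint(1, 5))  # fabricated opinion
        else:
            responses.append(rng.randint(1, 5))      # genuine opinion
    return responses

with_dk = simulate(dk_offered=True)
without_dk = simulate(dk_offered=False)
print("DK share when offered:", with_dk.count("DK") / len(with_dk))
print("DK share when omitted:", without_dk.count("DK") / len(without_dk))
```

Comparing the two runs illustrates the text's point: omitting the DK/NO option makes fabricated ratings indistinguishable from genuine ones, while the flaw the text identifies (opinionated respondents also choosing DK/NO) is precisely what this idealized model leaves out.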
Another open issue is the statistical treatment of such responses (e.g., Rubin et al., 1995; Schafer and Graham, 2002). Although a DK response conveys specific information about the respondent’s state of mind, it is often treated as a missing value. Imputation procedures replace DK responses with estimates of the corresponding rating, usually by drawing information from the observed data, implicitly assuming that the random variable R generating the observed ratings follows the same probability distribution for DK respondents. In fact, most existing imputation procedures are not appropriate for handling DK responses (e.g., Feick, 2005), and some alternatives have been proposed in the literature (Feick, 2005; Kroh, 2006). Following another approach, DK could be treated as one of the possible response categories. However, adding the DK category to the ordered ratings imposes a nominal scaling level on the random variable generating responses, which prevents the use of the statistical methods usually employed to model ratings, as these are mostly conceived for ordinal data.
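The naive imputation the paragraph criticises can be sketched in a few lines: each DK response is replaced by a draw from the empirical distribution of the observed ratings, which is exactly where the implicit same-distribution assumption enters. The data and function name below are invented for illustration; this is not any of the cited procedures.

```python
import random

def impute_dk(responses, seed=0):
    """Replace each "DK" with a draw from the observed ratings.

    This hot-deck-style scheme implicitly assumes DK respondents'
    (unreported) ratings follow the same distribution as the observed
    ones, which is the assumption questioned in the text.
    """
    rng = random.Random(seed)
    observed = [r for r in responses if r != "DK"]
    return [rng.choice(observed) if r == "DK" else r for r in responses]

ratings = [1, 3, "DK", 5, 2, "DK", 4, 3]
completed = impute_dk(ratings)
print(completed)  # every "DK" replaced by some observed rating
```

If DK respondents systematically differ from the rest, as the text argues they may, every imputed value inherits the bias of the observed distribution, which is why the cited authors propose alternatives.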
On the whole, participants do not always tell the truth and may say what they think the interviewer wishes to hear. A good qualitative researcher should examine not only what people say but also how they structure their responses and how they talk about the subject under discussion, for example the person's emotions, tone, and nonverbal communication. If the research was triangulated with other qualitative or quantitative data, this should be discussed.
Rubin, D.B., Stern, H.S., Vehovar, V., 1995. Handling “don’t know” survey responses: The case of the Slovenian plebiscite. J Am Stat Assoc 90, 822–828.
Feick, L.F., 2005. Latent class analysis of survey questions that include don’t know responses. Public Opin Quart 53, 525–547.