How Should the Researcher Handle “Don’t Know” Responses?
As a result, respondents of the first type are expected to report their opinions whether or not a “DK/NO” option is included, while those of the second type are presumed to choose the “DK/NO” option when it is offered. As discussed earlier, respondents of the second type are likely to fabricate an opinion in order to appear opinionated when no “DK/NO” option is available; to avoid this fabrication, several survey researchers advocate adding one. There is, however, a fundamental flaw in these assumptions: they presuppose that a “DK/NO” option attracts only those respondents who truly do not know the answer to the question or hold no opinion on the surveyed topic. Yet one can never be certain that an opinionated respondent will not also opt for the “DK/NO” option, so the initial assumptions do not hold. It is therefore argued that one should not automatically presume that a respondent who chooses a “DK/NO” option cannot report a meaningful response.
Although a dk response conveys specific information about the respondent’s state of mind, it is often treated as a missing value. Imputation procedures replace dk responses with estimates of the corresponding rating, usually by drawing information from the observed data, thereby implicitly assuming that dk responses are generated by the same probability distribution of the random variable R that generates the observed ratings. In fact, most existing imputation procedures are not appropriate for handling dk responses (e.g., Feick, 2005), and some alternatives have been proposed in the literature (Feick, 2005; Kroh, 2006). Following another approach, dk could be treated as one of the possible response categories. However, adding the dk category to the ordered ratings imposes a nominal scaling level on the random variable generating the responses, which prevents the use of the statistical methods usually employed to model ratings, as they are mostly conceived for ordinal data.
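To make the two treatments concrete, the following is a minimal Python sketch with hypothetical ratings (the data and variable names are illustrative, not drawn from the works cited above). The first part imputes dk responses by sampling from the empirical distribution of the observed ratings, which mirrors the implicit assumption criticized in the text; the second keeps dk as an extra category, which turns the scale nominal.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical 1-5 ratings containing "dk" responses.
responses = pd.Series([3, 4, "dk", 2, 5, "dk", 4, 3, 1, "dk"])

observed = responses[responses != "dk"].astype(int)
dk_mask = responses == "dk"

# Naive imputation: replace each dk with a draw from the empirical
# distribution of observed ratings. This implicitly assumes that dk
# respondents follow the same distribution R as opinion-givers.
imputed = responses.copy()
imputed[dk_mask] = rng.choice(observed.to_numpy(), size=dk_mask.sum())
imputed = imputed.astype(int)

# Alternative: keep dk as its own response category. The resulting
# variable is nominal, so ordinal rating models no longer apply directly.
as_nominal = pd.Categorical(responses.astype(str),
                            categories=["1", "2", "3", "4", "5", "dk"])
```

The sketch only illustrates the structural difference between the two approaches; in practice, the literature discussed above proposes model-based alternatives precisely because this naive imputation assumption is questionable.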
Rubin, D.B., Stern, H.S., Vehovar, V., 1995. Handling “don’t know” survey responses: The case of the Slovenian plebiscite. J Am Stat Assoc 90, 822–828.
Feick, L.F., 2005. Latent class analysis of survey questions that include don’t know responses. Public Opin Quart 53, 525–547.