Dr. Dennis Gleiber Explains How What You Ask Respondents When Can Affect Their Answers
“Data Use: Arbitrary coherence, or, a failure to replicate” (in Quirk’s December 2010 issue) by Stephen J. Hellebusch discusses a special case of the impact of context and conditional effects on survey responses. We have known for a very long time that context matters in all forms of communication. I remember a paper I assigned to my voting and public opinion classes that demonstrated the framing effects of setting on the interpretation of President Reagan’s words in news stories, and showed that the interpretation varied with the other stories bracketing the one about the president.
Surveys are generally constructed along a temporal arrow, so bracketing is only an issue within a single page. Everything that comes before conditions our behavior, including our answers to survey questions.
As Campbell et al. (1960) emphasized with their heuristic “the funnel of causality,” the closer things are in the temporal stream, the more likely and the greater their potential impact. This is not a new idea. We prefer to ask people about their current or recent behaviors and attitudes. Intercepts closer to the point of interest are preferable to long-term recollections, not only because memory is highly imperfect but because everything that intervenes between stimulus and response potentially contaminates the linkage between the two. In causal and path modeling, we multiply the decimal measures of the relational linkages along a path to describe the strength of an indirect path. This functional form implies a geometric decline in relative impact as the number of indirect links increases, just as a longer elapsed time means less direct and weaker impacts.
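The multiplication rule above can be illustrated with a minimal sketch (not from the article; the function name and coefficient values are hypothetical): with standardized path coefficients below one in absolute value, each added link shrinks the indirect effect geometrically.

```python
def indirect_path_strength(coefficients):
    """Strength of an indirect path: the product of the standardized
    path coefficients (each between -1 and 1) along its links."""
    strength = 1.0
    for c in coefficients:
        strength *= c
    return strength

# Even with fairly strong individual links (0.5), the indirect
# effect fades quickly as links are added:
print(indirect_path_strength([0.5]))            # 0.5
print(indirect_path_strength([0.5, 0.5]))       # 0.25
print(indirect_path_strength([0.5, 0.5, 0.5]))  # 0.125
```

The halving at each step is the geometric decline the text describes: more intervening links (more elapsed time) means a weaker indirect impact.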
Ariely demonstrated the internal validity of arbitrary coherence. Hellebusch (“By the Numbers: Under the influence,” Quirk’s, May 2010) demonstrated that arbitrary coherence varies with interest and prior knowledge. In his second article (“Data Use: Arbitrary coherence, or, a failure to replicate”), Hellebusch tests the robustness of the arbitrary coherence effect, a more abstract framing effect. His test used four “randomly” chosen items. These were not random items. They were selected to be not of “known [or] inexpensive cost” “and…of reasonably high level of interest to people.” Perhaps they are of relatively equal interest, but high interest seems more of a reach. A “[c]rocodile-skin wallet, necklace of pearls from Tahiti, copper cooking bowl and The Complete Works of Lewis Carroll” all seem to be luxury items that derive their value from taste, imbuing the pricing exercise with an additional uncontrolled element of personal preference and desirability. The results indicate that in a three-question format, when the random number (subjects enter the last two digits of their Social Security number) is separated from the maximum price by another pricing question, the arbitrary coherence effect decreases or is eliminated. Hellebusch suggests that there may also be a learning effect, since participants had completed the same exercise within the last year. This seems the more plausible explanation, given the learning that surely accompanied the experience and the potential discussion of the published findings even in a large organization. The alternative explanation is, however, consistent with the issue being addressed here: learning, cuing, and context matter for survey responses.
Because people generally complete surveys without interruption, a survey’s prior content has a high potential to cue, color, or condition later responses. Ariely and Hellebusch demonstrate this empirically. The latter also demonstrates that this is not a simple deterministic relationship; rather, it is a conditional contextual effect that varies in ways that are unknown and therefore neither predictable nor controllable. This means that survey content and question order are critical aspects of all survey design. Randomization of item order and section order is widespread and relatively easy to program into computer-assisted surveying these days, and few researchers would argue for a fixed order that makes the stimulus uniform across respondents rather than using randomization to control for the potential cuing effects of presentation order. Short of making every survey a far-too-complicated implicit experiment by building in alternative forms and question orderings, we must be cognizant of potential problems and minimize them by closely considering the potential impact of what comes before every question. For example, there is ongoing debate about whether general overall evaluations should be placed before or after specifics. I have addressed this question elsewhere. Suffice it to say here that specifics clearly cue general evaluations far more than the general cues specifics.
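The randomization described above is straightforward to sketch. The following is a hypothetical illustration (the function name, section labels, and respondent IDs are invented, not from any survey platform): each respondent sees the sections in a different order, so order effects average out across the sample, while seeding with the respondent ID keeps any one respondent’s ordering reproducible.

```python
import random

def randomized_order(items, respondent_id):
    """Return a per-respondent random ordering of survey items.
    Seeding with the respondent ID makes the shuffle reproducible,
    so the same respondent always sees the same order."""
    rng = random.Random(respondent_id)
    order = list(items)  # copy so the master list is untouched
    rng.shuffle(order)
    return order

sections = ["satisfaction", "usage", "demographics"]
print(randomized_order(sections, respondent_id=101))
print(randomized_order(sections, respondent_id=102))
```

Because the orderings differ across respondents but are fixed within a respondent, any cuing effect of presentation order is spread across the sample rather than built into every interview.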
Ariely, Dan. 2010. Predictably Irrational: The Hidden Forces That Shape Our Decisions. HarperCollins. 384 pages.
Campbell, Converse, Miller, and Stokes. 1960. The American Voter. New York: John Wiley & Sons, Inc.
Hellebusch, Stephen J. “By the Numbers: Under the influence.” Quirk’s, May 2010, page 22. Article ID: 20100503.
Hellebusch, Stephen J. “Data Use: Arbitrary coherence, or, a failure to replicate.” Quirk’s, December 2010, page 22. Article ID: 20101202.
Quirk’s articles are available to registered users.