0:09
In this segment, we'll talk about two uses of interactive prompting
to reduce respondent behaviors that are considered undesirable.
In particular, we'll talk about prompting respondents who are caught speeding,
that is, answering unreasonably fast, and prompting respondents who don't
answer at all, which is known as item nonresponse.
Turning first to speeding: in one study, we set 300 milliseconds per word
as the threshold below which we considered a respondent to be speeding.
In other words, for a ten-word question, a response time below
three seconds, or 3,000 milliseconds, would be considered speeding.
0:50
This value, that is, 300 milliseconds per word,
has been proposed as the time required per word for adults in the U.S. to read.
Of course, there are exceptions in both directions:
there are adults who read faster and adults who read slower.
So to the extent that this is an effective threshold, it suggests that
interventions of this sort don't have to be tailored to every respondent but
can be designed in the aggregate.
1:18
In any case, when respondents answered below this threshold,
they received a prompt such as this one: "You seem to have responded
very quickly. Please be sure you've given the question sufficient thought
to provide an accurate answer.
Do you want to go back and reconsider your answer?"
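As a rough illustration of this logic, here is a minimal sketch in Python of per-word speeding detection. The 300 ms/word threshold and the prompt wording come from the study above; everything else (the function names, the example question, the timing value) is hypothetical.

```python
# Minimal sketch of per-question speeding detection.
# The 300 ms/word threshold is from the study described above;
# function names and example values are hypothetical.

MS_PER_WORD = 300  # proposed per-word reading time for U.S. adults

SPEEDING_PROMPT = (
    "You seem to have responded very quickly. Please be sure you've given "
    "the question sufficient thought to provide an accurate answer. "
    "Do you want to go back and reconsider your answer?"
)

def speeding_threshold_ms(question_text: str) -> int:
    """The threshold scales with question length: 300 ms per word."""
    return MS_PER_WORD * len(question_text.split())

def is_speeding(question_text: str, response_time_ms: int) -> bool:
    """Flag a response that arrived faster than the reading threshold."""
    return response_time_ms < speeding_threshold_ms(question_text)

# A ten-word question yields a 3,000 ms threshold, so a
# 2,100 ms response would trigger the prompt.
question = "How many nights have you spent away from home recently?"
if is_speeding(question, response_time_ms=2100):
    print(SPEEDING_PROMPT)
```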
1:34
We compared the amount of speeding in an experimental condition,
in which these prompts were displayed whenever respondents
answered below that threshold,
to a control condition in which there were no prompts.
1:48
We did this for seven questions about autobiographical quantities,
like "How many nights have you spent away from home in the last year?",
for which the true value is not known, or at least we didn't have access to it.
We also did this for seven simple arithmetic or probability questions for which
the true value was known, or at least we could determine the correct answers.
That was useful because it allowed us to assess response accuracy.
2:15
So what we want to see is whether there was any less speeding when respondents
were prompted upon speeding than when they were not,
that is, when they were allowed to speed, more or less, with impunity.
2:30
As you can see here, in the no-prompt condition
speeding was actually quite prevalent: 61% and 62% in two of the experiments.
This means speeding at least one time.
But when we prompted respondents, they sped less in all cases. So
a respondent who was prompted for speeding once, or potentially more often,
overall sped less than their counterparts in the control condition,
who were never prompted for speeding.
2:59
So that suggests that we are slowing them down with this intervention.
Does that mean they're giving more thought to their answers,
and actually answering more accurately?
Or are they just sort of biding their time to avoid future prompts like this,
while giving the same kind of superficial treatment to the question?
So in the one experiment where we could know the true value,
where respondents were answering simple arithmetic and probability questions,
we could look at whether prompting not only slowed them down but
increased their response accuracy.
There was actually no overall effect of prompting on response accuracy, but
there was an effect for one group when we broke respondents down by
education level.
In particular, respondents who had some college or
an associate's degree were more accurate when we prompted them for speeding.
So they slowed down, and
seemed to use the additional time to give more thought to their answers.
This group accounted for about 40% of the respondents.
The respondents with only a high school degree were unaffected by prompting,
as were the respondents with a bachelor's degree or more.
This could well be because those with a high school degree
were struggling with these questions, even though the questions were considered
to be relatively basic measures of numerical literacy, or numeracy.
And so, for the respondents with a high school degree or
less, even more time didn't seem to help.
So that's one explanation for why prompting didn't help them.
And the group with a bachelor's degree or
more may have found these questions so easy that they were able to answer
quite fast, fast enough that, in fact, we were classifying them as speeding
when actually they were giving adequate thought to the questions.
So it's for this group in the middle that, at least for these items,
the slowdown due to prompting seemed to improve response accuracy.
5:01
Turning to the other use of interactive prompting that we'll cover here:
when respondents speed, they're in a sense not really answering the question.
They're just providing an answer more or less at random.
In some cases, respondents don't provide any answer at all, and
this is known as item missing data, or just item nonresponse.
It could be that you can reduce item nonresponse by
absolutely requiring a respondent to provide an answer, by not allowing them to
proceed to the next question until they enter something. But
that could be annoying if they really don't want to give an answer, and
that might lead them to break off, in which case this whole strategy backfires.
5:41
An alternative would be to provide a "don't know,"
"refuse to answer," or "decline to answer" option,
which in a way invites respondents to use those options. But
by combining that with an interactive prompt, which says essentially "we value
your answer; won't you please consider providing an answer?",
this could in the end lead to less item nonresponse and
more completed questionnaires.
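To make the contrast concrete, here is a minimal sketch in Python of this "soft prompt" idea: skipping is allowed, but a blank answer triggers a single polite nudge rather than a hard requirement. The function names, wording, and flow are illustrative assumptions, not the implementation from any study.

```python
# Sketch of a "soft prompt" for item nonresponse. Unlike a hard
# requirement, the respondent may still decline after one polite nudge.
# Names, wording, and flow here are illustrative assumptions.

NONRESPONSE_PROMPT = (
    "We value your answer. Won't you please consider providing one?"
)

def collect_answer(ask, prompt_once: bool = True) -> str:
    """Ask once; if the answer is blank, nudge a single time,
    then accept whatever comes back, even another blank."""
    answer = ask()
    if not answer and prompt_once:
        print(NONRESPONSE_PROMPT)
        answer = ask()            # the respondent may still decline
    return answer or "DECLINED"   # record explicit item nonresponse

# Example usage with console input:
# collect_answer(lambda: input("Your answer: ").strip())
```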
6:08
DeRouvray and Couper tried just this by presenting
four variants of a decline-to-answer option to respondents.
In one, they explicitly provided a decline-to-answer option after
the three substantive options, and it appears at the bottom.
In another, they provided an explicit decline-to-answer option but
in a smaller typeface, so it might not be as enticing or inviting
for respondents.
In another, the option was implicit, in that respondents could advance
without entering anything; no decline-to-answer option was explicitly offered.
And the fourth display presented to respondents was a message
which said, "We would very much like to have your answer to this question.
If you would like to choose one of the proposed answers, please select Back.
If you would prefer not to answer this question, please select Next.
Thank you."
So the idea is that this encouraged them to provide an answer, but
it didn't require them to. The idea is that this strikes a balance between
forcing them to give an answer and letting them skip freely,
a sort of carrot and stick.
7:16
And so what happened?
Well, as you can see, this is the rate of item missing data,
or item nonresponse.
And the pop-up, the fourth option that we just looked at, was quite successful
in reducing item nonresponse compared to the three other interfaces.
7:31
We've now gone through a number of different aspects of self-administration,
and we've focused really on web questionnaires.
We saw in the first segment, the first topic, that respondents
disclose more when they directly enter answers to sensitive questions
into the computer, and, in fact, there were some advantages that we discussed
for computerized self-administration compared to paper self-administration.
8:24
We then looked at issues of coverage in web surveys, and it was evident that
coverage is not universal when it comes to Internet access, and
that web surveys could suffer as a result.
Those without Internet access may differ on key attributes,
such as education, income, and age.
And to the extent that the questions the researcher wants to ask involve those
attributes, undercoverage could distort the estimates that are produced.
8:54
With respect to sampling,
we talked about two main approaches to recruiting participants into web
surveys: so-called opt-in or volunteer panels, and probability panels.
And we saw that opt-in panels, which do not represent any particular
population except perhaps the population of volunteer web survey respondents,
may lead to less accurate data than probability panels,
where the participants, the panel members, are recruited with
probability methods such as random digit dialing and address-based sampling.
And then finally, we examined interactive features,
and we saw that they can help promote positive respondent behavior and
reduce less desirable respondent behavior relatively easily and
relatively cheaply.
But we also saw, particularly in the case of progress indicators, that
the use of these features can backfire and actually reduce completion rates.
What we'll turn to in the next module is interviewers and interviewing.
9:59
Interviewers can substantially improve survey estimates
by increasing response rates, by keeping respondents on task,
and by explaining the intentions behind questions.
Different interviewing techniques can emphasize any of these particular
advantages.