0:09
Turning now to nonresponse in online surveys.
We actually don't know a lot about the nonrespondents in web surveys,
and that knowledge is really required to quantify nonresponse error.
So it's virtually impossible to talk about nonresponse error for
opt-in or non-probability panels, because we just don't know anything about
the population being represented, or about who chose not to respond to,
say, a banner advertisement.
0:43
The little research that has been done on nonresponse error in probability panels,
in contrast, suggests that nonrespondents
differ from respondents on variables such as race, ethnicity, and employment status.
So there may well be nonresponse error in probability panels
as a result of the recruitment process.
1:04
If we look only at response rates as opposed to nonresponse error,
the web generally has lower response rates than other modes.
So, in some sense this makes web surveys more vulnerable to nonresponse error.
But as we've discussed, it really depends on the differences between the respondents
and the nonrespondents, as well as the number of nonrespondents.
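To make that dependence concrete, here is a minimal sketch of the standard deterministic decomposition of nonresponse bias, in which the bias in the respondent mean is the nonrespondent share times the respondent-nonrespondent difference. The numbers are purely hypothetical, not taken from any study mentioned in this lecture.

```python
# A minimal, hypothetical sketch of the deterministic view of nonresponse bias:
#   bias(ybar_r) = (n_nr / n) * (ybar_r - ybar_nr)
# i.e., the nonrespondent share times the respondent/nonrespondent difference.
# None of these numbers come from the studies discussed in this lecture.

def nonresponse_bias(resp_mean, nonresp_mean, response_rate):
    """Bias of the respondent mean relative to the full-sample mean."""
    nonresponse_share = 1.0 - response_rate
    return nonresponse_share * (resp_mean - nonresp_mean)

# Respondents average 0.62 on some item, nonrespondents 0.50, response rate 20%:
print(f"{nonresponse_bias(0.62, 0.50, 0.20):.3f}")  # prints 0.096

# If respondents and nonrespondents do not differ, the bias is zero
# even at a very low response rate:
print(f"{nonresponse_bias(0.62, 0.62, 0.05):.3f}")  # prints 0.000
```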
The evidence on response rates alone comes from a number of meta-analyses.
Lozar Manfreda and her colleagues looked at 45 studies and
found that web survey response rates were 11% lower than those for mail questionnaires,
faxed questionnaires, email, and a number of more conventional modes.
1:51
Shih and Fan conducted a meta-analysis of 39 studies and
found that response rates for the web were also 11% lower, in that case compared just to mail.
There's considerable variation in response rates across individual studies.
In some cases, web surveys actually
produce higher response rates than comparable surveys in other modes.
At this point that's kind of atheoretical; there's no good explanation for it.
But the general trend is for
response rates to be lower in online data collection than in other modes.
When it comes to probability panels, there is, as I've mentioned, a kind of
cumulative response rate, or nonresponse rate,
that has to do with the fact that there are many opportunities not to respond.
The panel is initially recruited through, as we've discussed, a mix of random
digit dialing and address-based sampling, and that produces a response rate.
In the data reported by Lee, it turns out that 36% of
the attempted contacts end up recruiting someone, or a household, into the panel.
3:02
Then the panel members without internet access are provided with a device,
and when these data were collected, that device was a WebTV.
Only about 67% of those sample members
were successfully connected to the internet through the device.
So you can see that the percentage of respondents drops as we
go through additional steps in this process.
3:29
Then the sample members are required to complete a profile, and almost everybody
does complete the profile, but after doing that only 47% remain in the panel.
That is, remain active in the panel.
And then, when the first survey is administered, the completion rate is 57.4%,
and that 57.4% applies on top of all the earlier fractions of the original frame.
So the final response rate is only 5.5%.
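Here is a minimal sketch of how that cumulative rate arises: each stage's rate gets multiplied by the ones before it. The stage rates below are the ones quoted above; the profile-completion rate ("almost everybody") isn't given exactly, so it's omitted and the product comes out slightly above the 5.5% figure.

```python
# Sketch of a cumulative response rate: the product of the stage-level rates.
# Stage rates are those quoted above for the Lee data; the profile-completion
# rate ("almost everybody") is not quoted exactly, so it is omitted here and
# the product lands a bit above the 5.5% final figure cited in the lecture.

stages = [
    ("recruited into the panel", 0.36),
    ("connected to the internet via the provided device", 0.67),
    ("remained active in the panel", 0.47),
    ("completed the first survey", 0.574),
]

cumulative = 1.0
for name, rate in stages:
    cumulative *= rate
    print(f"{name:<50} stage rate {rate:6.1%}   cumulative {cumulative:6.1%}")

# Each extra step multiplies in another fraction, which is why the cumulative
# rate ends up far below any individual stage's rate.
```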
Again, if the nonrespondents
resemble the respondents on various attributes, and would have answered
the questions as the respondents answered them, this is not a problem.
But with only 5.5% of the initial frame ultimately providing survey data,
the nonresponse error is a real concern.
Response rates in other types of web surveys include pop-up surveys,
where response rates, at least in the early days, were actually higher than
in other types of non-probability web surveys:
24% and 15% in two early evaluations of pop-up surveys.
In a pop-up survey, the invitation appears in a window
that's separate from the primary browser window the user is viewing.
Banner-advertised surveys, where there's usually a banner across
the top of the web page advertising the survey,
produce very low click-through rates,
under 1% in two studies that were done when this approach was common.
5:33
What helps? Certainly the authority of the sender or sponsor matters:
if they're familiar, or if they seem authoritative, that helps. If a reminder is sent
after an initial invitation, that helps as well [INAUDIBLE] this is true in many modes.
Incentives help, and prepaid incentives, which are much harder to deliver for
an online study, help more than promised incentives, just like in other modes.
And questionnaire length matters.
And the length is usually mentioned in the invitations.
So the longer the questionnaire, the lower the response rate generally.
And attributes of the sample members matter as well:
gender, personality, whether the topic is of interest,
attitudes toward survey research,
and experience taking part in surveys all affect participation,
much as they do in surveys in other modes.
The final topic I want to mention on nonresponse in web surveys
concerns breakoffs, or abandonment.
This refers to a situation in which a sample member
starts to complete the questionnaire but then terminates somewhere before the end.
In fact, this is most common on the very first page of the web questionnaire,
what's called the splash page:
the sample member arrives there and decides this is not for me,
or for whatever reason doesn't go beyond that first page.
Once the sample member begins the questionnaire, you can think of breakoffs
as falling somewhere between item nonresponse, providing data with some items missing,
and unit nonresponse, not providing any data at all.
So if the respondent provides data for,
say, the first ten questions and no data after that, it's not exactly
the same as data missing intermittently throughout the questionnaire.
It's an abandonment, and as I said, it's generally treated as having elements of both:
item missing data, or item nonresponse, and unit nonresponse, overall nonresponse.
Breakoffs are much more common in longer questionnaires, not surprisingly,
and on items that are difficult or more mentally demanding.
Incentives can reduce breakoffs or delay them,
that is, lead to completion of more items, or
a higher percentage of the items, than without an incentive.
8:04
Progress indicators, which we'll talk about more later, are another factor.
The primary reason designers use them is to reduce breakoffs,
the idea being that if you provide respondents with information about how
much of the task they have completed, and
how much of the task remains, this will help them.
Information is power.
But in fact, it turns out progress indicators may increase breakoffs.
It's a complicated story, it's not that simple, but the evidence so
far, and it continues to build, is that progress indicators, while they may be
sensible on intuitive grounds, actually harm completion rates.
They increase breakoffs.