0:05
Anderson Smith again,
talking about descriptive research methods in psychology.
And we're talking about surveys in this section: surveys, questionnaires, and
polls.
And we're talking about the quality of the questionnaire, the survey, the poll.
We talked about reliability, now we're going to talk about validity.
0:25
Reliability was the extent to which a survey is repeatable or
consistent across the different times it is given.
Validity means the survey is actually measuring what you want it to measure.
Is it accurate? Does it actually correspond to the issue or topic
you're looking at in the questionnaire?
0:45
And I gave you the example of a survey that can be reliable, like all
the darts hitting the same place on the target,
but not valid, because they don't hit the actual bullseye of the target.
So a survey can be reliable when it's not valid, and
it can be valid when it's not reliable.
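To make the dart-board picture concrete, here is a minimal simulation sketch, my own illustration rather than anything from the lecture, assuming NumPy: a measure that repeats consistently (reliable) while tracking something other than the construct (not valid).

```python
import numpy as np

rng = np.random.default_rng(0)
true_score = rng.normal(50, 10, 500)  # the construct we actually care about

# Reliable but NOT valid: the measure is consistent across administrations,
# but it tracks an unrelated trait plus a constant bias, like darts
# clustered tightly in the wrong spot on the board.
unrelated = rng.normal(0, 10, 500)
time1 = unrelated + 20 + rng.normal(0, 1, 500)
time2 = unrelated + 20 + rng.normal(0, 1, 500)

reliability = np.corrcoef(time1, time2)[0, 1]    # high: repeatable
validity = np.corrcoef(time1, true_score)[0, 1]  # near zero: off target

print(f"test-retest reliability ~ {reliability:.2f}")
print(f"validity (r with construct) ~ {validity:.2f}")
```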
1:07
So a survey to assess, for example, the counseling skills of high school
counselors might be reliable if it measures things consistently.
But it might not be valid if it only looks at mental health knowledge
and doesn't measure an important part of counseling,
which is the ability to communicate with high school students.
Communication skills are just as important as actual content knowledge.
And if a survey doesn't measure that, it wouldn't be a good,
valid survey instrument for assessing the counseling skills needed
to deal with high school students.
1:59
Two of those, face validity and content validity, are
two different measures of translation validity, a form of construct validity.
Then there's criterion-based validity, where we look at how what we are
measuring relates to some subsequent behavior.
For example, does it predict what we want it to predict?
Is what we're measuring in a survey or
poll actually predictive of what we want it to predict?
And concurrent validity: does it accurately measure, not something
in the future, but what's happening right now?
2:40
And then convergent validity, which is the degree to which the survey agrees
with other instruments that are used to measure the particular construct I'm looking at.
And discriminant validity.
Does it measure correctly what I'm interested in and
not measure other things that I'm not interested in?
So all of these become important measures of validity,
both construct validity and criterion-based validity.
So let's look at that.
3:19
Construct validity asks whether the questions represent a good operational
definition of the construct.
That is, is the construct actually measured by these particular questions?
And as I mentioned, that includes both face validity and content validity.
Face validity is whether the questions look like they should be measuring that construct.
The person filling out the survey can tell that the questions they're answering
represent this particular construct, this particular issue.
That's face validity, but you may not always want face validity,
where the measure obviously looks like the construct.
You might want a covert measure rather than an overt measure,
one where the person taking the survey doesn't realize what is being looked at.
We might want an implicit measure that still gives us
a good translation of the issue,
but without the person taking the survey understanding it.
So sometimes we want implicit measures rather than explicit measures.
A good way of looking at that is when you deal with more controversial issues,
like prejudice.
We might want an implicit measure of prejudice, where the person doesn't
quite know that we're actually measuring prejudice,
rather than an explicit measure, where the person knows what we're looking at.
Often there's an attitude-behavior inconsistency when we use explicit
measures.
For example, a survey was done many years ago, when Americans had a great
deal of prejudice against Asian people, particularly Chinese and Japanese immigrants.
The researcher sent a survey out and asked hotels and
restaurants, would you serve a Chinese couple if they came into your establishment?
And the overwhelming survey response was: absolutely not.
That answer represented the prejudice this country had back then, nearly a hundred years ago.
But then the surveyor actually took a Chinese couple and
went around to the hotels and restaurants that they had surveyed.
And only one place actually refused service to the couple.
So there was an inconsistency between what they said and what they did.
That is, they said what they thought people wanted to hear, but
that did not show up in their later behavior.
So we really need implicit measures there,
rather than explicit measures,
because sometimes explicit measures just capture what people
think they are expected to say.
5:43
So content validity is where the questions cover the agreed-upon
content range of the construct, and it's not easy to fully cover a construct.
Constructs are very, very complex, and sometimes we have a problem coming up with
questions that cover all aspects of the construct, as I mentioned, for
example, in measuring high school counselors' abilities.
Also, attitudes and beliefs cover a wide range of different experiences,
of conscious and unconscious influences, of different contexts.
What I say may depend upon the context in which a question is actually placed.
So content validity is a very important type of construct validity.
Criterion-based validity is about how we predict things.
How will the measures actually perform?
An example of that would be the attitude-behavior inconsistency we saw
with the implicit versus explicit kinds of surveys:
the explicit survey didn't really predict the behavior.
So predictive validity is one type of criterion-based validity.
We also have concurrent, convergent, and discriminant validity,
other ways of measuring
the extent to which the survey will be useful.
6:58
Predictive validity simply means the questions predict what the construct
says they should predict about behavior.
So do positive attitudes about getting news from newspapers translate into
newspaper subscriptions?
That's what I'm looking at: does my attitude about newspapers predict whether
I'll subscribe to one or not?
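A rough sketch of how that predictive check might look, using hypothetical data and SciPy's point-biserial correlation; this is my illustration, not part of the lecture:

```python
import numpy as np
from scipy import stats

# Hypothetical data: attitude-toward-newspapers score (1-7 scale)
# and whether the respondent later subscribed (1) or not (0).
attitude = np.array([2, 5, 6, 3, 7, 4, 6, 1, 5, 7, 2, 6])
subscribed = np.array([0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1])

# Point-biserial correlation between the survey score and the later
# behavior serves as a predictive validity coefficient.
r, p = stats.pointbiserialr(subscribed, attitude)
print(f"predictive validity r = {r:.2f} (p = {p:.3f})")
```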
7:17
Concurrent validity asks: does the survey measure the same thing as other
existing instruments that are measuring it now?
So, for example,
does my political candidate poll correlate with other existing polls?
Right now, at this point in history in the United States,
we're going through a political election, and the problem is that
polls differ greatly in the extent to which they measure the same thing.
If you run a poll online, where anybody can get on and
vote, it tends to be very skewed toward
a particular candidate whose campaign has a program for getting people
to vote in that online poll.
That is not a good sampling procedure, so
the survey can be skewed, and it won't really correlate with the much more
scientifically based sampling in other polls that we see.
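One informal way to check concurrent validity, sketched here with hypothetical numbers (not from the lecture, and assuming NumPy), is to correlate my poll's estimates with an established poll's estimates over the same regions:

```python
import numpy as np

# Hypothetical candidate-support estimates (%) from my new poll and from
# an established, scientifically sampled poll, in the same ten regions.
my_poll = np.array([48, 52, 45, 60, 39, 55, 47, 51, 44, 58])
established = np.array([47, 53, 44, 58, 41, 54, 49, 50, 45, 57])

# High agreement with the existing instrument, measured right now,
# is evidence of concurrent validity.
r = np.corrcoef(my_poll, established)[0, 1]
print(f"concurrent validity r = {r:.2f}")
```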
We also have convergent validity.
Are the constructs related as they are expected to be related?
Or, in measuring a construct,
does the measure capture all aspects of the construct?
So, for example, if you try to rate people's attitudes about some
product in the marketplace, it might involve their overall attitude, the cost
of the product, the visibility of the product, whether they already know
about it, whether the product is on the market.
It really involves a lot of different things to determine your attitude toward
that particular product, and so are we really measuring the same thing?
Do opinions on these features really show the same kind
of opinion about the product itself?
It might be that cost is the overriding factor,
and a survey that doesn't measure cost but measures all those other things
won't really give you a good picture of the opinion about the product;
it is not valid, because it hasn't measured what it really should measure
to get that opinion.
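To illustrate the convergent side of this, here is my own sketch, not from the lecture, with simulated respondents and assuming NumPy: ratings of features that are supposed to reflect the same underlying product attitude should correlate highly with each other.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical respondents: one underlying product attitude drives
# the ratings of several features meant to reflect that attitude.
attitude = rng.normal(0, 1, n)
value_rating = attitude + rng.normal(0, 0.5, n)     # perceived value for cost
quality_rating = attitude + rng.normal(0, 0.5, n)   # perceived quality
overall_opinion = attitude + rng.normal(0, 0.5, n)  # overall product opinion

# Convergent validity: measures of the same construct correlate highly.
print(np.round(np.corrcoef([value_rating, quality_rating, overall_opinion]), 2))
```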
8:56
There's also discriminant, or divergent, validity,
which says that the measure correlates with some constructs and not with others,
or that what it does measure is consistent across
different samples in the same situation.
For example, a study done by Cable et al. in 2002
looked at the fit between the individual employee and
the organization they were in, which relates to job satisfaction.
And what they found is that young employees
looked at higher pay as the primary factor
in determining their fit with the organization,
whether they felt valued, their satisfaction with the job.
Older employees were more interested in
the benefits that were provided and how extensive those benefits were, and
less concerned about pay increases and overall pay.
So here's a case where I needed to include age in my survey, or
I wouldn't get a good, valid representation of
what determines job satisfaction and person-organization fit.
We really need to make sure that we have good discriminant validity,
and we only get that if we have the right things in the questionnaire that
allow us to make the correct discriminations.
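As a rough illustration of the discriminant logic, again my own sketch with made-up constructs and simulated data, assuming NumPy: items written for the same construct, say the importance of pay, should correlate more with each other than with items written for a different construct, like the importance of benefits.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300

# Two distinct constructs, each measured with two items (hypothetical data).
pay_importance = rng.normal(0, 1, n)      # how much pay drives fit
benefit_importance = rng.normal(0, 1, n)  # how much benefits drive fit

pay_q1 = pay_importance + rng.normal(0, 0.4, n)
pay_q2 = pay_importance + rng.normal(0, 0.4, n)
ben_q1 = benefit_importance + rng.normal(0, 0.4, n)
ben_q2 = benefit_importance + rng.normal(0, 0.4, n)

# Discriminant validity: same-construct correlations (pay_q1 ~ pay_q2,
# ben_q1 ~ ben_q2) should be high; cross-construct correlations low.
r = np.corrcoef([pay_q1, pay_q2, ben_q1, ben_q2])
print(np.round(r, 2))
```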