Design is conceiving and giving form to artifacts that solve problems; in other words, closing the gap. In our design process, we've worked pretty hard to develop a plan, that is, to develop a design concept. But, really, the question is: are we actually closing the gap?
One of the problems is that there's typically some drift from where the gap and the actual user experience are to what we're actually building. If you think about it conceptually, the user may have some ideal point for where they would like the artifact to be. This is what they say they want, this is what we hear them say they want, and this is what we actually build. And there can be quite a bit of drift from where they started, that is, what they actually want, to what we actually deliver.
Our goal in concept testing is to discover that gap sooner rather than later; that is, to discover it while we're still defining the concept, rather than after we've actually delivered the artifact.
As I said when we discussed concept selection, if you're designing the artifact for yourself or for your own personal purposes, you don't really need to do any concept testing. You just go with the concept you're most enthusiastic about, the one you actually want to pursue. But most of us design in some kind of institutional context, often in the context of a company, and we're quite concerned with questions like: will users buy the design? Will they adopt the design? Do they prefer the design? Those are the settings in which concept testing is most important.
One way to do that is with what's called a concept test survey. I'm just going to dive right in and show you one, because I think it's easier to illustrate with specifics. So, here's a kind of concept test called the purchase intent survey. The idea behind a purchase intent survey is that you sample a collection of users who are representative of a target audience, and you expose them to the design concept with some kind of description. Then you simply ask the question: how likely would they be to buy or to select the product?
And so here is shown an ice cream scoop, illustrated with just a black and white line drawing, and then some additional description: that it has an ergonomic grip, that it has a heavy aluminum scoop that stays warm for easy scooping, that it has colourful handle options, and that the scoop edge is contoured to slice through ice cream. The survey question itself simply asks: if you were choosing an ice-cream scoop, how likely would you be to buy this scoop if it were available at a store where you normally shop for similar items?
And the intention scale that's used is almost always this 5-point scale, which starts with "definitely would not buy," then goes to "probably would not buy," "might or might not buy," "probably would buy," and "definitely would buy."
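As a quick sketch of how these responses get tabulated, here is a minimal Python example; the response data are made up purely for illustration:

```python
from collections import Counter

# The standard 5-point purchase intent scale described above.
SCALE = [
    "definitely would not buy",
    "probably would not buy",
    "might or might not buy",
    "probably would buy",
    "definitely would buy",
]

# Hypothetical responses from 10 survey respondents (illustrative only).
responses = [
    "might or might not buy",
    "definitely would buy",
    "probably would buy",
    "definitely would buy",
    "probably would not buy",
    "definitely would buy",
    "might or might not buy",
    "probably would buy",
    "definitely would not buy",
    "probably would buy",
]

counts = Counter(responses)
# The "top box" is the strongest response; its share is used later in forecasting.
top_box_fraction = counts["definitely would buy"] / len(responses)
print(counts)
print(f"top-box fraction: {top_box_fraction:.0%}")  # 3 of 10 -> 30%
```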
And you simply ask the target user to pick one of those choices, and from that you infer how likely they would be to actually select the product. Now, you probably immediately see some of the problems with a concept test like this.
First of all, one of the problems is how accurate the description is in communicating the essence of the design concept. Here we have an illustration and some bullet points that try to describe the concept. But if you remember the scoop that's represented here, a lot of its appeal is kinaesthetic; it relates to the details of its appearance and to the way it feels. This line drawing is probably not so good at communicating what this concept is all about. This photograph or rendering would certainly be better at doing that, but even it would not be as good as actually letting the user try a prototype or experience the finished product.
Of course, the trade-off here is that you'd like to get this information early in the design process, and, in general, you won't have finished production-intent prototypes until quite late in the process, well after this information would be useful. So this is typically the trade-off you face: getting the information early with a relatively less rich description of the artifact, versus getting it later, when you can describe the artifact accurately but after the information is less useful.
I would say, parenthetically, that the more novel the concept, and to some extent the newer the category of the artifact, the easier it is to describe the artifact with simply text or an illustration. For instance, if I were to describe a car rental service that meets you at the airport, at the airline gate as you get off the plane, I don't really need a drawing of that; the textual description is pretty easy to understand. On the other hand, for something like an ice cream scoop, where the category is quite mature and the differences among the various artifacts in the market are quite small, it's more important to have a model or a realistic representation of the artifact itself.
One of the other problems with an intent survey like this is that it asks for intention in isolation, without asking the user to consider the competitive alternatives, or the other choices that he or she might face. For that reason, sometimes a forced choice is a better way to do the concept testing.
A forced choice also lets you discriminate among several ideas or concepts that you might be considering. So here's a version of a concept test survey that follows the forced choice methodology. It simply asks: if you were choosing an ice cream scoop, which of the following would you be most likely to purchase? And it articulates three distinct solution concepts: the one I've already described; this one, which has a highly contoured handle with an ergonomic, rubberized grip; and then scoop C, which is quite distinct, with an electric powered heater to keep the scoop head warm.
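To sketch how forced-choice results are summarized, here is a minimal Python example with hypothetical counts; the "scoop A/B/C" labels are just illustrative names for the three concepts:

```python
# Hypothetical forced-choice results: each respondent picked exactly one scoop.
choices = {"scoop A": 18, "scoop B": 27, "scoop C": 15}

total = sum(choices.values())  # 60 respondents in this made-up sample
shares = {name: n / total for name, n in choices.items()}

for name, share in shares.items():
    print(f"{name}: {share:.0%}")
# scoop A: 30%, scoop B: 45%, scoop C: 25%
```

If one of the options were a dominant existing product already on the market, the shares of the remaining options would give a rough estimate of the fraction of the market the new concepts might capture.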
And then you force a choice; that is, you ask the respondent to select only one of the three options. Now, forced choice can be used with the concepts that are under consideration by the designer or the design team. It can also include concepts that are already available on the market, that is, the dominant existing solutions. That lets the team estimate what fraction of the market it would likely garner if it were to compete with the products or artifacts already available in the market. Indeed, that leads me to the last point that I want to make about concept testing.
Namely, the results from this kind of concept test can be used in forecasting. They're used in a very simple arithmetic model that has the following form: you estimate the forecast units sold per year, or per time period, as a function of four factors.
First, the total number of units sold in the entire category per year; that is, all the artifacts of the category that the new artifact falls into that are sold in the market in a year.
That is multiplied by the second factor, the fraction of the market that's actually aware of the new product and that has access to it. This might be the fraction of the market your product has distribution in, or a geographic fraction of the market. It is rarely 100%, because most new products are not fully available in the market, and it's quite rare to have the entire market aware of the new product.
The third factor is the fraction of respondents in your intent survey who actually selected the so-called top box. In purchase intent surveys, of these five boxes, in most cases you ignore all responses other than the top box, which is the "definitely would buy" box; sometimes you also include "probably would buy." That term in the equation is the fraction of respondents who selected the top box, so it might be 20%, or 5%, or 10%, or whatever it is.
And then the last factor accounts for the fact that respondents to surveys are typically over-optimistic: they overestimate the likelihood that they'll actually buy.
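The model itself is just the product of the four factors. A minimal sketch in Python, using the numbers from the worked example that follows:

```python
def forecast_units(category_units_per_year, awareness_and_access,
                   top_box_fraction, stated_vs_actual):
    """Simple multiplicative purchase-intent forecast: the product of
    category size, awareness/access, top-box share, and a calibration
    factor for stated versus actual behavior."""
    return (category_units_per_year
            * awareness_and_access
            * top_box_fraction
            * stated_vs_actual)

# Values from the ice cream scoop example below:
# 2.5M units/year in the category, 2% awareness/access,
# 30% top-box, 25% stated-vs-actual calibration.
units = forecast_units(2_500_000, 0.02, 0.30, 0.25)
print(f"{units:,.0f} units per year")  # 3,750 units per year
```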
So, let me just give a numerical example to wrap up. Let's imagine that there are 2.5 million units sold in the ice cream scoop category per year, and maybe we can just call this the U.S. market, to keep it simple. Then we look at the fraction of the U.S. market that's aware of the new product and has access to it. Let's assume that we're a small company that is just getting started and that we're in a small fraction of retailers, so we have awareness and access of only 2%. For the third factor, let's assume that 30% of those we've surveyed select the top box; that is, they indicate they would definitely buy. And lastly, we put in a factor that accounts for stated versus actual behavior, which is 25%. That means that if 100 people say they will buy, only 25 of them actually will. That's a typical value; it varies by product category, but for a relatively low-cost consumer good like an ice cream scoop, 25% or 30% would be a typical number used in these models. If you multiply that all out, you end up with an estimate, a forecast, for the new product of 3,750 units per year sold.
Now, these forecasts are notoriously inaccurate, but we've shown empirically that they are predictive of eventual sales. That is, they're correlated with what eventually happens in the marketplace, even though they're quite noisy; your actual sales, the actual results, are likely to vary quite a bit from the forecast. Having said that, it's really the best you can do, and it gives you some indication of how likely it is that people will buy, and whether, indeed, you've actually closed the gap as perceived by the user when exposed to your design concept.