These are the results from a study that we ran. We fit a regression-based model for two related decisions. One decision was incidence: am I going to post at all? The groups are ordered by their intercept values, so the low-involvement group has the lowest intercept in the incidence model, and the activist group, as we're terming them, has the highest. The incidence model is simply a binary model: yes or no, does the individual post a review for the product? We also modeled the review itself as an ordinal regression: how many stars was it given? The only restriction we imposed on these results is on that intercept: low-involvement users are less likely to post than moderates, who are less likely to post than activists.

What's interesting here is two things. First, there's a pattern between the intercepts in the incidence model and the evaluation model. Notice that the more frequently people post, the lower the evaluations they tend to give. Your hard-core posters post a lot, and they're the most critical. The low-involvement folks, the ones who don't post all that frequently, also tend to be the most generous in their evaluations.

The second thing I want to draw attention to is how people react to dissension in what's been said previously. This is in the context of product reviews, and we operationalized dissension, disagreement in the reviews, using a measure of variance. The more disagreement there is, the less likely these low-involvement users are to post; the more disagreement there is, the more likely the activists are to post. They actually thrive on that disagreement. How do they react in their evaluations? The low-involvement users, the ones who are less likely to post, bump their evaluations upward when they do post. In contrast, if there's disagreement, activists differentiate themselves by making their evaluations even lower. In either case, we don't actually get a read on the unbiased opinion that a person holds; it's being affected by previously posted ratings.

So for consumers, what are the implications of this? Well, the opinion that we form is going to differ from what we see in social media content, because there's a filtering stage going on. Everyone forms an opinion; not everyone chooses to express it. As consumers, we only observe the comments from those people who have chosen to express an opinion. Those expressed opinions can be based on prior expectations and on that individual's experiences, but they're also going to be affected by what people have said previously. We need to keep that in mind when we're looking at user reviews and social media comments as a source of information to inform our own decision making. If I turn to user reviews and social media comments to help make my own purchase decisions, what if they don't align with my own perceptions? I made a decision based on what I read online, now I have my own hands-on experience, and that disconnect leads to an unsatisfied consumer. So it's in everyone's best interest for consumers to have access to quality information that isn't subject to the same biases, or that, at the very least, takes those biases into account to the extent possible.
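To make the two-part modeling setup concrete, here's a minimal sketch of how a specification like this could be fit. This is my own illustration, not the study's actual code: the segment labels, column names, and simulated effect sizes are all hypothetical, chosen only to mirror the patterns described above, with a binary logit for the incidence decision and an ordinal regression for the star rating, and the variance of previously posted ratings standing in for dissension.

```python
# Hedged sketch: two-part model of posting behavior on simulated data.
# Incidence = binary logit (post or not); evaluation = ordinal regression
# on stars, fit only on those who posted. All numbers are made up.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 5000
seg = pd.Series(rng.choice(["low_involvement", "moderate", "activist"], size=n))

# Dissension: variance of previously posted ratings (simulated here).
prior_var = rng.uniform(0.0, 2.0, size=n)

# Simulate incidence: intercepts ordered across segments, and opposite
# reactions to disagreement for low-involvement users vs. activists.
intercept = seg.map({"low_involvement": -1.5, "moderate": -0.5,
                     "activist": 0.5}).to_numpy()
slope = seg.map({"low_involvement": -0.6, "moderate": 0.0,
                 "activist": 0.6}).to_numpy()
posted = rng.binomial(1, 1 / (1 + np.exp(-(intercept + slope * prior_var))))

# Simulate evaluations: frequent posters are more critical, and disagreement
# pushes low-involvement ratings up and activist ratings down.
base = seg.map({"low_involvement": 4.3, "moderate": 3.7,
                "activist": 3.1}).to_numpy()
shift = seg.map({"low_involvement": 0.3, "moderate": 0.0,
                 "activist": -0.4}).to_numpy()
stars = np.clip(np.round(base + shift * prior_var + rng.normal(0, 0.7, n)), 1, 5)

df = pd.DataFrame({"segment": seg, "prior_var": prior_var,
                   "posted": posted, "stars": stars.astype(int)})
X = pd.get_dummies(df["segment"], drop_first=True).astype(float)
X["prior_var"] = df["prior_var"]

# Incidence model: binary logit on whether a review was posted.
incidence = sm.Logit(df["posted"], sm.add_constant(X)).fit(disp=False)
print(incidence.params)

# Evaluation model: ordinal (proportional-odds) regression on the rating,
# conditional on posting. OrderedModel estimates its own cut points, so no
# constant is added to the design matrix.
who = df["posted"] == 1
evaluation = OrderedModel(df.loc[who, "stars"], X.loc[who],
                          distr="logit").fit(method="bfgs", disp=False)
print(evaluation.params)
```

Note that the sketch keeps only main effects for brevity; capturing the segment-specific reactions to disagreement described above would require interaction terms between the segment dummies and the variance measure.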
What does it mean for companies? We've talked about the potential to use social media for monitoring purposes, to inform marketing activity, and to potentially inform product design decisions. There's value in using social media data, but it's something we've got to exercise some caution in doing. There's a difference between the people you hear online and your customer base. If you're making decisions based on what a vocal minority is saying online, you'd better make sure that what they're looking for aligns with your broader customer base. If not, you're making a decision to satisfy a minority group at the risk of alienating the majority of your customers.

What you're going to hear depends on where you do your social media listening. Again, blogs, discussion forums, and microblogs are all going to draw different content and have different sentiments expressed. So when you're doing your monitoring, you want to make sure that you're sourcing your comments from as broad a set of sources as possible. This is something that we'll look at in the next component, when I walk you through using Crimson Hexagon as a listening platform.

Why are people engaging in posting behavior? People have a lot of different motivations, so we want to figure out what's driving them to post and then how we can engage with those users. As I said already, when taking actions, you've got to be careful not to act solely in reaction to what you see in social media. What you're seeing on social media may give you an early read: people are talking about this; is it something my broader customer base cares about? If so, maybe I need to take action. But just because you see something on social media, that alone shouldn't be the reason for a company to take action.

This has been a common theme throughout: we don't want to focus on absolute levels. We want to focus on setting baselines and on deviations from what's normal. There's baseline sentiment, there's baseline volume, and there's also a set of dynamics in posting behavior that are normal. We'll take a look in a little bit at the typical dynamics when it comes to product ratings; we see the same thing play out with social media comments and other forms of user-generated content.

So let's dive into that question: which of these looks like the most likely trend when it comes to product ratings? How do ratings change over time? One possibility is that they go up. It's also possible that they stay flat, and it's also possible that they decline over time. Now, if we're talking about a service, let's say service delivered at a restaurant, the service itself may change over time, and the quality of the food itself may change over time. But let's restrict ourselves for the time being to a product whose quality is fixed: it's always the same product. We're going to see why this negative trend is actually the most predominant dynamic, and I'll give you one potential explanation for why that's the case.

I pulled this off of Amazon; it's the reviews for a particular book, The Black Swan. You can see the distribution of the reviews: 533 reviews, almost half of them giving it five stars. The majority of reviews are above three stars; on average, it looks like it's got about 3.5 stars, with a couple of one- and two-star reviews. Well, let's filter these reviews and look at just a subset of them.
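As a concrete illustration of the baseline idea, here's a minimal sketch, under the assumption that your listening platform has already aggregated posts into daily volume and sentiment metrics. The function and column names are hypothetical; the point is simply to flag deviations from a rolling baseline rather than react to absolute levels.

```python
# Hedged sketch: flag days whose posting volume or sentiment deviates
# sharply from a rolling baseline, instead of reacting to absolute levels.
import pandas as pd

def flag_deviations(daily: pd.DataFrame, window: int = 28,
                    z_thresh: float = 3.0) -> pd.DataFrame:
    """daily: DataFrame indexed by date with 'volume' and 'sentiment'
    columns (hypothetical names). Returns z-scores and alert flags."""
    out = daily.copy()
    for col in ("volume", "sentiment"):
        # Baseline built from *prior* days only, so an unusual day
        # doesn't dilute its own alert.
        baseline = daily[col].shift(1).rolling(window, min_periods=window // 2)
        z = (daily[col] - baseline.mean()) / baseline.std()
        out[f"{col}_z"] = z
        out[f"{col}_alert"] = z.abs() > z_thresh
    return out

# Usage (daily_metrics is assumed to exist):
# alerts = flag_deviations(daily_metrics)
# print(alerts[alerts["volume_alert"] | alerts["sentiment_alert"]])
```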
This is one of the first reviews that came in for this book. With Amazon, we can tell that this person actually purchased the book, and they're putting their real name behind the comment. We can see that this particular reviewer thinks it's a very good book and goes into detail about why he likes it. Another reviewer, a top-50 reviewer using his real name, or at least part of it, again offers high praise for the book and for the people who should read it. So these early reviews came in very positive.

Now, we get to a later review. The writing in this later review is much more critical than in the first two, and he gives it only three stars. Let me show you one more of these later reviews. So we start to see a distinction. Is there a systematic pattern: early reviews positive, later reviews negative? I've pulled out just one example to illustrate this, but what might be contributing to the pattern? Taking a little bit broader scope, the old reviews are almost systematically positive. The most recent reviews for the book, when pulled, are not only lower overall, more negative, but also show much more variation.

Why might we care about these dynamics? First, from the standpoint of consumers. Consumers look to reviews and social media content as a source of information. If that's the case, what information should platforms be presenting to them? Online reviews matter; they end up driving consumer behavior, and more broadly, the set of information I'm exposed to shapes how I think.

One of the trends we observe, and it goes back to the low-involvement users and activists we mentioned earlier, is that over time, as more reviews come in, variance goes up and average ratings go down. Why might that happen? Let's look at the profiles of those activists and low-involvement users. Activists post frequently, they're more prone to post when there's no consensus, and they tend to be more negative. Low-involvement users don't post all that much, and they post even less frequently when there's a lack of consensus. So what happens when those first reviews start coming in? Activists are more prone to be the ones posting. Over time, as the level of disagreement increases, activists become overrepresented in the posting population, and as they do, they pull down the average ratings. That's just a general dynamic, and it's something we want to keep in mind.
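To see how this mechanism plays out, here's a small toy simulation of my own; it is not from the study, and every parameter is made up for illustration. Two segments post ratings, and as the variance of posted ratings grows, the critical activists post more while the generous low-involvement users post less, so the activists' share of the posting population rises and the running average falls.

```python
# Hedged sketch: toy simulation (not from the study) of how activist
# overrepresentation can drag average ratings down as disagreement grows.
import numpy as np

rng = np.random.default_rng(42)
ratings = []

for t in range(500):
    dissension = np.var(ratings) if len(ratings) > 1 else 0.0
    # Made-up propensities: activists post more under disagreement,
    # low-involvement users post less.
    p_activist = min(1.0, 0.3 + 0.3 * dissension)
    p_low = max(0.0, 0.3 - 0.3 * dissension)
    if rng.random() < p_activist / (p_activist + p_low + 1e-9):
        ratings.append(np.clip(rng.normal(2.8, 1.0), 1, 5))  # critical activist
    else:
        ratings.append(np.clip(rng.normal(4.3, 0.6), 1, 5))  # generous low-involvement

running_mean = np.cumsum(ratings) / np.arange(1, len(ratings) + 1)
print(f"average of first 50 ratings: {running_mean[49]:.2f}")
print(f"average of all 500 ratings:  {running_mean[-1]:.2f}")
```

Under these made-up numbers, the running average starts near the midpoint of the two segments and drifts toward the activists' mean as their posting share grows, mirroring the declining-ratings pattern described above.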