In this video, we're going to use NetLogo to explore the different settings of the Emperor's Dilemma model to understand how a behavior can spread through a network even if nobody in the network, or very few people in it, actually likes that behavior. As with the other NetLogo models, we have a number of common elements: buttons to set up the game board and run the game, sliders for setting the model parameters, and plots and displays that help us understand what's happening as the model runs. In this model, what I want you to pay most attention to is the color of the cells on the board and this plot down here.

Previous models had discrete turtles (NetLogo's term for agents) that were nodes in a network. This model uses a slightly different technique. It's more like a chessboard, where each cell on the board represents a different person.

The first situation we're going to look at is one in which each agent's behavior is determined only by the behavior of the agents near it on the board, and where the people initially displaying the behavior, the unpopular norm that might spread, are all clustered together. You can see that even though these agents appear to be on different sides of the board, since the board actually wraps around in what's called a torus, they're actually next to each other. We're going to start with just one percent of the population displaying this unpopular behavior. We've got clustering turned on, so they're near each other, and we've got embeddedness turned on, so everybody responds only to the behavior of the agents local to them in the network. Then we run the model. As it runs, you can see that the behavior very quickly spreads, and we can use this chart down here to see which color corresponds to which kind of agent: red is a false enforcer.
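The torus wraparound mentioned above can be sketched in a few lines of Python. This is an illustrative helper, not the NetLogo model's own code: each cell's eight Moore neighbors are computed modulo the board size, so cells on opposite edges of the board are adjacent.

```python
def moore_neighbors(x, y, width, height):
    """Return the 8 Moore neighbors of cell (x, y) on a torus board:
    coordinates wrap around the edges via modular arithmetic."""
    return [((x + dx) % width, (y + dy) % height)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

# A cell in the top-left corner is adjacent to cells on the opposite
# edges, which is why agents on "different sides" of the board can
# actually be next to each other.
print((9, 9) in moore_neighbors(0, 0, 10, 10))  # True
```

This is why the cluster of yellow agents that looks split across the board edges is really one contiguous neighborhood.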
So what this shows us is that with clustering, embeddedness, and only one percent of true believers, the norm that's unpopular with most agents very quickly spreads to the entire population. Not only do people start complying with that norm, they actually start enforcing it. That's what we call a cascade of behavior. Now, it's important with these kinds of models, since they involve some randomness, to run them a few times. So we can set up the board again, and it's going to randomly select a new set of agents. We can run it again, and with these parameter settings we get nearly the same outcome. Every time we run this model with clustering and embeddedness, the unpopular norm is going to take over nearly the entire population. The only reason this little green square held out is that the model is set to stop automatically after 100 rounds.

Now, let's see what happens if we turn off clustering. This time, when we set up the board, the agents who start as true believers, these yellow squares right here, instead of being near each other, are scattered all over the place. So what happens when we run the game? Very little. You can see that a few of the agents near those initial believers turn blue, which means they are false believers giving in to the pressure of their neighbors. For the most part, though, agents stay green, which means they're sticking with their true disbelief in the norm. And if we run it again, we get the same outcome. Now, every now and then, if we run it enough times, even with no clustering we will see the norm spread throughout the population. But it can take a few tries, which is why it's important to run these models multiple times. Here, you can see there are two yellow agents that start out very close together, so by random chance we get a little bit of that clustering effect. Let's see if it's enough to spread the unpopular norm. And look at that, it is.
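The difference between clustered and scattered seeding can be made concrete with a small Python sketch. This is a stylized illustration under assumed parameters (a 100x100 board with 1 percent believers), not the model's actual setup code: with clustering, every initial believer has believing neighbors to back it up; scattered at random, very few do.

```python
import random

def seed_believers(width, height, n, clustered, rng):
    """Place n true believers on a width x height torus, either in a
    contiguous block (clustered) or uniformly at random (scattered)."""
    if clustered:
        side = int(n ** 0.5) + 1
        cells = [(x, y) for x in range(side) for y in range(side)]
        return set(cells[:n])
    all_cells = [(x, y) for x in range(width) for y in range(height)]
    return set(rng.sample(all_cells, n))

def frac_with_believing_neighbor(believers, width, height):
    """Fraction of believers with at least one fellow believer among
    their 8 Moore neighbors on the torus."""
    def neighbors(x, y):
        return {((x + dx) % width, (y + dy) % height)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)}
    return sum(bool(neighbors(x, y) & believers)
               for x, y in believers) / len(believers)

rng = random.Random(0)
clustered = seed_believers(100, 100, 100, True, rng)   # 1% of 10,000
scattered = seed_believers(100, 100, 100, False, rng)
print(frac_with_believing_neighbor(clustered, 100, 100))  # 1.0
print(frac_with_believing_neighbor(scattered, 100, 100))  # much lower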
That helps us understand that clustering among the initial true believers is a very important component of spreading an unpopular norm. So now let's take a look at what happens in this unclustered world if we increase the percent of true believers from one percent to, say, three percent. Now we should be much more likely to see cascades of behavior every time we run the model, because we've increased the density of true believers, so those initial clusters are far more likely to form by chance.

Throughout these models so far, we've been assuming that every agent responds only to the behavior of its immediate neighbors. That's controlled by this embeddedness parameter right here. So let's take a look at what happens if we turn that embeddedness parameter off. Now, even though the agents are laid out on the board like this, they're actually responding to the behavior of the entire population. This is what we call a fully connected network. Even though everything else is the same as before, nothing really happens. You see a little bit of take-off among some false believers who very easily give in to the pressure of enforcement, but the norm doesn't really spread. We can try it another time just to see that we get the same outcome. Now, what happens when we raise the percent of true believers to, say, 10 percent? Maybe 10 percent will be enough to spread the norm. Look at all those true believers. Still, nothing happens. So let's raise the percent of true believers all the way up to 40 percent and see what happens this time. Now you can see nearly half the world starts off as true believers. As you might expect, the norm very quickly takes over the population, and even the people who don't really believe in the norm give in to social pressure: they not only comply but actually enforce it on their neighbors.
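One way to see why the fully connected case stalls is a simple threshold rule. This is a stylized stand-in for the model's actual compliance rule, with an assumed 50 percent threshold, not NetLogo's code: an agent gives in when the share of enforcers in its reference group crosses the threshold. Locally, a handful of enforcers can dominate an 8-cell neighborhood; in a fully connected network, the same handful is a rounding error.

```python
def complies(enforcers_in_group, group_size, threshold=0.5):
    """Stylized compliance rule: give in to the norm when the share of
    enforcers in the agent's reference group exceeds the threshold."""
    return enforcers_in_group / group_size > threshold

# Embedded: the agent only sees its 8 Moore neighbors, so 5 enforcing
# neighbors form a local majority and the agent gives in.
print(complies(5, 8))         # True

# Fully connected: the reference group is the whole 10,000-cell board,
# so 100 enforcers (1%) are nowhere near the threshold.
print(complies(100, 10_000))  # False
```

This is why embeddedness matters: local influence lets a small cluster create overwhelming pressure in its own neighborhood, while global influence dilutes that pressure across the whole population.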
So you might think that as we keep increasing the percentage of true believers from the outset, we get that same outcome. Let's take a look at what happens if we raise the percent of true believers all the way up to 90 percent. Now we're starting off with a world where almost everybody truly believes in the norm. If we run the model, surprisingly, nothing happens, although there is a little bit of activity at first. In the end, we see that zero percent of the true disbelievers are enforcing the norm. The reason is that the initial true believers aren't surrounded by enough disbelievers to feel that they have to enforce the norm. So this shows how social behavior can sometimes have unexpected outcomes: even as we raise the percent of true believers, the percent of false believers who enforce the norm actually goes down. And this is why it's important to use computational models and tools such as NetLogo to carefully think through social behaviors.
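That explanation can be made concrete with a back-of-the-envelope calculation. This is a rough mean-field illustration under an assumed rule, not the model's exact mechanics: suppose a true believer only feels pressed to enforce when a majority of its 8 neighbors are disbelievers, and treat each neighbor as independently believing with the seeded probability. The chance of facing that majority collapses as the believer fraction rises.

```python
from math import comb

def p_majority_disbelieving(believer_frac, k=8):
    """Probability that more than half of an agent's k neighbors are
    disbelievers, treating each neighbor as an independent draw with
    the given believer fraction (a rough mean-field approximation)."""
    p_dis = 1 - believer_frac
    return sum(comb(k, j) * p_dis**j * (1 - p_dis)**(k - j)
               for j in range(k // 2 + 1, k + 1))

# At 1% believers, nearly every believer faces a wall of disbelieving
# neighbors and so has someone to enforce against; at 90% believers,
# almost none do, so enforcement pressure never gets started.
print(p_majority_disbelieving(0.01))  # close to 1
print(p_majority_disbelieving(0.90))  # close to 0
```

Under this toy rule, raising the believer fraction removes the very friction that drives enforcement, which is consistent with the surprising drop in false enforcement seen in the model.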