Good afternoon. Glad to see you back.

Last time, we talked about static competition.

We talked about how firms compete with each

other when interaction lasts for only one period.

Today, we will talk about dynamic competition.

Competition that lasts for more than one period.

The main subject of our previous lectures was the stability of collusion.

We have established already that the collusive outcome is highly unstable.

Yes, it is better for the firms to collude.

Everyone knows that, but this outcome sometimes cannot be

achieved, because there are incentives to deviate from it.

In other words, firms want to have an agreement,

but then they want to cheat, and this makes the collusive outcome hard to achieve.

It is not a Nash equilibrium, and there is always an incentive to deviate.

This is also the essence of a very important paradox

that we saw last time, the Bertrand paradox.

Players always have a tendency to undercut.

Each firm wants to set a slightly lower price than

its competitors in order to attract

more customers and take market share from its opponents.

Now, the Bertrand paradox can be avoided, and we have already seen two ways of this happening.

The first was when we have

capacity constraints, and the second is when we have differentiated products.

Today, we will see the third one, which is when we have repeated interaction,

that is, interaction that extends over time.

So we will consider the case where the interaction

between the firms is, first of all, repeated, and

perhaps also dynamic, with each firm able to take a different action in every period.

Let's start from the same static game which repeats again and again,

and let's see how this is going to solve the paradox.

In repeated games, we have already seen that

players first of all can develop reputations.

Everyone knows a little of your history,

everyone knows how you react to what

happens, and this information can be very useful to others,

but also to you, as a way to prove who you are.

To prove that: yes,

let's collude; I'm always loyal.

I will always keep my word, I will always keep the agreement,

and I will not undercut and mess up the agreement.

Second, even if you are not loyal,

others have a way to retaliate, and so do you:

if someone cheats,

you can take revenge on them.

So let's take the original Bertrand game.

We have two firms that simultaneously set prices,

binding prices that they cannot take back.

They sell a homogeneous product, in the sense that consumers cannot see

any difference between their products, and they have the same cost,

c, which we assume for simplicity is common to both of them.
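As a concrete sketch of why undercutting pays in this one-shot game, here is the Bertrand payoff rule in Python. The linear demand curve, the cost c = 10, and the specific prices are illustrative assumptions of mine, not numbers from the lecture.

```python
# Sketch of one-shot Bertrand payoffs with a homogeneous product and a
# common cost c. The demand curve and all numbers are assumed for illustration.

def profits(p1, p2, c=10.0):
    """The cheaper firm serves the whole market; equal prices split it."""
    demand = lambda p: 100.0 - p          # assumed linear demand
    q = demand(min(p1, p2))
    if p1 < p2:
        return ((p1 - c) * q, 0.0)
    if p2 < p1:
        return (0.0, (p2 - c) * q)
    return ((p1 - c) * q / 2, (p2 - c) * q / 2)

# Both firms at the collusive (monopoly) price share the profit,
# but shaving the price by one cent captures the entire market:
shared = profits(55.0, 55.0)
deviant = profits(54.99, 55.0)
```

Here `deviant[0]` is almost twice `shared[0]` while the rival earns nothing, which is exactly the undercutting incentive described above.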

Now, let's additionally bring this game to a repeated setting.

Let's assume that the game is repeated T times.

T can be whatever you want, and the history of actions will be common knowledge.

So everyone knows how the other players have acted in the past.

We have a game of complete information here.

And firms have a discount factor, which we denote by delta.

Now, a discount factor is how you value

the money that you get in the future.
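To make the discount factor concrete, here is a minimal sketch; the values delta = 0.9 and a per-period profit of 100 are invented numbers for illustration only.

```python
# A profit pi received t periods in the future is worth delta**t * pi today.

delta = 0.9        # assumed discount factor
pi = 100.0         # assumed per-period profit

# The further in the future the money arrives, the less it is worth today:
present_values = [pi * delta**t for t in range(5)]

# Receiving pi every period forever is worth the geometric sum pi / (1 - delta):
value_of_forever = pi / (1 - delta)
```

A delta close to 1 means the firm is patient and future profits matter almost as much as today's; this patience will be what makes collusion sustainable later on.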

So what are the strategies that we can follow in this game?

So firms, first of all, may presume each other innocent,

that is, well intentioned, and therefore they will adopt what we described

in our introduction to game theory,

what we called trigger strategies.

Now a trigger strategy means that I believe you,

I will collude as long as you collude.

So in the beginning everybody trusts everybody else until someone pulls the trigger.

Once you cheat then trouble starts.

So, once the trigger is pulled,

the other players will usually penalize the cheater;

they will not like that you cheated,

they will try to protect themselves, and automatically everybody will revert to

their Nash strategies, which gives

the Nash equilibrium, and the Nash equilibrium, as we have seen

repeatedly, is not as good for the firms as the collusive outcome.

So, everybody begins by trusting each other until someone

is proven unworthy of

that trust, and then we get into a situation where everybody says,

"Okay, there's a cheater here,

let's all go back to our protective state,

which is let's go back to our Nash strategies."

In this case the cheater is penalized,

and is not going to get away with cheating anymore,

but also the fair players have to pay a cost,

because instead of being in the collusive outcome,

they're back now to the Nash equilibrium outcome.

The stability of collusion is the issue here:

because someone cheated, we can no longer collude.
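The trigger strategy described above can be sketched in a few lines of Python; the action labels "C" (collude) and "N" (play Nash) are my own notation, not from the lecture.

```python
# Trigger strategy: collude while the opponent has always colluded;
# after a single observed deviation, play the Nash action forever.

def trigger_strategy(opponent_history):
    """Choose this period's action from the opponent's past actions."""
    if all(action == "C" for action in opponent_history):
        return "C"   # no deviation so far: keep colluding
    return "N"       # the trigger was pulled: punish with Nash play forever
```

Note that `trigger_strategy([])` returns "C", matching the idea that everybody starts by trusting everybody else, and a single "N" anywhere in the history switches the answer to "N" permanently.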

So, let's try to find what is the equilibrium in this game in two cases.

Let's first try to find it when T is finite,

that is, a specific number of periods that is pre-specified,

and then let's see if it's infinite or indefinite,

because we have already seen in game theory,

this is not a surprise to you.

We have already seen in game theory that

the horizon of the game is going to make a difference.

Let's see what this difference is.

For the finite horizon,

it's easy because we assume that the game

is going to be played for a predetermined number of times.

This means that everybody knows from the beginning of the game that at

some point, and they know the exact point, the game will end.

So we are going to start from the back of the game,

from the terminal period, using backward induction,

the method that we have seen that

allows us to solve games in this kind of situation.

So in the terminal period there is no possibility of

retaliation therefore everybody will tend to cheat.

There's no way to restrain people from cheating in the last period.

Since there is no incentive

for collusion in the last period and everybody wants to cheat,

the same will happen in the second-to-last period,

and this will continue until we get back to the first period.

So in any period collusion is not going to be enforceable.
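This unraveling argument can be traced mechanically. Here is a sketch; the period indexing and function name are my own.

```python
# Backward induction in a finitely repeated game: collusion fails in the
# last period (no future punishment), and that failure propagates backward.

def collusion_sustainable(T):
    """Return, for periods 1..T, whether collusion can be sustained."""
    sustainable = [None] * T
    sustainable[T - 1] = False               # period T: cheating is free
    for t in range(T - 2, -1, -1):
        # If collusion fails tomorrow, there is nothing to lose by
        # cheating today either, so it fails today as well.
        sustainable[t] = sustainable[t + 1]
    return sustainable
```

For any finite T the whole list comes back False, matching the conclusion above; under infinite or indefinite repetition there is no last period from which this unraveling can start.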

Therefore, what we can say about repeated interaction is that it does not

necessarily solve the problem that we have in a Bertrand situation,

where we automatically end up at the Nash equilibrium.

It doesn't solve the Bertrand paradox.

What actually happens is that if the interaction is predetermined,

that is, finite and predetermined,

and everybody knows when it is going to finish,

repetition is not going to solve the Bertrand paradox.

The Bertrand paradox will be solved under

indefinite or infinite repetition, which we will see in the next segment. Stay with us.