In this video, you'll learn about cycle consistency, which is the loss term that puts the cycle in CycleGAN. Cycle consistency is an extra term added to the loss function for each of the two GANs, and I'll go over exactly how it helps and what happens when we don't include it in CycleGAN.

Let's first examine the zebra to horse to zebra direction. First, the zebra-to-horse generator will map a real image of a zebra to a fake horse, and then the other generator will map that fake horse back to a zebra. What cycle consistency expects is for the generated fake zebra to look exactly like the real one, because only the style should have changed. What you can do here is take the pixel difference between these two images and add that to your loss function, encouraging these two images to be as close as possible. This also applies in the opposite direction, mapping a horse to a zebra and back to a horse again, where you again take the pixel difference between those two.

You can now construct the entire cycle consistency loss by summing the pixel differences from both directions. One is going from zebra to horse to zebra, where you look at the difference between the real zebra and the fake zebra after going through the cycle, and the same goes for the horse to zebra and back to horse direction. You can just sum over all i samples of each. This constitutes the entire cycle consistency loss term for your generators, and you add it to your adversarial loss. Since each direction uses both generators, you actually have just one optimizer for both of your generators: there's only one loss term that both generators use, so the cycle consistency loss is calculated once and shared between the two generators. Again, the adversarial loss here is not the usual GAN loss function. You'll learn later about what exactly the adversarial loss function is for CycleGAN, because it's a little bit different from what you've seen before. So, more concretely, you can sum the cycle consistency over the zebras and horses in your training dataset and weight this by some lambda term to get your full cycle consistency loss term, which you then add to your generators' loss.

One fun fact is that cycle consistency is a broader concept than its use in CycleGAN; it's actually used across deep learning quite a bit. It has helped with data augmentation, for example, and it has also been used in text translation, going from French to English and back to French, where you expect to get the same phrase back. Or it can be used to augment your dataset, because maybe you don't get exactly the same phrase back, and that adds an extra bit of noise to your dataset. It's pretty cool what cycle consistency can do. All right, that's the premise of the cycle consistency loss.
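To make that premise concrete, here is a minimal sketch of the cycle consistency loss in PyTorch. The generator names `gen_zh` and `gen_hz` are hypothetical, and the choice of an L1 pixel distance and the weighting value are assumptions of this sketch, not something fixed by the video:

```python
import torch.nn as nn

def cycle_consistency_loss(real_zebra, real_horse, gen_zh, gen_hz, lambda_cycle=10.0):
    """Cycle consistency: pixel distance after a full round trip in each direction.

    gen_zh / gen_hz are the zebra-to-horse and horse-to-zebra generators
    (hypothetical names). L1 is used as the pixel difference here, and
    lambda_cycle = 10.0 is one common choice of weighting.
    """
    l1 = nn.L1Loss()  # mean absolute pixel difference

    # Zebra -> fake horse -> reconstructed zebra
    cycled_zebra = gen_hz(gen_zh(real_zebra))
    # Horse -> fake zebra -> reconstructed horse
    cycled_horse = gen_zh(gen_hz(real_horse))

    # Sum the pixel differences from both directions, weighted by lambda
    return lambda_cycle * (l1(cycled_zebra, real_zebra) + l1(cycled_horse, real_horse))
```

Note that both generators appear in each direction of the cycle, which is why this one loss term is shared between them.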
The CycleGAN paper also showed some cool ablation studies. The word ablation means that you're cutting out various components of what you're introducing, here in CycleGAN, and seeing how the model does without those components. First, they took CycleGAN but removed all of the GAN components: what if it only had the cycle consistency loss, how would this model do? You can see that with just the cycle consistency loss, it doesn't do too well; these images don't look great. What's happening here is that you're actually looking at a paired dataset. Remember that CycleGAN operates on unpaired data, so it really is just looking at two piles; it's not taking the pairing into consideration at all. But the reason to use a paired dataset here is to get a sense of how far away the output is from the ground truth. That's a pretty cool way to evaluate your model in an ablation study, even if you're doing unpaired translation: use a paired translation dataset. What's going on here is that the ground truth takes in this realistic image and produces this segmentation output, or in the opposite direction, takes a segmentation output and produces this realistic image. These are the same pair in different directions, and the second set is also the same pair, just in different directions as well.

With cycle consistency alone, you can see that these outputs are just not realistic at all. You get these completely black squares, probably some mode collapse going on, and there's not much realism at all. The adversarial loss from GANs is what really makes things look realistic, so as expected, with just cycle consistency and no adversarial loss, you don't get those realistic outputs that you would want.

All right, but what about if you only have the GANs and there's no cycle consistency? Well, the outputs actually look fairly realistic. They look pretty good, except you might notice that there is a little bit of mode collapse going on. These two have very different input images, but the segmentation masks produced from them are very similar, so there's a little bit of mode collapse happening, and that's obviously not ideal. These generated images, while somewhat realistic and much better than those black squares, still look a little mode-collapsed: they all have the same features and are not as diverse as the real images. Interestingly, you could use just half of the cycle consistency loss, in only one direction, but that also isn't enough for the GAN to learn diverse, quality mappings. You can see that without the full cycle consistency loss in both directions, horse to zebra to horse and zebra to horse to zebra, the outputs still have some mode collapse, so it's not great.

But what CycleGAN is trying to say is that with both loss terms, the adversarial loss term and the cycle consistency loss in both directions, it really, really helps with these outputs. Here, you see less mode collapse going on, you see diversity in your outputs, and it looks fairly realistic, as realistic as we're going to get here. CycleGAN really uses the best of both worlds, and through these ablation studies, it shows that if you take away one of these components, the model doesn't do as well. This is how the CycleGAN paper makes the case that both components are really needed, and that this is an important contribution. You'll see ablation studies in research papers quite often.

In summary, cycle consistency is important in transferring the uncommon style elements while maintaining the common content across those images, and it is a really, really important loss term. This can be done by adding that pixel distance loss to the adversarial loss to encourage cycle consistency in both directions, comparing the fake zebra to the real zebra and the fake horse to the real horse. The ablation studies show that the cycle consistency loss terms in both directions help prevent mode collapse and help with this unwieldy, unpaired image-to-image translation task.
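To tie the summary together, here is a minimal sketch of a single generator update that combines the adversarial loss with the cycle consistency loss sketched earlier, using one optimizer for both generators as described above. The module names are hypothetical toy stand-ins, and using a least-squares (MSE) adversarial loss is an assumption of this sketch; the exact adversarial loss for CycleGAN is covered later:

```python
import itertools
import torch
import torch.nn as nn

# Toy stand-ins so the sketch runs; real CycleGAN generators and
# discriminators are deep networks, and these names are hypothetical.
gen_zh = nn.Conv2d(3, 3, 3, padding=1)  # zebra -> horse generator
gen_hz = nn.Conv2d(3, 3, 3, padding=1)  # horse -> zebra generator
disc_h = nn.Conv2d(3, 1, 3, padding=1)  # horse discriminator
disc_z = nn.Conv2d(3, 1, 3, padding=1)  # zebra discriminator

# One optimizer updates both generators, since they share a single loss.
opt_gen = torch.optim.Adam(
    itertools.chain(gen_zh.parameters(), gen_hz.parameters()), lr=2e-4
)
mse = nn.MSELoss()  # least-squares adversarial loss (an assumption here)

def generator_step(real_zebra, real_horse):
    opt_gen.zero_grad()

    fake_horse = gen_zh(real_zebra)
    fake_zebra = gen_hz(real_horse)

    # Each generator tries to make its discriminator predict "real" (1)
    pred_h, pred_z = disc_h(fake_horse), disc_z(fake_zebra)
    adv_loss = mse(pred_h, torch.ones_like(pred_h)) + mse(pred_z, torch.ones_like(pred_z))

    # Add the lambda-weighted cycle consistency loss from the earlier sketch
    total = adv_loss + cycle_consistency_loss(real_zebra, real_horse, gen_zh, gen_hz)

    total.backward()
    opt_gen.step()
    return total.item()

# Example update with random stand-in images
loss = generator_step(torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256))
```

The key design point from the ablation studies shows up directly in `total`: drop either the adversarial term or the cycle term and you get black squares or mode collapse, respectively.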