So, for some examples of decomposition: we've talked a little bit about the Evolved Packet Core. It continues to be, as we've said, a prime target because it has significant elements from both the control plane and the data plane. When we look at the functions inside the EPC, the MME is primarily control-plane functionality that can be completely disaggregated, and it has been now by a variety of providers that are out there, so that you can scale the MME and potentially even dynamically spin up those functions across a very large deployed network. Then the packet processing, both the Serving Gateway and the Packet Gateway functionality, are data-plane applications that have been disaggregated as well and can be sized and scaled specifically to meet the needs we're talking about, even from a slicing perspective, where we may have multiple instances of that functionality; we'll sketch that independent sizing in a moment.

We talked a little bit about the provider edge, and functions that can take place in that provider edge can also take place inside session border controllers, which are certainly a prime target for some of those functionalities. A lot of success is being seen in virtualizing both the PE and the SBC today, and that includes the challenge of media transcoding, particularly for incompatibilities in voice calls. One would think we wouldn't have that problem today, but it still exists; it's functionality that we do see being virtualized off of legacy systems.

The question that often comes up when we talk about VNFs, Virtualized Network Functions, is: when are we going to have cloud native? When are they going to be containerized is another way of asking the same question. That readiness is a work in progress; we're seeing significant opportunity and progress in that area, given where the state of the art of the technology is today. Certainly that doesn't break the concept of disaggregation. It's an important step forward, and at the end of the day we think it's the right step forward. It will allow us to move more quickly in the future, but waiting for containers is not necessarily the optimal path forward. Take a look at what's happening inside the industry: virtual machines are showing a lot of success, containers are coming, and they will continue to come as we move forward. So this is a journey, not a single step.

So, what are some of the key points or takeaways from network functions virtualization? Transforming this network is complex, and in some cases existing functionality, and we mentioned some of it, does require re-architecting in order to implement the full vision. I think it's fair, looking back, to say that maybe we didn't appreciate how complex this transformation really was; and yet, at the same time, the history of the network itself tells us that it's not unusual for a transformation like this to take a long time. It's a massive network. There's a lot of heavy lifting that takes place here, and we have to do it without breaking anything. Even though we talk about experimenting quickly or failing fast, in some cases we can't fail in this network, so we're going to do this in a very controlled and very prescribed way. We're moving quickly, and we'd like to move more quickly, but the ship has sailed: virtualization is coming into our network.
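To make that EPC point concrete, here is a minimal sketch, in Python, of what sizing the disaggregated control plane and data plane independently might look like: the MME scales on signaling load while the gateway user plane scales on throughput, per slice. The function names, capacities, thresholds, and per-slice loads are hypothetical placeholders for illustration, not any vendor's product or an operator's real figures.

```python
"""Minimal sketch: independently sizing disaggregated EPC functions.

Illustrative only; the names, capacities, and loads below are hypothetical.
"""
import math
from dataclasses import dataclass


@dataclass
class VnfProfile:
    name: str             # e.g. "vMME" or "vSGW-U" (hypothetical names)
    plane: str            # "control" or "data"
    unit_capacity: float  # load one instance handles (attaches/s or Gbps)
    min_instances: int    # floor kept for redundancy


def instances_needed(profile: VnfProfile, offered_load: float) -> int:
    """Size a disaggregated function purely from its own load metric."""
    needed = math.ceil(offered_load / profile.unit_capacity)
    return max(needed, profile.min_instances)


if __name__ == "__main__":
    # Control plane (MME) scales on signaling; data plane (gateway user
    # plane) scales on throughput. The two are sized independently.
    vmme = VnfProfile("vMME", "control", unit_capacity=500.0, min_instances=2)
    vsgw_u = VnfProfile("vSGW-U", "data", unit_capacity=40.0, min_instances=2)

    # Hypothetical per-slice loads: (slice name, attaches/s, Gbps)
    slices = [("mobile-broadband", 1800.0, 220.0), ("iot", 2600.0, 8.0)]

    for slice_name, attach_rate, throughput_gbps in slices:
        print(slice_name,
              "| vMME x", instances_needed(vmme, attach_rate),
              "| vSGW-U x", instances_needed(vsgw_u, throughput_gbps))
```

Note how the IoT slice in this toy example ends up MME-heavy and gateway-light, while the broadband slice is the reverse; that independence is exactly what disaggregating control plane from data plane buys you.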
ETSI defined NFV, and we're seeing ETSI continue to work in that area; we're also seeing lots of other standards bodies and ad hoc groups continue to make progress. At the end of the day, we're all going to look back and say, yes, this is how we should have been building networks a long time ago. Maybe 20 years ago we should have been thinking about this. But it took us a while to get here, and this is the right way. Lots of energy is going into it, and it's all quality energy. So, we do have the question of different architectures coming into play for different layers of the VNFs, and this is fair. There are still cases where we may look at packet acceleration as a differentiator for some bits of functionality. So even though we talk about standard high-volume servers, we don't mean identical servers. There may be functionality at the edge that's different from the core: at the edge we may be more concerned about packet acceleration that requires encryption, and because of that we might want to allocate additional silicon resources that provide better encryption capability than what we need in the core of the network. So there are going to be some decisions that take place there. The important part is to understand that this decomposition can take place, and to understand the resources required for scalability, redundancy, and security; with the right architecture, we can be aware of our platform implications and the trade-offs that may take place.
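As a rough illustration of that edge-versus-core trade-off, here is a minimal sketch of a placement decision that gives an edge instance a server flavor with encryption offload while the core stays on a general-purpose flavor. The flavor names, vCPU counts, and the crypto_accelerator flag are hypothetical and stand in for whatever your orchestrator and hardware actually expose.

```python
"""Minimal sketch: choosing server resources by placement (edge vs. core).

Illustrative only; flavor names and the accelerator flag are hypothetical.
"""
from dataclasses import dataclass


@dataclass
class Flavor:
    name: str
    vcpus: int
    crypto_accelerator: bool  # e.g. dedicated silicon / offload for encryption


def pick_flavor(site: str, needs_line_rate_encryption: bool) -> Flavor:
    """Standard high-volume servers are not identical servers: an edge site
    terminating encrypted traffic gets extra silicon the core doesn't need."""
    if site == "edge" and needs_line_rate_encryption:
        return Flavor("edge-crypto", vcpus=16, crypto_accelerator=True)
    return Flavor("general-compute", vcpus=16, crypto_accelerator=False)


if __name__ == "__main__":
    for site, encrypt in [("edge", True), ("core", False)]:
        flavor = pick_flavor(site, encrypt)
        print(site, "->", flavor.name, "| accelerator:", flavor.crypto_accelerator)
```

The point of the sketch is the decision itself: the decomposition lets you attach the extra resource only where the workload profile justifies it, which is the kind of platform implication and trade-off the architecture has to make visible.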