So, if we look at service assurance, for example, and double-click on this one a little bit: we start off with a fundamental platform, and there's always a question as to whether or not that platform itself natively has service assurance. A number of technologies from Intel have been introduced that allow us to ensure that the platform leaving manufacturing has a known good configuration, a known good quantity on it, and that this can be reproduced from a validation standpoint as that platform enters service. At certain checkpoint times, let's think about reboot, we can validate the fact that the system is still in its original intended state and hasn't been corrupted by any means. So, that platform becomes a known quantity at deployment time, and then there's also a significant amount of telemetry and monitoring that feeds back into those management and orchestration layers in real time to generate information that's useful to the communication service provider, allowing them to take corrective action if any is necessary. That's in essence what the communication service providers are looking for from a shared platform. We're not just grabbing any old box off of any old shelf from a manufacturing standpoint; we're able to get platforms that have a known good state as they arrive from the manufacturer, before they enter the network, and that can be guaranteed to have assurance built on the technologies that we're talking about here. Obviously, the operational context of these platforms remains a critical aspect to the communication service providers. When we look at what happens inside network functions virtualization, OPNFV is creating some specifications and some software, called Barometer, that acts as a pressure gauge, if you will, to tell us about the health of these systems. When we look at this diagram, we see a little bit more information that's coming in.
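The reboot-time validation described above can be sketched very simply: measure each platform component and compare it against a known-good manifest recorded at manufacturing time. This is only a minimal illustration; the component names, digests, and the `validate_platform` helper are all hypothetical, standing in for real attestation technologies.

```python
import hashlib

# Hypothetical known-good manifest recorded at manufacturing time:
# component name -> SHA-256 digest of its measured state.
# (Placeholder digests for illustration only.)
KNOWN_GOOD = {
    "firmware":   "a" * 64,
    "bootloader": "b" * 64,
}

def measure(blob: bytes) -> str:
    """Measure a component by hashing its contents."""
    return hashlib.sha256(blob).hexdigest()

def validate_platform(measurements: dict) -> list:
    """Return the components whose measured state no longer
    matches the known-good manifest (empty list = still good)."""
    return [name for name, digest in measurements.items()
            if KNOWN_GOOD.get(name) != digest]
```

At each checkpoint (such as reboot), an empty result means the platform is still in its original intended state; any names returned indicate corruption that should be reported up to the management layer.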
We see that the basic platform still exists, and we've got an NFVI layer wrapped around it. We're also able to have a local interface for corrective actions into that platform, which allows monitoring systems, if you will, to get statistics out of that platform and, in real time or on a frequent basis, report that information back to the orchestration layer. So, at the orchestration layer, for example, we know this system is operating normally, we've got a certain amount of headroom in the specification, and the system is starting to see an increase in traffic flow. It's approaching a threshold that we've established, so we may need to spin up additional resources in a service chain or in a network slice in order to continue to meet the service level that we need, and the orchestrator is able to do that based on the information that flows out of these systems through an analytics system. Then, we're also able to capture the element management functionality out of the individual VNFs, those virtual network functions that have been deployed onto this system, which a vendor may provide and which are unique to the application itself. We've introduced a new box over here: that's the VIM layer, the Virtualized Infrastructure Manager. It's able to provide control, if you will, over a number of those VNFs, again for the same purpose: to monitor and manage those functions of the platform at that layer inside the architecture below us. This is just one example of some of the complexity that comes into play. We mentioned the system integration aspect, and this is an example of the system integration that is being introduced; it's beyond the obvious system functionality that we looked at before. So, let's pull it all together a little bit with just one example; we're going to get into some other examples as we go on. This is a functionality sometimes known as the session border controller, or a virtualized session border controller, an SBC.
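The threshold-driven scale-out decision described above can be sketched as a small function: telemetry reports load and capacity, and once utilization crosses the established threshold, the orchestrator requests more resources from the VIM. The function name, the default threshold, and the fixed step size are all assumptions for illustration, not part of any real orchestrator API.

```python
def scaling_decision(current_load: float, capacity: float,
                     threshold: float = 0.8, step: int = 2) -> int:
    """Return how many extra VNF instances (or cores) to request.

    current_load / capacity is the utilization reported by the
    platform telemetry; once it reaches the threshold, the
    orchestrator asks the VIM for `step` more resources.
    """
    utilization = current_load / capacity
    return step if utilization >= threshold else 0
```

A real orchestrator would weigh many more signals (headroom trends, slice priorities, placement constraints), but the basic loop is the same: analytics in, scaling action out.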
So, we've talked about purpose-built platforms a couple of times, and what would that be? If we're looking at a session border controller, maybe it's a box from the past, a proper session border controller from ten years ago, five years ago. It's going to have a general-purpose CPU on there, obviously. It may also have some specific application logic on there in the form of a digital signal processor, or network adapters that are designed to provide certain bits of functionality. In addition to that, there are control management, media coding, network encryption layers, packet processing, and header manipulation capabilities that go onto the session border controller. The reason we call it a session border controller is that it provides functionality at the border of the network. So, if you think about an enterprise user who has an interface into a communication service provider, one or both of those points, the enterprise itself and certainly the communication service provider, would most likely have a session border controller that does things like throttling. Let's say that the enterprise is allowed to have 40 gigs' worth of interface; that session border controller wouldn't allow 50 gigs of traffic to flow through. It would block the types of sessions that would exceed that, protecting, if you will, the information or the content of the functionality deeper in the network. Or it may be providing functionality such as, for example, media transcoding. There are a variety of examples: people still make phone calls on this network, believe it or not. Even though they're digital phone calls, one of the things that happens is that there are different protocols for how that voice is represented.
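The throttling behavior described above, admitting sessions only while the aggregate stays under the contracted cap, can be sketched as a small admission-control class. The class and its methods are hypothetical, just to make the 40-gig example concrete.

```python
class BorderThrottle:
    """Admit sessions only while aggregate bandwidth stays
    under the contracted cap (e.g. 40 Gbps for the enterprise)."""

    def __init__(self, cap_gbps: float):
        self.cap = cap_gbps
        self.in_use = 0.0

    def admit(self, session_gbps: float) -> bool:
        """Admit the session, or block it if it would push the
        enterprise past its contracted interface capacity."""
        if self.in_use + session_gbps > self.cap:
            return False
        self.in_use += session_gbps
        return True
```

So an enterprise contracted for 40 gigs could carry 30 gigs of sessions, have a 20-gig session blocked (it would total 50), and still admit another 10 gigs.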
The session border controller is a very good place to handle that transition. If one end likes codecs A and B and the other end likes codecs C and D, a session border controller sitting at that edge of the network can see that and say, "I tell you what, you like A and B and you like C and D; we're going to settle on C, and I'll take care of the problem of transcoding between B and C at this point and allow that call to take place." Similarly, these are great places for doing things like DDoS protection. So, that's an example of a purpose-built platform that was early in the virtualization aspect of things, because a lot of it is very high-level processing. A virtualized session border controller still provides that management control, that media transcoding, that network encryption at the VNF level, the virtualized network function level, while relying on the resources of a standard high-volume server based on Intel Xeon technology and a NIC card, possibly with some additional processing capability if necessary, like an FPGA, all on top of a hypervisor. Now, what we can do is optimize the use of those software cores, which continue to grow with each release of the Intel technology, so that we have more and more capability. One session border controller from about five years ago is going to have a fixed capability, and we'd have to replace an entire purpose-built platform to grow. But if we've got one based on a standard high-volume server with this virtualized function, and we're only using 12 of the cores today and tomorrow we need 14 cores, we simply spin up two more of the cores. If we need 20 cores, we spin up eight more from the original 12, and those resources are there, available in that network.
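The codec-negotiation logic described above can be sketched as a tiny function: if the two ends share a codec, use it on both legs with no transcoding; if not, the SBC picks one codec per side and transcodes between them, like B and C in the example. The function and its return convention are assumptions for illustration, not a real SBC or SDP offer/answer API.

```python
def negotiate(offer_a: list, offer_b: list):
    """Pick a codec for each leg of the call.

    Returns (codec for side A, codec for side B, transcode?).
    A shared codec means no transcoding; otherwise the SBC
    bridges between one codec from each side.
    """
    common = [c for c in offer_a if c in offer_b]
    if common:
        return common[0], common[0], False  # same codec end to end
    # No overlap: each side keeps one of its codecs, SBC transcodes.
    return offer_a[-1], offer_b[0], True
```

In the transcript's example, one end offering A and B against another offering C and D yields B on one leg, C on the other, with the SBC transcoding in the middle.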
So, that was just to give you some idea of the concepts that we're using in network functions virtualization, as we separate things through that disaggregation and begin to virtualize some of these interesting functions that exist in the core of the network.