So here the idea is they want to do static analysis of OpenFlow programs. Static analysis is something that you may have heard of in the context of programming languages, where you have a flow graph and you analyze it statically to see which branches a particular program might take. The same sort of analysis can be applied to OpenFlow programs in the context of network behavior.

Here is how this tool works in production. Once you've deployed an application to run in the cloud, the tool records the state — network events and traffic flows — and this should be done with minimal overhead, because you don't want to perturb the application. Once you've recorded the state, then post-mortem you can replay it at a convenient pace. In other words, in production you observe the state and record it, then you bring it back to your lab and rerun that state. What you're doing is reproducing problems at chosen times and locations — location meaning switch locations where certain things should or should not happen. So that's the overall structure of this idea.

In terms of scalability — again, all of this comes down to making these tools scalable, because you don't want the recording of state to overwhelm the performance of the application itself — the key is that they record only the control-plane traffic. Remember that with the control plane, you set up the switches once and then use those settings for the whole set of packets that flow under that particular control strategy. Recording this control-plane traffic is a reasonable overhead to incur, because it is part of what SDN would do anyway in setting up the switches. Data-plane traffic, on the other hand, is not something we care about too much.
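The recording strategy described above — keep every control-plane message, but collapse data-plane packets into aggregate counters — can be sketched roughly as follows. This is a minimal illustration, not the tool's actual implementation: the `SelectiveRecorder` class, the message fields, and the set of type names are all assumptions made for the example.

```python
# Illustrative sketch of selective recording: control-plane messages are
# logged in full, data-plane packets are only counted per flow. All class,
# field, and type names here are hypothetical, not the real tool's API.

# OpenFlow control-plane message types recorded verbatim (illustrative subset).
CONTROL_TYPES = {"hello", "features_reply", "flow_mod", "packet_in", "packet_out"}

class SelectiveRecorder:
    def __init__(self):
        self.control_log = []    # full control-plane trace, in arrival order
        self.data_summary = {}   # (src, dst) flow key -> [packet_count, byte_count]

    def observe(self, msg):
        if msg["type"] in CONTROL_TYPES:
            # Control traffic is low-volume: record the whole message.
            self.control_log.append(dict(msg))
        else:
            # Data traffic is high-volume: keep only aggregate counters.
            counters = self.data_summary.setdefault((msg["src"], msg["dst"]), [0, 0])
            counters[0] += 1
            counters[1] += msg["len"]
```

The point of the split is that the control log stays small enough to record continuously, while the data plane is reduced to summaries that are cheap to keep.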
Because, as I said, you can sort of think of all data packets as roughly the same — from the application logic they may differ, but from the network logic it doesn't matter. So you can skip or aggregate the data-plane traffic. These are the two key elements that keep the overhead of recording state minimal while a program runs in production.

When it comes to replay, again, you don't want a complete replay — you're not rerunning the program as a whole — it is a best-effort replay as opposed to a deterministic replay. The idea is really to identify whether there were any performance problems or any correctness issues in the behavior, so you don't need a deterministic execution; a best-effort execution of the replay is sufficient. The overarching goal of this tool is partial recording and replay at chosen times and locations, to reproduce specific problems that you might encounter.

How would you use this tool? Well, you deploy an engine called OFRecord in production, and it is always on. It takes all of the OpenFlow messages from the control plane, and summaries of the data-plane messages — aggregating them or even skipping them completely — and you apply selection rules as necessary to prune the amount of information that you're recording during the production run of the program. Then, once you're done with that, you deploy OFReplay — this can be done in the lab — with the intent to localize bugs and to validate any bug fixes you may have, checking whether a fix actually corrects the things you observed that should not be happening in the program. So that's the way you would use the tool.
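Best-effort replay with selection rules might look like the following sketch: filter the recorded control-plane log down to a chosen time window and set of switch locations, then re-send the surviving events in order at a chosen pace. The `select_events` and `replay` functions and the event fields are assumptions made for illustration, not OFReplay's real interface.

```python
import time

def select_events(log, start_ts, end_ts, switches):
    """Selection rule: keep events in [start_ts, end_ts] for the chosen switches."""
    return [e for e in log
            if start_ts <= e["ts"] <= end_ts and e["switch"] in switches]

def replay(events, send, speedup=10.0):
    """Best-effort replay: re-send events in order at 'speedup' x original pace.

    No attempt at deterministic timing -- inter-event spacing is approximate,
    which is enough to reproduce most performance and correctness problems.
    """
    prev_ts = None
    for e in events:
        if prev_ts is not None:
            time.sleep(max(0.0, (e["ts"] - prev_ts) / speedup))
        prev_ts = e["ts"]
        send(e)  # 'send' stands in for pushing the message back to a switch
```

Because the replay is driven from the pruned log, you can zoom in on one switch over one time window — "chosen times and locations" — instead of replaying the whole deployment.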