Alright, we're here with Josh Reich, the inventor of Pyretic and a postdoc at Princeton, onto great things after that, I'm sure. Thanks very much, Josh. Looking forward to chatting with you about Pyretic today. >> Likewise, Nick. >> Cool. So, what led you to work on Pyretic? And what do you think are the most interesting features it provides, or the most important problems it solves? >> Okay, let's take those in order. I was doing my PhD at Columbia, working on a variety of networks-meets-systems kinds of things, and then I was fortunate enough to get a CI Fellowship, which took me to Princeton to work with Jen Rexford. Jen and her collaborators, Dave Walker at Princeton in programming languages, and Nate Foster up at Cornell, also in programming languages, had this ongoing work on software-defined networking languages: how do we put these two things together and provide an interface with abstractions that are coherent, intuitive, and powerful, that can actually be used in practice and make a real impact on the community? That was the high-level vision and goal of what they were aiming for when I came in and joined. With my systems-meets-networking background and taste, I immediately wanted to see what we could do to push this forward in ways that would be most appealing to the folks who might actually adopt this technology: practitioners. At the point where I joined, this wider collaboration called Frenetic, which I think we'll talk about a little later, was really focused on the deeper, more core PL issues. They were doing the implementation in Haskell, a functional programming language. A fantastic language, but not one that's necessarily intuitive or familiar to the systems community. So I immediately said, well, I'd love to see this in a language that's more accessible, both to the systems community, folks who are already programming, and potentially even to folks who are currently doing network operations. Those folks might not be programmers, but maybe we can package this in something that gives them a relatively smooth learning curve. Functional programming can be a big jump. It's a really powerful paradigm, but it's not intuitive at first to the vast majority of us. So I, along with some students at Princeton, and working in conjunction with Jen, Nate, and Dave, decided to lead an effort to provide a Python-based version of the concepts and ideas that we had in the Frenetic project when I joined, and also to push forward on a variety of fronts that I thought were particularly important and that appealed to my sensibilities as a researcher. Those were, number one, modularity, and then what I'll call mechanism agnosticism. Maybe a bad word, but basically the idea is that we want to get further and further away from mechanisms when we're talking about a higher-level language for describing how networks should behave, and then leave it to the runtime system and compiler to figure all of that out. So, should I keep talking, or did I answer the question? >> No, that said a lot, certainly.
I guess, yeah, because one of the things I wanted to ask about was the features that made Pyretic interesting, and I think you definitely drew a nice distinction between the earlier work, which was more functional, and Pyretic. And I guess you also headed off my next question, which is how Pyretic relates to some of the other things in the Frenetic family, like Frenetic and NetCore. One follow-up there: Frenetic is based on this NetCore compiler, and I guess Pyretic is as well, right, in some sense? Is that the case? >> Yeah, sort of. It gets really complicated, because Frenetic, the Frenetic project, is an umbrella project, a collaboration primarily between Princeton and Cornell, though at this point researchers at other institutions, including, I think, your own, have gotten involved in some of the work. The high-level goal is what I described before: provide a language, or really a family of languages, with these kinds of high-level abstractions that are intuitive, and then leave all the details of figuring out how to implement and realize this in any given network, with its own particular topology and set of hardware, to some piece of computational machinery, which we call a runtime system and compiler. So that's the big umbrella. Where things were when I joined: there had been a very early system, which was called Frenetic, and its key idea was, hey, let's try a functional reactive programming approach to network control. Functional reactive programming is a really unique kind of programming language idea. [LAUGH] Basically, it's designed to take streams, each stream composed of a series of discrete events that come in, and as each event comes in, there's some machinery that whirls around and passes something else along, and whirls around again, and so forth. In the original design there were three sections. There was a section that provided a query language, and this was very SQL-like. There was another section that lets you do forwarding-type things. And then there was the glue between them, these various FRP, functional reactive programming, operations, and hence the name: FRP plus network, and Frenetic came to be. So, things evolved. The next step along the way was looking at the notion that, okay, maybe this FRP stuff is a little over-heavy, but in the forwarding portion of things there are actually some very interesting things going on, when we look at forwarding and we look at querying. Namely, we might want to do these at the same time. And in fact, when you're doing querying in an OpenFlow-based network, you're actually reading counters off a table. So this is not like using a tool like NetFlow or sFlow, where you've got an entirely separate stack of software or firmware running on the switch gathering these statistics. Instead, you're inspecting the very tables that are used for forwarding. So if you change your forwarding, you can potentially remove entries from the table, and that affects the counters, and that affects what you can actually query. And so they moved on to this key idea of, hey, we can do some type of parallelization.
We can provide a way of specifying querying and a way of specifying forwarding that enables you to write lots of things that work in parallel. And that was really cool. In fact, you can have as many readers as you want and one writer, which clearly makes sense. You can also do two writers, but you might get strange behaviors out of that, at least when parallel composition is all you have. So what was really cool about that work, in addition to what I've said before, is that they also came up with a really sophisticated way of taking these parallel-composed specifications of reading the state of the network, so querying counters, and writing the state of the network, which is to say specifying how packets get forwarded, and compiling that all down to a set of OpenFlow rules on a bunch of switches, and having the semantics, the actual behavior, exactly match the syntax that they described. Okay, that wasn't the most eloquent way of putting it, but- >> You get some provable properties then, I guess, where you have the ability to reason about it: you write this thing at a high level, you want something to happen that matches that policy, and you have a way of saying, okay, what's happening in the forwarding table is actually- >> Yeah, yeah. >> Semantically the same as what I wrote in this high-level language. >> Yeah, exactly. So the language provides you with a syntax, a way of writing things, and then you can write things that have completely specified meanings. And what you're getting at here, I think, and another thing about this that was really neat, is that beforehand, if you look at something like POX or NOX or Beacon or Floodlight, these kinds of platforms, generally you have to specify independently how packets are going to be treated on the controller, what we'll call an interpreter loop, and how packets are going to be treated on the data plane, which is what the OpenFlow rules are going to be. And it's not necessarily the case that you're going to get it right. You have to specify this logic in two different places, and the results may not be the same. >> I think even in the course, as people were doing the assignments, they had this confusion: when I'm writing a learning switch, I'm writing this thing that keeps a table mapping MAC addresses to destination output ports, and they'd say, wait, that's happening at the controller, it's not actually on the switch, right? So I think there was some cognitive dissonance there, and it's exactly what you're pointing out. >> Yeah, and it's just the kind of thing that's automatically going to cause your code to be more error-prone, more complicated, more convoluted. So what's really neat is that, with a single syntactic representation, you both had an interpreter that realized that behavior when packets came to the controller, either because the rules hadn't been installed yet, or because a query needed the packet, or whatever, and also a provably correct compilation to a set of OpenFlow rules that realized that exact same behavior in the data plane. So we knew what was going to happen on the controller and what was going to happen in the network switches, and you always get the same results. And that's a great property to have.
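To make that concrete, here is roughly what composing a query in parallel with forwarding looks like in Pyretic, which carried this idea forward. This is a minimal sketch: the port numbers and the callback are made up, a real policy would also match on the Ethernet and IP protocol types before matching transport ports, and the query helper's exact signature may vary slightly between Pyretic versions.

```python
# Minimal sketch: a traffic-counting query composed in parallel ('+') with
# forwarding. Both components see every packet; the runtime compiles the
# combination to OpenFlow rules and reads query results off the counters.
from pyretic.lib.corelib import *      # match, fwd, composition operators, etc.
from pyretic.lib.std import *
from pyretic.lib.query import count_packets

def main():
    # Query: report per-source-IP packet counts every 10 seconds.
    q = count_packets(interval=10, group_by=['srcip'])

    def report(counts):
        print(counts)
    q.register_callback(report)

    # Forwarding: web traffic out port 1, everything else out port 2.
    forwarding = (match(dstport=80) >> fwd(1)) + (~match(dstport=80) >> fwd(2))

    # '+' is parallel composition: the output is the union of what each
    # sub-policy produces for the packet.
    return forwarding + q
```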
That's a really important one. So that was kind of the past, the situation when I came in. Each iteration in the family of Frenetic languages built on the last; early on there was only Frenetic, until NetCore. So let me quickly describe what NetCore is. One thing that was realized was, hey, we can really think about the way the network behaves, the packet-processing behavior of the network, which in our body of work we've called the policy. The policy is how the data plane and control plane forward packets and also send packets to the controller. It expresses, when a packet comes into the network, exactly what's going to happen afterwards, the next step and the step after that and so forth, until the packet, or all of its descendant packets if we're doing a multicast, leaves the network. NetCore is a static domain-specific language, and by static I mean that it specifies the packet-processing behavior, or policy, of the network for one given instant in time. If that behavior is in fact changing, we can consider it a series of static representations. I'll have my first static representation of how the network behaves; then, say I'm going to change how the network behaves on one particular link, I'll make a new representation, put that into effect, and so forth. So NetCore gives you a language for representing how the network behaves at a given instant, and then, when you embed that in a higher-level, general-purpose language like Python or Haskell or OCaml, you can use that language's data structures and control flow, your while loop and whatnot, to create a series of these static policies, make each one effective in the network, and thereby create your dynamic behavior. And this was the approach adopted by all the work going forward, at least up until recently. I'd say even recently, but we'll get to Kinetic, hopefully, a little later. Now, coming back: at the time I joined, NetCore was really just, we can write a query or several queries, and we can put that in parallel with some forwarding policy components. So I could say how one switch forwards and how another switch forwards, run those in parallel, and then I've got a policy that covers the behavior of the two switches. So Pyretic basically started off and said, okay, this is great, let's get all this NetCore goodness implemented in Python, and embedded in Python as well. When I say embedded, I mean that when you write a Pyretic program, it's actually a Python program that has some extra syntactic elements, namely this NetCore, as we'll call it, domain-specific language, plus a couple of other goodies. It's also implemented in Python, in the sense that we could have written an interpreter that takes this Python code and then compiles and manages it using, say, C++ or Haskell or whatever, but we wrote that part in Python too, so that hopefully it would be very accessible, easily changeable, and understandable by the community. That was one of the real aims of the code base. And then we started thinking, okay, what are the research problems here? There were a couple of new things that Pyretic introduced into the family of Frenetic languages, which were then absorbed by the members of the family that survived.
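Here is a rough sketch of the "series of static policies" pattern described above, in Pyretic-style Python. DynamicPolicy is the real Pyretic base class for policies that change over time; the toggling logic and the blocked address are invented for illustration.

```python
# Sketch: dynamism as a stream of static policies. Each assignment to
# self.policy installs a new static NetCore-style policy; the runtime
# recompiles it and pushes fresh OpenFlow rules.
from pyretic.lib.corelib import *
from pyretic.lib.std import *

class toggle_block(DynamicPolicy):
    """Alternates between flooding everything and additionally dropping
    traffic from one (made-up) host."""
    def __init__(self):
        super(toggle_block, self).__init__()
        self.allow = flood()                                   # static policy #1
        self.restrict = ~match(srcip='10.0.0.99') >> flood()   # static policy #2
        self.blocking = False
        self.policy = self.allow

    def toggle(self):
        # Some external event handler would call this; the "dynamic" program
        # is just ordinary Python generating the next static policy.
        self.blocking = not self.blocking
        self.policy = self.restrict if self.blocking else self.allow

def main():
    return toggle_block()
```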
Right now, the surviving descendants are basically Pyretic and a version of Frenetic written in OCaml, which I like to call Frenetic OCaml, because I'm a little bit picky about naming conventions. So what were the new things that we introduced? Well, the first, and probably the biggest, was this: previously, you could write one component of logic and another component, run them at the same time on the same packet, and what you get out is the set of packets produced by the first one and the set of packets produced by the second one, unioned together. But that's not the way a lot of things we'd want to build actually work. It doesn't give us the ability to express something like, say, a load balancer. A load balancer is going to take a packet coming in and rewrite its destination address, for example, to map it to one of several servers down the line, and then I want to hand that off to a routing module and have the routing module act not on the original packet but on the packet whose header has been remapped. This is an example of a different type of composition: sequential composition. And you'll actually find this type of composition in the underlying hardware; there are usually multiple tables in switches, not just the one that the original version of OpenFlow provided. So this is a very natural thing, and other work has thought about it in terms of data path pipelining. And it was missing. So we introduced a new operator to the NetCore domain-specific language that enabled us to do sequential composition. Its semantics: in NetCore, everything is a function that takes a packet at a particular location and outputs a set of packets at particular locations. Take a packet at a switch on port one, output the same packet at that switch on port two: that's forwarding. Output two packets, one on port two and one on port three: that's multicasting. No packets: that's blocking. A packet with the header changed: that's field modification, the renaming that could be used for load balancing. So the semantics of sequential composition are very straightforward. We have one NetCore function A and another function B, and sequential composition basically says we're going to take B of A of the packet. There's a little bit more to it, because A could actually output multiple packets, so there are slight tweaks, but that's roughly the definition. And it's intuitively correct, or intuitively reliable: if that's your intuition, it'll work, and we take care of the rest of the details underneath the hood.
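A minimal sketch of the load-balancer-then-routing example in Pyretic syntax, where '>>' is sequential composition and '+' is parallel composition. The addresses, port numbers, and single backend server are made up; a real balancer would spread traffic across several backends.

```python
# Sketch of sequential composition: first rewrite the destination address,
# then hand the rewritten packet to an independently written routing policy.
from pyretic.lib.corelib import *
from pyretic.lib.std import *

# Rewrite traffic aimed at the public service address to one backend server;
# pass all other traffic through unchanged.
rewrite = ((match(dstip='10.0.0.100') >> modify(dstip='10.0.1.1')) +
           ~match(dstip='10.0.0.100'))

# Routing, written in terms of the (possibly rewritten) destination.
routing = ((match(dstip='10.0.1.1') >> fwd(1)) +
           (~match(dstip='10.0.1.1') >> fwd(2)))

def main():
    # '>>' feeds rewrite's output packets into routing: roughly
    # routing(rewrite(pkt)), lifted to sets of packets.
    return rewrite >> routing
```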
Sequential composition is really big, because it lets us do all sorts of things. And it turns out that once you have sequential composition and parallel composition together in a language designed the way we had designed NetCore, you actually get an instance of what is known as a Kleene algebra with tests, and Kleene algebras with tests have all sorts of wonderful properties which have already been proven. So after we published the Pyretic work, we were all seeing, oh, there are a bunch of transformations you can make that are always true. And the folks at Cornell picked this up and published a really great paper showing all the theory, all the axioms you could use to transform different statements in the Pyretic version of NetCore into other statements that are provably equivalent, and showing that this is, in fact, a Kleene algebra with tests when you add one additional operator. With the addition of that operator, they had what is called the NetKAT language, a slightly enhanced domain-specific language that was born out of Pyretic's version of NetCore. So, kind of complicated, but pretty nice. Pyretic right now still uses NetCore, because the NetKAT stuff is super elegant and really nice, but practically speaking it isn't necessarily the first thing we need to get done to get adoption. >> What do you need to get done? What do you think is- >> Yeah, I'll tell you about that in one sec, but I just wanted to finish the story. So Pyretic right now has its own version of NetCore, and Frenetic OCaml is actually based on NetKAT now. I'm not sure exactly how they do that, but I think you're going to be talking with Nate Foster, so he can give you the full story on that. So, in terms of adoption: we wanted to get a couple of other things going that Pyretic introduced. One was support for virtual header fields, again, a mechanism-agnostic way of specifying how you want traffic to be treated. We can tag packets and then act based on those tags, instead of saying, oh, I'm going to use some bits in the VLAN header, or I'm going to use MPLS labels. Because there is a whole variety of issues you can run into when you want to combine things. If I write two pieces of software independently, and one says, hey, I'm using this set of VLAN values to do one thing, and the other one is using the same set of VLAN values to do another thing, then you're going to get collisions. Likewise, if you want to port from a network that supports VLANs to one that supports MPLS labels, you're out of luck unless you specify things in a mechanism-independent way. So we worked on providing an abstraction that the programmer can use to say, hey, I need a header field with this name and so many values, and I'm going to use that. Then each program can be written independently, and the compiler and runtime system can figure out, okay, how are we actually going to realize this on the network, given the capabilities of the switches? So that was one thing, trying to push forward more into the practical sphere, providing things which will be very useful for practitioners.
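A sketch of the virtual-header-field idea. The field name traffic_class and its values are invented, and the exact way virtual fields are declared has varied across Pyretic versions, so treat this as the shape of the abstraction rather than a precise API: two independently written modules share an abstract tag, and the compiler and runtime decide whether the tag is realized with VLAN bits, MPLS labels, or something else.

```python
# Sketch: tag packets with an abstract (virtual) field in one module and act
# on the tag in another, without either module naming VLANs or MPLS.
from pyretic.lib.corelib import *
from pyretic.lib.std import *

# Module 1: classify traffic into abstract classes.
classify = ((match(dstport=80)  >> modify(traffic_class='web')) +
            (~match(dstport=80) >> modify(traffic_class='other')))

# Module 2: decide treatment purely in terms of the abstract tag.
treat = ((match(traffic_class='web')   >> fwd(1)) +
         (match(traffic_class='other') >> fwd(2)))

def main():
    # Sequential composition stitches the two modules together; the compiler
    # and runtime pick a concrete tagging mechanism behind the scenes.
    return classify >> treat
```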
Another thing we were working on was this: the idea of having a stream of static NetCore policies is all fine and well, but how do we actually provide it in some kind of packaging that makes it easy for programmers to use? And what do we need to change in the compilation infrastructure, the runtime infrastructure, and the interpreter loop to make sure this works correctly in practice, or at least that it works reasonably correctly and reasonably efficiently? The focus here is not on, say, proving that you never mess up on a given packet, but rather on implementing mechanisms that generally perform well, and that are potentially extensible toward provable behavior if you're willing to pay the cost in efficiency. Whereas my friends on the PL side were much more interested in, hey, can we provably have something evolve in such a way that no packet is ever- >> Right. >> Ever treated by first the first version and then the second version? You can do it, but you may be ignoring all the performance issues. >> I think one of the things, too, is that Pyretic basically writes to POX under the hood. And for people taking the course, the first time they see POX might be the last time; particularly if they're in industry, they're probably going to run off and do other stuff, maybe OpenDaylight or something like that. Will Pyretic, or Pyretic concepts, language constructs, ever find their way into other controllers besides POX? Is that on the roadmap? >> Well, yeah, sure. So I think, well, first of all, POX is- >> And what are the hurdles to making that happen, I guess, is the other part of it. >> Totally, totally. So, first thing: POX is great. That said, it does have its limitations. The way in which Pyretic uses POX really is just as a kind of dumb pipe. Pyretic has its own topology detection built in; it has its own packet processing built in. What it doesn't have are all the hooks at the back to set up and maintain the TCP connections to the switches, turn OpenFlow commands or OpenFlow tables into actual packets that go across the wire, and do all that kind of low-level stuff. So we basically made a choice to say, hey, we're going to just use POX as our backend. We have a bunch of commands, or RPCs, which are basically OpenFlow, that the runtime has available, and those are realized by some hooks into POX. In terms of that architecture, you could of course take the POX backend out and replace it with a similar backend written on OpenDaylight or any other kind of platform. What we'd really like to do, going down the line, is just own the whole stack and take the middlemen out of the picture: go through the pain of implementing the connections to the switches, and there are some open source projects which are kind of moving there, and do the work of actually marshalling the packets correctly into the wire formats, OpenFlow 1.0 or OpenFlow 1.3 and so forth. And then- >> You mentioned open source projects; which ones are those? >> So, let's see, there's a library called libfluid. I believe that was released a couple of months ago now, out of some kind of hackathon, and they provide tools for marshalling and unmarshalling OpenFlow packets in a variety of OpenFlow standards. I think it's mostly C++ right now, but I think they're doing bindings to Python and Java. There's also another set of code, LoxiGen, that's from [INAUDIBLE] and some others; they have some marshalling, and they also built the Indigo platform, which does some of the connection handling,
although the Python part of that especially doesn't quite seem mature, and I've had a little bit of trouble with the documentation, though honestly I haven't spent a lot of time on it yet. So there are a bunch of these kinds of things coming up. And another important point is that we might not always want to target just OpenFlow. There are a lot of other great SDN kinds of interfaces- >> That definitely brings me to another one of my questions, which was, can you imagine other southbound APIs? There's even the command-line interface, and there's also the bare-metal switching stuff going on where OpenFlow might go out the window. >> Yeah, totally. I mean, I was at Cisco Live, and they had their new Cisco thing, I think it's ACI, so there are potentially a whole lot of different options out there. OpenFlow is a good one; I think it'll probably stick around for a while, but it may not be the one that wins out in the end. And being able to compile down to all those targets is, I think, a great thing for future work. I think of it as just a more heterogeneous set of southbound targets, so there's more interesting work to be done in the runtime system and compiler. Also, the set of things OpenFlow can do is somewhat limited, and it's geared toward the limitations of current hardware. It may be that in the future, well, I just heard that Intel is releasing some switching-centric chipsets that also have FPGA integration. So maybe, once we have that available, we'll actually want to increase what the southbound interface can do, or add extensions and libraries, so that if you know you're going to have certain types of hardware, we can let you say more powerful things in the domain-specific language. >> If you have stuff underneath on the southbound side that's less constrained than OpenFlow, shall we say, does that have any implications for things like NetCore? Does NetCore make any assumptions about having this very narrow interface? >> Yeah, totally. So, the design of NetCore has a lot of nice properties, and I could go on about what's really cool about it. But when we were designing NetCore, and over the course of NetCore's evolution into NetKAT and beyond, we really tried to be cognizant of what the limitations of the southbound interface are, and what the limitations of the hardware are. Because we don't want to say, hey, you can write anything as a static policy in your network and, boom, it'll just work, because our switches can't do Turing-complete computations, or even close. If you provide too much expressivity in your domain-specific language, what ends up happening is that, in order to implement and realize it, you're going to need to send all the packets to the controller. And then you're going to kill your performance, your latency, your throughput; you'll probably crash the controller. So people can write great programs unencumbered, which is fantastic for the programmer, but there's no cost metric associated with it, and therefore it's not practical. Now, there are potential other ways of getting around that.
Aside from having the switches themselves enhance the API, you could imagine an architecture where you have additional devices embedded in the network that act as localized controllers and have a potentially quicker feedback loop with the hardware. That said, there are also certain limitations to the hardware right now. The TCAMs in today's switches are not designed for having rules change very quickly. I think the state of the art is something like, we give you on the order of a million entries in the TCAM, which is tremendous, but the rates these switches support in terms of changing rules are something like 3,000 rules a second. When you think about a million-rule table, changing 3,000 rules a second doesn't give you a lot of dynamism; that's slow to evolve. So we have to be cognizant of all this kind of stuff. And as the underlying hardware and the interfaces to the hardware expand in terms of what they can do and what they can provide, I definitely think extending the domain-specific language, or perhaps coming up with entirely new domain-specific languages, is going to be inevitable. >> Yeah, that seems like a really interesting direction: how these DSLs are going to need to evolve as the southbound APIs change. I guess one last area I wanted to ask you about is some of your more recent work on verification and things like that. Speaking of evolution, you've sort of evolved what these northbound languages, if you will, can do in this Kinetic work. Could you say a little bit about that? The students want to know about that as well. >> Yeah, definitely. So, when we had Pyretic and we released it, people really liked it. It won the NSDI Community Award, and a lot of folks just started building stuff on top of it, which was tremendously gratifying. But once we had this modularity thing and some mechanism independence going on, what quickly became apparent was, hey, theoretically this is mission-critical software. Okay, we're academics, so maybe it's not actually mission-critical yet, but we want to come up with robust ways to specify these things. We have a great thing going with NetCore, but it's static. Whenever we want to change something, we have to write a new static policy and put that on the network. What's the difference between the old one and the new one? How can some regular piece of Python code that's generating the stream be assessed, to predict its future behavior? Will it produce some NetCore policy which is utter nonsense at some point in the future? Who knows? Will it just stop producing things at some point and get stuck in a loop? Who knows? And from the perspective of a programmer, how should I structure this mission-critical code? What's the right way? How do I want this thing to look? So it became clear that we really needed some abstractions for dynamism, for describing how the controller behavior evolves over time. And that's where the Kinetic work came in. I joined that work when it was already in progress, and I've been very gratified to participate in it.
Basically, the idea in Kinetic, which is really neat, is, hey, why don't we provide a natural, computationally powerful, but not too powerful, not Turing-complete, and well-accepted mechanism? Leslie Lamport, I think, said this about finite state machines, or state machines in general: that they should just be the basis of computer science, of computer programming; that's all we should ever use. It's really neat; you should totally go look it up online and read it. But coming back to right now, the idea in Kinetic was, hey, we could use finite state machines as a way of describing dynamics. Each state in the finite state machine has some NetCore policy, a static policy, associated with it, and then, based on events that happen in the network, like, say, I see a particular packet, or I get a warning from an outside intrusion detection system, or the stock market goes down by too many points, I transition to my next state, where there's a different NetCore policy. And that NetCore policy might be, oh, block packets now, or, oh, route them a different way, and so forth. Once you have this kind of structure, it enables a bunch of really neat things as well. It's intuitive to the programmer, which is why the FSM was originally introduced, but one of the observations we made was that, hey, this gives you a structure that we can actually use for verifying certain types of properties. There are techniques and model checkers out there that will let you take a finite state machine and write statements about it in temporal logics. These are logics like you're probably familiar with, if A then B, A and not B, and so on, with some added operators about time: things like, for all future times, such-and-such a condition must hold; or, if I have a branching tree of possibilities, different ways I might traverse my state machine, then there exists at least one potential future path in which such-and-such could happen; or, there exist no paths in which I get stuck in such-and-such a state. So you can make these statements in temporal logic, and then use an off-the-shelf model checker like NuSMV to actually check them against the program you wrote in Kinetic. Kinetic, which we implemented as a library on top of Pyretic, provides a DSL for writing these state machine things, and a Kinetic program can be compiled directly, completely by code, with no human intervention, to a model that NuSMV can take. You write your temporal logic, and NuSMV will tell you yes or no; and if it says no, it'll give you a counterexample for the property you specified. This was really cool and powerful, and we were able to build a lot of really neat applications with it.
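Below is an illustrative sketch of the finite-state-machine idea described above, written as plain Pyretic-style Python rather than the actual Kinetic API (whose class names and FSM declarations differ). The host address, event names, and the CTL-style property in the closing comment are all invented for illustration.

```python
# Sketch of the Kinetic idea: each FSM state carries a static policy, and
# network events drive transitions between states. Kinetic packages this
# pattern as a library on top of Pyretic and compiles the FSM, automatically,
# into a model that NuSMV can check.
from pyretic.lib.corelib import *
from pyretic.lib.std import *

class infection_fsm(DynamicPolicy):
    def __init__(self):
        super(infection_fsm, self).__init__()
        # States and their associated static policies.
        self.states = {
            'clean':    flood(),                              # forward normally
            'infected': ~match(srcip='10.0.0.66') >> flood(), # drop the suspect host
        }
        self.state = 'clean'
        self.policy = self.states[self.state]

    def handle_event(self, event):
        # Transition function: an IDS 'infection' alert moves clean -> infected,
        # a 'cleanup' event moves infected -> clean.
        if self.state == 'clean' and event == 'infection':
            self.state = 'infected'
        elif self.state == 'infected' and event == 'cleanup':
            self.state = 'clean'
        self.policy = self.states[self.state]

# A CTL-style property one might hand to a model checker over this FSM,
# stated informally: AG(event = infection -> AF state = infected), i.e.
# "whenever an infection event occurs, the machine eventually reaches 'infected'."
```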
>> It's super neat. I've seen some of the policies written out as well, and I want to show the students some of them. You can say, for example, okay, if an infection event happens, you'll eventually transition to this other NetCore policy that's going to do dropping. So it seems like there are a lot of things that you can definitely verify. Are there things that you can't verify with Kinetic? >> Yes. >> I was kind of looking at it, so, for example, you can say whether or not a state is going to be reached, but can you also make assertions about flows or reachability? Or is that one level of indirection away? Is it possible? >> So, it is a level of indirection away. There are a variety of limitations on the kinds of statements you can make; namely, anything you can't express in CTL you can't prove, that's the formal version. The properties you can state are ones about the evolution of the state machine. But since each state in the state machine is also a NetCore policy, and NetCore policies themselves have particular characteristics that can be proven about them, it's possible right now to do some hand proofs, provided- >> I see. >> Combined with the NuSMV proofs- >> Like an automated proof of what states can or can't be reached, and then from there you'd- >> And have another proof mechanism for, say, showing that each of the NetCore policies in each of the states preserves something. >> Do you think the whole thing could, or should, eventually be automated? >> Oh, that would be so cool. I think that's a wide-open research question. I've definitely chatted a little bit about it with Sharad Malik in the EE department at Princeton, and he thinks it would be really cool to explore. Part of the question isn't whether we could; with proving things in general, you can always, well, not always, but the vast majority of the time, you can do a brute-force thing: explore every possible thing that could ever happen, and as long as there isn't an infinite number of those, eventually you'll get an answer. But there very well may be a class of things in between, things we can specify that are broader than simple temporal logic properties, but at the same time narrow enough that we can say something about behavior in the network data plane that will always hold in certain ways, given that we have this particular FSM and it evolves in a way that prohibits certain types of evolution. >> Interesting. Yeah. So, one final question: what do you think the killer app for adoption will be for Pyretic? Do you think it's sequential composition, or this verification stuff that you've added to it with Kinetic? Or is it something else that hasn't been done yet? If there's one thing that would make people say, I must have this, what would that be? >> Mm. So, I'll make a distinction between feature set and application. Over the course of the Frenetic project, from that earliest FRP version, then coming to Pyretic and then Kinetic, and forking off into Frenetic OCaml with their own stuff, there's been this build-up of good abstractions and good features for doing things: being able to say things that are useful, being able to realize them in the network efficiently, being able to provide various types of guarantees to the programmer or network operator. Parallel composition is super important. Sequential composition is super important.
Oh, one thing I'll just quickly mention that Pyretic did, which I had left out before: we got rid of the SQL-based querying language, and instead said, hey, querying and forwarding can be considered as just one thing, where, instead of forwarding a packet out a physical link on the data plane, we're going to forward the packet to a virtual port, to the controller, or to a data structure on the controller. So we also introduced this notion of buckets in Pyretic, which I totally missed talking about before. And this was a big step forward in terms of a unified way of writing things.
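A rough illustration of the bucket idea using Pyretic's query library, which wraps the underlying bucket machinery: a query is just another policy, and matching packets are "forwarded" into a data structure on the controller rather than out a physical port. The callback and the choice of flooding are illustrative, and the query helper's signature may vary between versions.

```python
# Sketch: querying as forwarding to a controller-side bucket, composed in
# parallel with ordinary data-plane forwarding.
from pyretic.lib.corelib import *
from pyretic.lib.std import *
from pyretic.lib.query import packets

def main():
    # Deliver the first packet from each new source MAC to the controller.
    q = packets(limit=1, group_by=['srcmac'])

    def saw_new_source(pkt):
        print("first packet from %s" % pkt['srcmac'])
    q.register_callback(saw_new_source)

    # Physically the packet is flooded; logically a copy also lands in the
    # controller-side bucket behind the query.
    return flood() + q
```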
And as we add each of these features, we enable a potentially wider range of applications that can be efficiently implemented, intuitively expressed, and so on. So it's hard for me to predict exactly which feature in the language is going to be the one that tips the balance and really makes this super real and practical. I don't know if there is one single thing, but my approach has been to try to add these features, and also libraries like Kinetic, and potentially other libraries that provide useful, general-purpose facilities, and hopefully get ourselves there. And even in terms of Pyretic right now, the SDX work was done on it, and that stuff seems to be catching fire right now. So maybe we've already got some killer applications. >> I think you hit the nail on the head. As people use it, especially after sort of grinding it out with POX, you do one thing in POX, then you do the same thing in Pyretic, and you're like, wow, okay, I don't have to think about all of these things, about where the state is, and oh, I can compose these things so easily. And now you get verifiability as well, in certain aspects of the program. So I think maybe, like you said, you're already there in some sense. >> Though, just one little thing: we should not beat up on POX. POX is really awesome for what it is. >> Absolutely, absolutely. >> And it's very similar to a whole variety of others, like Ryu, and NOX, and Beacon, and Floodlight, and I'm going to say OpenDaylight, although I might get some people firebombing me on the internet, because OpenDaylight is different and better or something. But I think, fundamentally, to a first-order approximation, they're all very, very similar. So POX can stand in for that whole family of what I like to call OpenFlow-centric controllers. >> Mm-hm. >> Anyway, that's my own little take. [LAUGH] >> Cool, cool. Well, thanks a lot. Thanks for chatting about Pyretic and Kinetic today, and I'm looking forward to using it more and building lots of cool apps with it. So, thanks. >> Awesome. Thanks so much, Nick. I really appreciate you having me on your show. >> [LAUGH] Yes, of course. >> I guess I'll go to the call lines now. >> Yeah, [LAUGH] exactly. Whoa, was that a phone I saw? >> Yeah, I'm at a hotel right now, and it has a phone, so I decided to pick it up and be like, oh, yes, that question isn't about... anyway, I won't do my comedy routine now. >> Cool. Alright, thanks a lot. >> Thank you very much. And yeah, talk to you soon. >> Thanks.