So let's talk about traffic across the mesh. What you will see here is the life of a request in the mesh. Say I'm a developer with a team of developers, and I have a new service called Service A. Service A has been deployed in Kubernetes, and we created a Kubernetes service for it with a ClusterIP, so it has an internal IP address. And it has label selectors, as we all know from Kubernetes: you have a service, you have label selectors, and I have my deployment with many, many pods in it (we'll sketch what those manifests look like below).

Now, Service A came online. What happens is that Pilot, the component responsible for routing, has an adapter that reads my services from Kubernetes and translates that environment into Envoy's APIs, into the Envoy proxies. So Service A gets recognized and registered on all of the nodes, in all of the pods across my cluster. Everybody knows about Service A now, and Service A and all of its pods are informed of the rest of the topology in the mesh. Why is that important? Because the load balancing and communication actually happen directly between the proxies. Let's see what that looks like.

Citadel is also being used here: TLS certificates are securely distributed to the proxies themselves for mutual authentication. If we want to use mutual TLS in any form, Citadel will issue certificates for the service we just deployed, Service A (there's a sketch of turning that on below as well).

Now Service A, the binary, my business logic, the container that the developer has worked tirelessly on, is trying to contact Service B. Nothing sophisticated there: it sends a DNS request, kube-dns replies with an IP address, and the request is intercepted by the proxy. All the requests that come from that specific container are intercepted by the Envoy proxy. The Envoy proxy has a map; it has all the routes and all the services configured in it from Pilot. So what the proxy does is client-side load balancing: it looks at how it can communicate with Service B, chooses one of the pods that has Service B on it, and creates a direct connection to Service B, one pod to the next.

If you think about it, with Kubernetes we used to have a service with a virtual IP, and the pod would talk to the virtual IP, right? That's no longer the case. In the service mesh, all of the proxies communicate directly with one another, and they may also need to authenticate to one another if we have mutual TLS. Over here we can have HTTP/1.1, HTTP/2, gRPC, or TCP, with or without mTLS. But the leading principle is that the proxies communicate directly between themselves.

So the other proxy receives that packet. Once the server side has received the request, it communicates with Mixer and asks: hey, do I have enough quota? Is policy okay, is Service A allowed to talk to Service B? It also asynchronously sends telemetry information. We get a yea or nay response from Mixer, and now the proxy forwards the request from Service A to Service B. Service B does its business logic, thinks about what it needs to do, and creates a response back to Service A. But the application is not aware of the proxy; it's completely transparent.
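To make the setup concrete, here is roughly what Service A's Kubernetes objects could look like. This is a minimal sketch; the names, labels, ports, and image are all hypothetical, not something from the talk:

```yaml
# Hypothetical Service A: a ClusterIP service selecting a deployment's pods by label.
apiVersion: v1
kind: Service
metadata:
  name: service-a
spec:
  type: ClusterIP          # internal virtual IP only
  selector:
    app: service-a         # label selector matching the pods below
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-a
spec:
  replicas: 3
  selector:
    matchLabels:
      app: service-a
  template:
    metadata:
      labels:
        app: service-a     # these labels are what the service selects
    spec:
      containers:
      - name: service-a
        image: example/service-a:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

With sidecar injection enabled, each of these pods also carries an Envoy proxy container, which Pilot then keeps up to date with the mesh topology.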
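As for the mutual TLS piece: the exact resource depends on the Istio version (this talk describes the Mixer-era architecture, where Citadel is still a separate component), but in current Istio, mesh-wide strict mTLS between sidecars can be expressed with a single PeerAuthentication resource. A minimal sketch, assuming Istio's root namespace is istio-system:

```yaml
# Hedged sketch: mesh-wide strict mutual TLS in post-1.5 Istio.
# In Mixer-era releases the equivalent was a MeshPolicy resource.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # placing it in the root namespace makes it mesh-wide
spec:
  mtls:
    mode: STRICT            # sidecars only accept mutual-TLS traffic
```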
Now the proxy over here on Service B's side says: you know what, I'm going to return the response straight to the proxy of Service A, and I'm also going to asynchronously provide telemetry to Mixer. So we actually know how long it took Service B to respond, and we have that information end to end. When the other proxy got the response, it delivered it to Service A, and that is completely transparent to both Service A and Service B. And this is how a service mesh actually works: everything works through the proxies themselves. All the communication happens between them, and they are basically sidecars that intercept all communication. And again, once the proxies get a packet, they also send telemetry. So we have telemetry from each and every stage of the communication, and then we can analyze it and get that end-to-end observability that we all crave when it comes to microservices. That provides us with a lot of information.

So, the architecture components, just to briefly go over them. Pilot is the control plane that configures and pushes service communication policies. Mixer provides policy enforcement: every time we need to communicate between different services, Mixer looks into the policy and says, hey, do you have enough quota, are you within the rate limit, and are these two allowed to talk to each other? And Citadel is the CA; it is responsible for delivering certificates, managing them, rotating them, and so on. These are the three main components that we have when we talk about Istio.

And the important things that we get here: first, you get traffic splitting that is independent from the infrastructure. If you think about it, if I want to have a canary deployment in Kubernetes, I usually have a service and two deployments under it: one deployment which is version one, and another deployment which is version two. And by playing with labels, the service selects both of these deployments, right? That's the way we do it in Kubernetes. The problem is that we are dependent on the number of pods to do the traffic splitting. If I want to do 1% and 99%, I need 99 pods in production and one pod in the canary. That is very, very inefficient, it is coupled with the infrastructure, and it's very inflexible. What we basically do now is raise the layer of abstraction: we use Istio to do that routing, to do that traffic splitting, and we're no longer dependent on the number of pods we have. Now we can have one pod in the canary and four pods in production, and we can logically say: send 99% into these four pods we have in production, and 1% into this one (see the first sketch below). So we're making the network smarter; the infrastructure no longer dictates how we do routing.

The next one is content-based routing. We can now do layer-7, content-based routing. We can say: if you are a user coming from Android, maybe based on an HTTP header, or maybe a different URI, or anything like that, we can route your traffic differently inside the mesh (see the second sketch below). That's very powerful as well, and we cannot really do that with plain vanilla Kubernetes as of yet.
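Here is what that 99/1 split could look like as Istio configuration. A sketch, assuming the hypothetical service-a from before with pods labeled version: v1 and version: v2; notice the weights live in the routing rule, not in the replica counts:

```yaml
# Subsets defined by pod labels, independent of how many pods each has.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service-a
spec:
  host: service-a
  subsets:
  - name: prod
    labels:
      version: v1
  - name: canary
    labels:
      version: v2
---
# 99% of traffic to prod, 1% to canary, regardless of pod counts.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-a
spec:
  hosts:
  - service-a
  http:
  - route:
    - destination:
        host: service-a
        subset: prod
      weight: 99
    - destination:
        host: service-a
        subset: canary
      weight: 1
```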
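And the content-based routing could look like this: the same hypothetical setup, but matching on the User-Agent header instead of splitting by weight (the header and regex are illustrative):

```yaml
# Hypothetical layer-7 rule: Android clients go to the canary subset,
# everyone else to prod. This would replace the weighted rule above.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-a
spec:
  hosts:
  - service-a
  http:
  - match:
    - headers:
        user-agent:
          regex: ".*Android.*"
    route:
    - destination:
        host: service-a
        subset: canary
  - route:                  # default route for all other clients
    - destination:
        host: service-a
        subset: prod
```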
We also have fault injection and circuit breaking, two very important network capabilities that usually involve a lot of fiddling around with your binaries. These have typically been the developers' responsibility: you use some kind of library, and you instrument your application with it to get that functionality in the network. Now we are basically enabling the service mesh to take control over that instead (a sketch of both follows below). We will see how it actually works later on in the slides, but that is the principle behind it.
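As a hedged sketch of what moving this out of the binaries looks like in Istio configuration (the service name is hypothetical, and exact field names vary a little across Istio versions):

```yaml
# Fault injection: delay 10% of requests to service-b by 5 seconds,
# to test how callers behave under latency, without touching any code.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-b
spec:
  hosts:
  - service-b
  http:
  - fault:
      delay:
        percentage:
          value: 10
        fixedDelay: 5s
    route:
    - destination:
        host: service-b
---
# Circuit breaking: cap pending requests and eject a pod from the
# load-balancing pool after 5 consecutive 5xx errors.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service-b
spec:
  host: service-b
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 100
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 30s
```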