We've seen the importance of building tests to understand our code and, as we change our code, to understand how the changes that we're making impact how it functions. Does it behave correctly, or after we make a change does it begin behaving incorrectly in some other part? So building up a set of tests is a really important piece of measuring the impact of our changes and knowing that we are heading in the right direction as we're improving our software, or trying to improve our software.

So one of the questions that comes up then is, how do we measure the tests themselves? How do we know that we've written tests that cover a large part of our application and give us confidence that, when we make a change, if it has a negative impact we're going to see it? We need some way of knowing: have we written enough tests? Are there areas of our software that are unexplored and untested to some degree? So what we need are some metrics for defining this, and in software engineering these are what we call code coverage metrics.

We're going to talk about what these mean, but basically they're ways of trying to estimate how well our tests cover the software that we've written and, if we made a change that broke something, how likely we are to detect that change. Well, it's actually not even a probability. I don't want to give you the sense that having one of these numbers tells you that you're going to detect something or not detect something. It just helps you get a sense of: are you moving in the right direction? Do you have most things covered with tests? Are you, over time, adding tests and getting more coverage, or are you getting less coverage? It gives you sort of a yardstick, but it's not going to give you a guarantee of anything. We'll talk about that guarantee aspect a little bit in a minute.

So let's assume that we have a class. We'll just call this class Foo. And Foo has, let's say, one method in it, bar, that takes a parameter a. And maybe it has a conditional branch in here: if a is greater than one, then we're going to do one thing; otherwise, we're going to do something else. And then we're going to return some value. You can fill this in however you like; maybe we have a value that gets changed based on this branch statement. Now let's say that we go and write a test that calls the method bar with one value.

How might we estimate our test coverage and say whether or not our test is giving us coverage of a lot of our code? Well, one thing we might do is ask: how many of the methods that I have written do my tests cover? And we would come in and say, one method, bar, is covered by our one test, therefore we have pretty good coverage; all of our methods are covered. So that's one possible metric we could use to measure our code coverage, or how well our tests are exercising the software that we have written.

Now if we look at this a little deeper, though, we may say that's not necessarily a very good coverage metric. It's saying that we've covered all the methods, but if we look within this method, there are actually two separate branches of execution, depending on the value of a. So in that sense, we really haven't covered everything, and method coverage alone may not give us a full picture of the coverage. So another metric for defining code coverage is looking at the different conditional branches of execution and asking how many of those we cover.
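To make this concrete, here is a minimal sketch of the kind of class and single test being described. The names Foo and bar and the a > 1 condition come from the lecture; the parameter type, the branch bodies, and the expected values are assumptions filled in just for illustration.

```java
// Foo.java -- a minimal sketch of the class described above.
// The a > 1 condition comes from the lecture; the branch bodies are assumed.
public class Foo {
    public int bar(int a) {
        int result;
        if (a > 1) {
            result = a * 2;   // one branch of execution
        } else {
            result = 0;       // the other branch of execution
        }
        return result;
    }
}
```

```java
// FooTest.java -- a single JUnit test that calls bar with one value.
// Method coverage: bar is called, so 1 of 1 methods is covered (100%).
// Branch coverage: only the a > 1 branch is exercised, so 1 of 2 branches.
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class FooTest {
    @Test
    public void barWithValueGreaterThanOne() {
        assertEquals(4, new Foo().bar(2));
    }
}
```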
So in this case, if we're only passing one value for a, then we can only possibly be covering half the branches, because we need at least two values to exercise both branches. So in this case, we would say we have two branches but we're only exercising one of them with our test (a second test along these lines is sketched below). So another metric is to actually go and look, within the code, at all the different branches of execution and see how well they're covered.

Now, the goal of code coverage is not to send your developers off on a mission to count all of the if/else statements in your code. Obviously, that's not what you want to do. There are automated tools, called code coverage tools, that can calculate these types of metrics on your behalf. They can tell you things like what your method coverage is and what your branch coverage is across the different branches of execution. We could also go and look at things like conditionals: if we have expressions that can evaluate to true or false, how many of those do we evaluate to both true and false? There's a wide variety of code coverage metrics available for you to use. Now, my goal is not to give you an exhaustive list of those. I'm going to give you some basics, like method coverage and looking at conditional branches. You can go off and explore the other ones available to you, but I want to give you a sense of how this is useful.

Well, if we have these metrics, we can use them to do something important, and that is to help us figure out which parts of our code are tested and which parts of our code are less tested. This is really one of the most effective uses, if not the most effective use, of code coverage: giving us an understanding of what we know about our code, what we are evaluating and measuring, and what we are not measuring and evaluating. Now, it doesn't give us a complete picture of what we're not measuring, but it helps us understand which pieces of our code tend to be better measured or less measured.

Now, is it a metric of the quality of our tests, or the overall quality of our code? You may get different opinions; my answer is no. It'll help you know where you're testing a lot and where you're testing less, but it doesn't mean that your tests are good. You could still have really good coverage and miss bugs in your code. And you may have one really lucky test, or one really extensive test, that only goes down a very critical execution path in your code but happens to catch a whole lot of bugs that a big sweep of unit tests, each testing small point things, would never catch. So you can't say that one number is enough to tell you whether or not your tests are good. You can't just say, I've calculated my code coverage, it's 98%, my tests are good. You can't rely on that, but you can use it to help you know where you should probably be dedicating more time to building out your tests, and where you probably understand your code less because you're testing it and measuring it less.

Another thing that I want to say is that sometimes managers get into the mindset that code coverage is an absolute number that we must achieve, and if we are not achieving this number, it means that we are not doing our job, or if we have achieved this number, it means that our code is good.
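Coming back to the branch-coverage point from above, here is a hedged sketch of what that second test could look like. It carries over the assumed Foo and bar names and values from the earlier sketch; the specific test names and expected results are illustrative, not prescribed.

```java
// FooTest.java, extended -- a second test drives bar down the else branch.
// With both tests, each of the two branches is exercised at least once,
// so branch coverage for bar goes from 1 of 2 to 2 of 2 branches.
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class FooTest {
    @Test
    public void barWithValueGreaterThanOne() {
        assertEquals(4, new Foo().bar(2));   // takes the a > 1 branch
    }

    @Test
    public void barWithValueOfOneOrLess() {
        assertEquals(0, new Foo().bar(1));   // takes the else branch
    }
}
```

A coverage tool such as JaCoCo (named here only as one common example on the JVM) can then report method and branch coverage for these tests automatically, rather than anyone counting if/else statements by hand.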
Treating coverage as an absolute target like that is not correct; it's a misuse of code coverage. Code coverage is really a tool for understanding where you've tested and where you haven't tested. It's not a measurement of how good the software is. Now, in many cases, code coverage may correlate with better software, because developers who are doing lots of testing and producing high code coverage are probably being very thoughtful and conscientious about measuring the impact of their changes, trying to evolve the software, and discovering and correcting bugs early. But it doesn't guarantee that. And a lack of code coverage doesn't guarantee that the developers have written bad code, or that they aren't being conscientious about trying to detect bugs.

So you need to be careful, when you use these metrics, to use them in the right way. They're a tool for knowing what's tested and what's not as well tested. They're not a pure metric of quality; they're an indicator of potential quality. They may help point toward quality, but they're not a guarantee of it. So managers should not use them that way, and you should not feel that, just because you've achieved a particular code coverage number, you are all set. That's not the right use of this metric.