In Course 5, we've covered quite a few things, so I want to give you a summary of where we've been. Throughout, we've been talking about dealing with missing data, and we covered weighting and imputation to address those issues. There were four modules.

The first one was general steps in weighting. We talked about the quantities we can estimate (means, totals, proportions, quantiles), the goals of estimation, how you interpret what you did statistically, and how you use weights to reduce bias and variance. We talked about how to correct for coverage errors and how to use auxiliary data to improve estimates. And then we looked at the effects of weighting on standard errors, which need to be accounted for, so you need specialized software to do that.

The specific steps we looked at for probability samples were computing base weights, then making nonresponse adjustments, and then calibrating to external controls, which, you may remember, can be used both to correct for things like coverage errors and to reduce standard errors. We looked at some software in R that allows you to actually carry out all of these steps.

Then in Module 4, we looked at imputation. One reason to impute is to create a complete data set so that you don't drop any cases when you do analyses; that's important. Also, if you don't impute for missing items, you may actually produce biased point estimates because of the missingness pattern in your data. We looked at methods for imputation: hot deck, predictive mean matching, regression estimation, and some others. And then we looked at a package in R called mice, which stands for multivariate imputation by chained equations. That is a good package to get to know in order to actually do multiple imputation and to properly estimate your variances afterward.

So that concludes Course 5.
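The weighting steps for probability samples mentioned above can be sketched numerically. This is a minimal illustration, not the course's R software: the selection probabilities, response indicators, and population control total below are all made-up numbers.

```python
# Step 1: base weights are the inverse of each unit's selection probability.
selection_probs = [0.1, 0.1, 0.2, 0.2, 0.2]   # hypothetical
base_weights = [1 / p for p in selection_probs]

# Step 2: nonresponse adjustment within a single weighting class --
# the weight carried by nonrespondents is spread over the respondents.
responded = [True, True, False, True, False]   # hypothetical
total_weight = sum(base_weights)
respondent_weight = sum(w for w, r in zip(base_weights, responded) if r)
nr_factor = total_weight / respondent_weight
nr_weights = [w * nr_factor for w, r in zip(base_weights, responded) if r]

# Step 3: calibrate the adjusted weights to an external control,
# here a known population total (e.g. from a census).
population_total = 40                          # hypothetical control
calib_factor = population_total / sum(nr_weights)
final_weights = [w * calib_factor for w in nr_weights]

# After calibration, the weights reproduce the population control total.
print(sum(final_weights))
```

In practice you would do this with specialized software (the course uses R), which also produces standard errors that correctly reflect the weighting.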
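Of the imputation methods listed, the hot deck is the simplest to illustrate: each missing item value is filled in with a value drawn from an observed "donor" in the same imputation class. A minimal sketch with invented data (the class labels and income values are hypothetical):

```python
import random

random.seed(0)  # make the random draws reproducible

# Hypothetical item data with missing values (None), grouped into
# imputation classes such as age groups.
data = [
    {"cls": "young", "income": 30},
    {"cls": "young", "income": None},
    {"cls": "young", "income": 35},
    {"cls": "old", "income": 50},
    {"cls": "old", "income": None},
]

# Build donor pools of observed values within each class.
donors = {}
for row in data:
    if row["income"] is not None:
        donors.setdefault(row["cls"], []).append(row["income"])

# Random hot deck: replace each missing value with a randomly
# drawn donor value from the same class.
for row in data:
    if row["income"] is None:
        row["income"] = random.choice(donors[row["cls"]])

print([row["income"] for row in data])  # complete data set, no None left
```

A single hot-deck fill like this gives you a complete data set, but multiple imputation (as done by mice in R) repeats the process several times so the extra uncertainty from imputing can be reflected in the variance estimates.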