Hi everyone. Today we will conclude our discussion of the types of evaluation. We will cover multi-arm evaluations, cost-effectiveness, and prospective versus retrospective evaluations. We previously discussed effectiveness versus efficacy as evaluation types. Remember, efficacy is about the true effect but is potentially less generalizable, while effectiveness is more real-world but makes it harder to be as confident in the effect you measure.

In evaluation, we can test one program's effectiveness, such as: is a given program effective compared to the absence of the program? Or we can evaluate different alternatives, such as: when a program can be implemented in several ways, which one is the most effective? In this type of evaluation, two or more approaches within the program are compared with one another to generate evidence on which is the best alternative for reaching a particular goal. These program alternatives are often referred to as treatment arms. For example, we could evaluate whether a $100 or a $250 incentive is needed to reach our target quit rate in the smoking cessation study example from last time. Evaluations testing alternative program treatments normally include one treatment group for each of the treatment arms, as well as a pure comparison or control group that does not receive any intervention. For that smoking cessation trial, we would test no incentive (just information on the benefits of smoking cessation) versus the $100 incentive versus the $250 incentive. Evaluations can also be used to test innovations or implementation alternatives within a program. For example, we could test whether it matters if an incentive is paid to a person by gift card or in cash.

Another important type of evaluation for influencing evidence-based policy asks what the bang for the buck is on policies. In other words, how much do we need to spend to get an increase in the outcomes we observe?
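The multi-arm design described above can be sketched in code. This is a minimal illustration, not the lecture's actual study protocol: the arm names and the seeded random assignment are assumptions for the example.

```python
import random

# Hypothetical three-arm smoking-cessation design: a pure control arm
# (information only) plus the two incentive arms from the lecture.
# Arm names are illustrative labels, not from a real trial.
ARMS = ["control_info_only", "incentive_100", "incentive_250"]

def assign_arms(participant_ids, seed=0):
    """Randomly assign each participant to one of the treatment arms.

    A fixed seed makes the assignment reproducible for this sketch.
    """
    rng = random.Random(seed)
    return {pid: rng.choice(ARMS) for pid in participant_ids}

assignments = assign_arms(range(9))
```

Comparing quit rates between the two incentive arms then answers the "which alternative is best" question, while comparing each arm to the control answers "is the program effective at all."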
We can address this return-on-investment question by adding cost information and doing two types of studies: cost-benefit analysis and cost-effectiveness analysis. Importantly, while similar, these are distinct methods. Cost-benefit analysis estimates the total expected benefits of a program compared to its total expected costs. It seeks to quantify all of the costs and benefits of a program in monetary terms and assesses whether benefits outweigh costs. Cost-effectiveness analysis is a form of economic analysis that compares the relative costs and outcomes, or effects, of different courses of action. Cost-effectiveness analysis is distinct from cost-benefit analysis because in cost-benefit analysis, the benefit part is quantified in dollars, not in measures of health outcomes. We frequently use cost-effectiveness analysis rather than cost-benefit analysis in health policy because it may be inappropriate to monetize health effects. For example, how much is one fewer case of colon cancer worth in dollars? That is a tough question.

Typically, a cost-effectiveness analysis is expressed as a ratio, where the denominator is a gain in health from a measure (for example, years of life, premature births averted, or sight-years gained) and the numerator is the cost associated with the health gain. The most commonly used outcome measure is quality-adjusted life years, or QALYs. Let's give an example. We can either develop a professional, robust emergency response system, or we can train volunteers and lay citizens in CPR and first aid. Using volunteer paramedics and trained lay people as first responders to accidents costs about $128 per life saved, whereas using a community-based ambulance costs about $1,100 per life saved. These figures come from global health settings.
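The cost-effectiveness ratio above can be written out directly. This is a minimal sketch of the arithmetic, assuming total program costs and lives saved that reproduce the lecture's per-life figures; the function name and the totals are illustrative, not from any standard package or the underlying study.

```python
def cost_effectiveness_ratio(total_cost, health_gain):
    """Cost per unit of health outcome (e.g. per life saved, per QALY)."""
    return total_cost / health_gain

# Hypothetical totals chosen so the ratios match the lecture's numbers:
# $128 per life saved (volunteer responders) vs $1,100 (ambulance).
volunteer = cost_effectiveness_ratio(128_000, 1_000)    # 128.0 dollars per life saved
ambulance = cost_effectiveness_ratio(1_100_000, 1_000)  # 1100.0 dollars per life saved
```

Note that the health gain stays in natural units (lives saved); nothing here puts a dollar value on a life, which is exactly what separates this from cost-benefit analysis.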
Notice that by measuring cost-effectiveness in terms of lives saved, all lives are treated equally, regardless of whether the person is an infant who might live another 80 years or a middle-aged person who can expect perhaps only another 40 years of life. To do cost-benefit analysis, we need to assign a dollar value to the life saved. While this is challenging, it is a frequent exercise in policy evaluations, and there are a variety of techniques for it, usually drawn from actuarial science. If we picked $100,000 per life-year saved and assumed that each person saved lived an average of 10 years longer, then we could compute the cost-benefit ratio. We would get $128 divided by ($100,000 times 10), or 128 per million, versus 1,100 per million for the ambulance. In this case, both of these are very cost-effective and have low cost-benefit ratios. There is some debate on what counts as cost-effective, but a ballpark is around $50,000 to $100,000 per QALY. The UK's National Institute for Clinical Excellence, or NICE, started with roughly $50,000 per QALY. Most analyses in the US tend to use $100,000 per QALY as a threshold.

Another dimension in which evaluations vary is whether they are prospective or retrospective. Prospective evaluations are developed at the same time as the program is being designed and are built into program implementation. Retrospective evaluations assess program impact after the program has been implemented, identifying treatment and comparison groups after the fact. In general, prospective evaluations are more likely to produce strong and credible evaluation results, for three reasons. First, baseline data can be collected to establish pre-program measures of the outcomes of interest. Second, defining measures of a program's success in the program's planning stage focuses both the evaluation and the program on these intended results.
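The back-of-the-envelope cost-benefit calculation above can be checked directly. Everything here comes from the lecture's own figures ($100,000 per life-year, 10 extra years, $128 and $1,100 per life saved); the variable names are just for the sketch.

```python
# Monetize each life saved at $100,000 per life-year times 10 extra years.
VALUE_PER_LIFE_YEAR = 100_000
YEARS_GAINED = 10
benefit_per_life = VALUE_PER_LIFE_YEAR * YEARS_GAINED  # $1,000,000

# Cost-benefit ratio = cost per life saved / monetized benefit per life.
ratio_volunteer = 128 / benefit_per_life    # 0.000128, i.e. 128 per million
ratio_ambulance = 1_100 / benefit_per_life  # 0.0011, i.e. 1,100 per million
```

Both ratios are far below 1, meaning monetized benefits dwarf costs, which is why the lecture calls both options very cost-effective.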
Third, and most importantly, in a prospective evaluation, the treatment and comparison groups are identified before the program is implemented. Randomized trials, for example, are a prospective technique where the program and its evaluation are wedded to one another. Many of the quasi-experimental methods we discuss are used frequently in retrospective evaluations, though they can be used prospectively too. In particular, they are valuable in prospective evaluations where we may not be able to randomize for ethical or pragmatic reasons; such prospective evaluations still benefit from the pre-planning. Most evaluations, especially those done independently, are retrospective. That being said, as you plan programs in your workplace that you know you want to measure, you can plan ahead and leverage these advantages of prospective designs.

Today, we concluded our section on types of evaluation. We discussed multi-arm studies to compare the effectiveness of alternative interventions, cost-effectiveness and cost-benefit analysis, and retrospective versus prospective designs.