[MUSIC] Welcome to today's lecture, where we will be talking about evaluating training programs. We've talked previously about planning the programs, and you've been working on your grids or matrices where you've identified the tasks, the methods, and the resources. Today, we'll be talking about evaluation issues, and you will be able to integrate that into the last column of your training grids. We'll be talking primarily at this stage about what we need to look at: defining evaluation and thinking about some of the practical ways we can evaluate our training programs.

Evaluation has been defined generally, and this can be applied to any kind of program, not just training, as the comparison of an object of interest against a standard of acceptability. In this case, the object of interest is the trainees' performance or achievement of the tasks that we've spelled out in our training objectives. And as you recall, when we talk about performance, we're talking about trainee behavior or worker behavior back on the job. So if you were doing a community health education program, you would be concerned about community member behavior or organizational behavior. So we're specifying that what we're looking at is change in behavior, and in this case, a change in the performance of the trainees, especially when they get back on the job. The standard of acceptability is the level of achievement specified in the objectives. Do we expect them to get 80% of the knowledge that we're trying to impart about managing diarrhoeal diseases? Are we hoping that they will be able to describe at least three major preventive measures for malaria? We need to specify that in our objectives. Again, setting our objectives is the key to helping us evaluate our programs.

Another thing that's important to remember is that evaluation is an ongoing process. We set the objectives in the beginning based on our baseline information. This sets the stage for implementing the training.
We want to monitor how the training is implemented. Are we actually delivering the knowledge and skills that will help people achieve those objectives? And then there is the final evaluation, looking to see what has actually happened in terms of trainee performance.

Evaluation also helps us be accountable. According to the group Management Sciences for Health, providing training to staff has many costs. We use resources in the preparation, and as we've talked about, these include staff, materials, transportation, the venues for holding the training, and the lodging and travel for the participants. The participants are also away from their workplace, so the agency they work for bears a cost because of their absence during the training. All of this implies that training has direct costs during the preparation and running of the workshop, but also indirect costs while the staff are away from work. Therefore, the managers of various programs and agencies need to know whether we can justify these costs. Is the training worthwhile? Will the training actually make a difference in staff performance? And will this ultimately improve the ability of the agency or organization to deliver the services and products it is trying to deliver?

We want to make sure that staff members not only acquire new knowledge, attitudes, and skills from the training, but that when they get back, they can actually put these into practice on the job and demonstrate to the managers that they've gained something. But of course, we have to gather information on this in a systematic way and report it. It's not left just to the managers to observe whether there's a difference; we have to report, especially to the people who have sent trainees and the people who have funded our training, that yes, these changes have occurred, these improvements in performance have occurred.
There are different types of training evaluation, and in the previous lectures we've already discussed the fact that evaluation begins with our needs assessment. We do baseline or formative evaluation, where we try to find out the level of knowledge and skill prior to training. We want to evaluate our inputs. Are they adequate? Do we have enough materials, enough funds, enough transportation, adequate meeting rooms, et cetera, to deliver the training in the way we wished? And we have also talked about process evaluation. Are people gaining? Are they learning? What is the learning climate like? Are people satisfied? Are they comfortable?

Another type of evaluation, of course, is outcome evaluation, and that is what we will be discussing in today's lecture and this module. We want to document and assess whether new and improved knowledge, attitudes, and skills, abbreviated as KAS, have been acquired after the training. Impact evaluation, which we will discuss in the next lecture, goes out to the field and looks at job performance, organizational performance, and program performance. And hopefully, if the trainees' work improves the quality of services delivered, then we will see demographic and health changes: reduced fertility and reduced mortality rates in the community based on the services that are provided, such as we see with our immunization programs here. If the staff learn how to manage their cold chain better, keep better records, mobilize the community, and involve the community better, hopefully more people will be vaccinated and fewer children and mothers will have illness.

In outcome evaluation, what are we looking for? We want to gather evidence that the trainees have acquired, as we mentioned, new knowledge, attitudes, and skills. We want to get feedback from the trainees themselves on their perceived gains and gaps. So we want to try to measure the changes objectively, but we also want to make sure that the trainees themselves perceive that they have gained something.
And if there are aspects that are missing, learning that hopefully before the very end of the program means we might be able to take some remedial actions, plan additional follow-up with the trainees, or get ideas for improving the training process in the future. We're also looking for the trainees' impressions of the quality of the different sessions. We want to get their specific comments on the adequacy of the arrangements, their comfort level, and the appropriateness of the level of challenge they faced. Were they able to understand the materials? So we want to get feedback from the trainees on all kinds of aspects, both the logistical ones (space, facilities, time, comfort) and the level of the materials: was it comprehensible to them?

We have a variety of tools to perform our outcome evaluations. And again, the outcome evaluation occurs before the trainees leave the training site. Just as we've had a baseline and pretest, we can have a post-test questionnaire to compare results. We can observe trainee performance during return demonstrations and practicals. We can devise feedback forms that the trainees fill out to give their comments on everything from logistics to content. We can hold focus group discussions (FGDs) among participants to get feedback. And of course, we have our training committee, and that committee needs to meet to consider the results of all of these tools and come up with a summary of the process evaluation as well as the content evaluation, the results of the pretest/post-test comparisons, et cetera, so that ultimately a report can be produced and given as feedback to sponsors and participants. In our next sections, we will look at some of these specific types of tools and how they're used.