Why does evaluation matter? Anytime you're designing something for people, you need to make sure that they can interpret the meaning you've embedded within your designs. While we have lots of experience with listening, we don't have much experience talking critically about sounds. On the other hand, we have tons of experience analyzing and talking about visuals, and this has been almost ingrained in us since we were kids. Over our lives, we've built a common language and set of descriptors for talking about visuals. I would bet that if I asked you, "What's one app whose design you don't like, and why?" most people could tell me about one. But if I asked about an app with sounds they don't like, I might get fewer responses. Since it's harder for people to talk about sounds, checking in with users helps us learn how well our designs are working.

So how do we make these check-ins useful? Just asking for feedback isn't always enough. One way to do this is through careful planning of the questions we ask during each check-in. When creating a check-in or evaluation, we should always begin with our overall goals: what are the things we want to learn from this evaluation? In the rest of this lesson, we'll go through some question categories, or underlying characteristics, that we might want to learn about during an evaluation. There are three high-level questions or ideas that we often measure for sound design. The first category of question is comprehension and interpretation, the second category is experience, and the third category is usability. There are a lot of individual questions that we could ask for each of these high-level categories. The table to the right is meant to give an overview of a few that we've brainstormed. We'll cover a couple of these in more detail, but we won't have time to talk about each specific question that we could ask in these categories. If you want to learn more, check out the full list of brainstormed questions in the supplemental resources.

Let's go through some questions that can help us understand how well users are comprehending and interpreting the sounds. We might want to know if the sounds match their expectations based on the visuals and the learning tool, or maybe we want to know if the entire set of sounds is cohesive. Do they go together as a group and make sense? If all of the sounds in my design were metallic but one was rubbery, that could break the user's expectations and confuse them. They might spend a lot of time trying to figure out why that one sound doesn't fit, instead of using that time to interpret the sounds. Or maybe we want to know if the user can tell the sounds apart. If they can't differentiate the sounds from one another, then they probably can't track what's happening with each sound.

Experience. Other questions help us understand someone's experience using the sounds or the sound-enhanced tool. Do they like it? Are the sounds nice? Are they annoying or fun? Even if the sounds are comprehensible, and a user could describe the mapping, if they think the sounds are bad, they won't enjoy using the tool.

Finally, usability. Usability covers some of the more general ideas for good system or interface design. Are the tool and the sounds easy to learn? Are they easy to remember? Can users recover from errors, or reset the sound to try another scenario? Usability generally covers the effectiveness, efficiency, and satisfaction of using a tool or system to accomplish some goal.
Asking these three types of questions can help us understand what a potential user thinks about our sound designs. But why is this important? You are not your users, and your sound mappings should make sense for your user group. People from each group will have different expectations or prior knowledge about the concepts you're conveying, and evaluations can help you decide whether or not a design is conveying what you want. So let's explore two cases where we used an evaluation for our own design mappings.

At the start of a project, you might already have an idea about what mappings or sound design you want to use. Sometimes this works out, but previous experience doesn't always lead to the best mapping. Let's go through an example from Gravity Force Lab: Basics. In this sim, you can change the size of the two masses. We wanted to represent these changes through sound, so we chose to use a mapping similar to the example we showed previously, where larger objects have a lower pitch: a larger mass would have a deeper pitch than a smaller mass. This is a reasonable mapping, but we also needed to consider how to represent the changing value of force. In this example, we used a lower-pitched sound for a small amount of force and a higher-pitched sound for more force. These two mappings work individually to convey the changing mass and force values. It could be okay to have opposing mappings like this, where one gets higher to show a smaller amount and the other gets higher to show a larger amount, but you should make sure that potential learners can understand those differences and track all of the sound changes easily. To learn whether this worked or not, we evaluated this design through interviews with learners to check the mappings.

Let's explore another example, this time from Ohm's Law. As we were brainstorming possible designs, we had lots of different ideas for how to convey the important pieces of information. The most important things we wanted to highlight were the two relationships: the relationship between voltage, V, and current, I, and the one between resistance, R, and current, I. To do this, we needed to convey what was happening to current, I, in general, and we considered providing reinforcement about the changing values of R or V. One way to show the slider changes would be through the physical changes they produce in the circuit. In this version, increasing the voltage slider would add batteries and play an earcon like this. Changes to the resistance slider would similarly be reflected with a bubbling sound. On top of those two sounds, we planned to use a separate sound clip to show the changing value of current. There are lots of different ways that we could have conveyed the slider changes; we could have used more abstract sounds for both of the sliders. In one version, we used pitch mappings to represent higher or lower values for the sliders. We could even use similar pitch ranges but different timbres to make the slider sounds easier to differentiate. On top of the slider sounds, we wanted to explore using a pitch and tempo mapping to emphasize the change to the value of current, and even from that simple idea, the exact sound for current could vary in complexity. In one version, it was a simple woodblock sound; in others, it was a much more complex sound. In this scenario, our evaluation goal was to explore different mapping strategies and narrow down to a smaller set of sound designs to move forward with.
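To make the idea of opposing pitch mappings concrete, here is a minimal sketch of how a normalized value could be mapped to pitch in each direction, assuming a browser environment with the Web Audio API. The frequency range, function names, and structure are illustrative assumptions for this lesson, not the actual implementation used in the sims.

```typescript
// Minimal sketch of two opposing pitch mappings (illustrative only; constants and
// function names are assumptions, not the sims' actual code).
const audioContext = new AudioContext();

const MIN_FREQ = 220; // Hz; arbitrary lower bound chosen for illustration
const MAX_FREQ = 880; // Hz; arbitrary upper bound chosen for illustration

// Force mapping: a larger normalized value [0, 1] maps to a HIGHER pitch.
function forceToFrequency(normalizedForce: number): number {
  return MIN_FREQ + normalizedForce * (MAX_FREQ - MIN_FREQ);
}

// Mass mapping: a larger normalized value maps to a LOWER pitch (the opposing direction).
function massToFrequency(normalizedMass: number): number {
  return MAX_FREQ - normalizedMass * (MAX_FREQ - MIN_FREQ);
}

// Play a short tone at the mapped frequency so the two directions can be compared by ear.
function playTone(frequency: number, durationSeconds: number = 0.3): void {
  const oscillator = audioContext.createOscillator();
  oscillator.frequency.value = frequency;
  oscillator.connect(audioContext.destination);
  oscillator.start();
  oscillator.stop(audioContext.currentTime + durationSeconds);
}

// A large mass sounds low, while a large force sounds high:
playTone(massToFrequency(0.9));  // low pitch
playTone(forceToFrequency(0.9)); // high pitch
```

Hearing the two mapped tones side by side like this is exactly the kind of thing an evaluation can probe: can learners keep the two directions straight while both values are changing?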
For both examples, we wanted to know if the mappings were confusing or hard to distinguish from each other. For each evaluation you plan, knowing what you want to learn about the interpretation, the experience, and the usability of your sound and learning tool design will drive what kind of evaluation you run and what questions you ask. In the next lesson, we'll cover other details about evaluation. From there, we'll move into a few lessons covering questions and evaluation styles.