Hi. In the previous lecture I talked about the rational actor model. In the rational actor model, individuals have objective functions and they make optimal choices, optimal decisions, given those objectives. In this lecture I want to talk about something called behavioral models. Now, behavioral models are critical of the rational actor assumption, and they're critical for two reasons. One is that there's a lot of data, from experiments in the laboratory and from just looking at the real world, showing that people systematically deviate from optimal choices. The other is that as we come to understand how the brain works more deeply, there's evidence from neuroscience that the very way our brain is structured, the way we encode and represent information, the way we think, causes us to systematically differ from what the rational actor model assumes.

Now, I obviously can't give a full accounting of the behavioral revolution within economics, or of all of psychology, in one lecture. So what I'm going to do is hit some high points: I'm going to talk about four well-documented biases and their implications for how we think about modeling people.

Before I get there, though, a little background. Daniel Kahneman is a psychologist who won a Nobel Prize, in economics actually, for his work on how people systematically depart from the rationality assumption. He has a recent book out called Thinking, Fast and Slow, and in it he makes the following point: you can think of the brain as running two sorts of processes. There are slower processes, a little closer to rational, that carefully work through information, and there are fast processes that run on emotion or quick cues. These fast processes can make us biased in exactly the ways the rational actor model assumes we're not. The rational actor model assumes we think slowly and carefully about everything; Kahneman argues we think both fast and slow, and as a result we make some mistakes.

There's another book, Nudge, by Richard Thaler and Cass Sunstein, which argues that these biases have real implications for policy, and that's one of the things we're going to talk about. It's one thing to say people make mistakes. Remember, in the last lecture I said: some people err high and some err low, so the mistakes just cancel out and there's nothing we really need to do about them. What Thaler and Sunstein argue is that because these mistakes are systematic, they don't cancel, and that has implications for how we construct policy. We'll talk about that as well.

Okay, so what are we going to do? We're going to take four particular types of biases that are well documented. The first is called prospect theory: the idea that we look at gains and losses differently. The second is called hyperbolic discounting: how much we discount the future, and how that changes depending on how far away the future is. Third, there's the status quo bias: the tendency to just stick with what we're currently doing and not make changes; this is one with big implications for policy. And the last one is called the base rate bias: we can be unduly swayed by whatever number we happen to be thinking about at the moment.
So what I want to do is show you some examples of each of these, talk a little about evidence from experiments and from the real world suggesting that these things really are biases, and then at the end push back a little and be somewhat critical of the whole approach. Okay? Alright, let's get started.

First, prospect theory. Suppose I say to you: you've got two options. Option A, I give you 400 bucks for sure; here's $400 right now. Option B, I flip a coin: comes up heads, you get 1,000 bucks; comes up tails, you get nothing. So you have to decide: 400 bucks for sure, or this 50-50 proposition. If you give this choice to people, a lot of them choose A. A lot of people say, "Look, I'll take the 400 bucks." Now suppose I ramp it up: $400 million for sure, versus a 50-50 chance at $1 billion or nothing. Almost everyone chooses A. So as the amounts get larger, people tend to be what we call risk averse over gains.

But here's what Kahneman showed; this is what prospect theory is. Suppose it's a loss. Option A: I just take $400 from you. Option B: we flip a coin, and half the time I take $1,000 from you, which is even more, and half the time I take nothing. It turns out people are actually more likely to choose B in this setting, because we're risk loving over losses. So we're risk averse over gains and risk loving over losses, and that's different from what you'd get from a rational actor assumption. It's a systematic deviation, and it helps explain why people take gambles they maybe shouldn't take.

Okay, that's one. Let's go to the next one: hyperbolic discounting. Suppose I say to you: option A, I'll give you 1,000 bucks right now; option B, wait until tomorrow, and I'll give you $1,005. What do you take? A lot of people say, "You know what? Just give me the 1,000 bucks today." Most people would rather get the $1,000 right now than wait a day for just five dollars more. Now suppose I say: option A, I'll give you $1,000 a year from today; option B, I'll give you $1,005 a year and a day from today. Here most people say, "Well, look, we're waiting a year anyway. What's one more day? I'll take option B." But if you write down a rational actor model where I'm maximizing wealth with some fixed discount rate, then if I choose A in the first case, I should also choose A in the second. Most people don't do that, and the reason is that we discount the near future a lot more than we discount the same stretch of time in the far future. This is called hyperbolic discounting: immediate gratification matters a lot to us, so we discount a short delay starting right now far more heavily than the same short delay far down the road.
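To make these last two deviations concrete, here's a minimal sketch in Python. It's my own illustration, not something from the lecture: the curvature 0.88 and loss multiplier 2.25 are Tversky and Kahneman's published median estimates, the 0.42 is roughly the decision weight their probability-weighting function assigns a 50 percent chance, and the discount parameters are made up purely for clarity.

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave over gains (risk
    averse), convex and steeper over losses (risk loving)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

W_HALF = 0.42  # decision weight people put on a 50/50 coin flip

# Gains: $400 for sure vs. a coin flip at $1,000 or nothing.
print(prospect_value(400) > W_HALF * prospect_value(1000))    # True: take the sure $400

# Losses: lose $400 for sure vs. a coin flip at -$1,000 or nothing.
print(W_HALF * prospect_value(-1000) > prospect_value(-400))  # True: take the gamble

def exp_value(amount, days, d=0.99):
    """Rational-actor (exponential) discounting: every day of delay
    costs the same fraction, so the two choices below must agree."""
    return amount * d ** days

def hyp_value(amount, days, k=0.1):
    """Hyperbolic discounting: the first day of delay hurts far more
    than the day between day 365 and day 366."""
    return amount / (1 + k * days)

for name, v in [("exponential", exp_value), ("hyperbolic", hyp_value)]:
    print(name,
          "now:", v(1000, 0) > v(1005, 1),       # prefer $1,000 today?
          "| a year out:", v(1000, 365) > v(1005, 366))
# exponential now: True | a year out: True   -> consistent either way
# hyperbolic  now: True | a year out: False  -> preference reversal
```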
This has what you might call the "chocolate cake" implication. Suppose I say: I want to be fit, I want to stay in great shape, I want to be healthy. And someone asks me, "A week from now, at dinner, would you forgo the chocolate cake for dessert?" I'll say, "Absolutely, I'm going to forgo the chocolate cake, because I really want to get in shape." But if I'm actually sitting at dinner and someone puts the chocolate cake right in front of me, then even though I want to lose weight, even though I really want to be fit, I can't put it off; it's just right there in front of me. In Kahneman's language, I'm thinking fast. I don't go through the long, drawn-out thinking that would let me make the rational choice, so I choose the cake now and make what would be a suboptimal choice.

Third, something called the status quo bias. Suppose you get a form at work that says: check this box to contribute to the pension fund. And I'm sitting there thinking, "Well, if I check here, that means money comes out of my paycheck," so I don't check it. Alternatively, my firm could use what's called a negative checkoff: check this box to NOT contribute to the pension fund. If I check here, they won't contribute; otherwise they will. And here, again, I think from a status quo standpoint: "Oh, I'm already contributing. No, I don't want to pull that money out." So it seems a little like the prospect theory thing, right? What happens is that most people, in either case, won't check the box.

Now how do we know this? We can look at organ donation. In England, if you want to donate your organs, the form says: check this box to donate. And you know how many people check the box? Twenty-five percent. In much of the rest of Europe, the form says: check this box to NOT donate your organs. And you know how many people check that box? Ten percent. So in the European countries where you have to check the box to opt out, 90 percent of people end up donating their organs; in England, where you have to check the box to opt in, only 25 percent do. That's an extremely large status quo bias, and again a deviation from rationality.
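The arithmetic behind those donor rates is worth spelling out. Here's a tiny hypothetical sketch, just restating the figures above in code: in both regimes roughly the same thing happens, namely most people leave the box alone and inherit the default.

```python
def donor_rate(default_is_donor, fraction_who_check):
    """Checking the box flips you away from whatever the default is;
    everyone else inherits the default."""
    return 1 - fraction_who_check if default_is_donor else fraction_who_check

print(donor_rate(default_is_donor=False, fraction_who_check=0.25))  # England opt-in: 0.25 donate
print(donor_rate(default_is_donor=True,  fraction_who_check=0.10))  # opt-out Europe: 0.90 donate
```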
Okay, last one. What I want you to do is think about this box. It was made sometime during the last century, the 1900s. I want you to guess the year it was made, and write that down. Got it written down? Now I want you to guess how much the box costs: how much you think it would be worth if you were to buy it on the web. Write that down too.

This is a bias called the base rate bias. Look at the two numbers you wrote down. The first is the year you thought the box was made; suppose I guessed 1960. Now, what price did you write down? Maybe 63 dollars. Those two numbers tend to be fairly close. That's the base rate bias: get people thinking about one number, then ask them for a completely different number, and the second number tends to stay close to the first. So you get a bias that makes no sense at all. You can even ask people to think of the last two digits of someone's phone number and then price the box, and what you'll find is that their prices land fairly close to those last two digits. The base rate bias is just a clear deviation from rational, optimal behavior.

Okay, so we've looked at four things: prospect theory, hyperbolic discounting, status quo bias, base rate bias. They're all well-documented deviations from rational behavior. And there's a ton of these: if you go look out on the web, you'll find lists of hundreds and hundreds of biases that have been found in the laboratory.

Now, there are people who are critical of all this, and one of the acronyms you sometimes hear in the psychology department is WEIRD. When they say the results are WEIRD, what do they mean? WEIRD stands for Western, Educated, Industrialized, Rich, and Democratic. Most of these biases, even though there are lots of them, have been found in experiments on people like me: educated people from Western, rich, industrialized countries. So one of the things we're trying to figure out is how many of these biases hold up across different populations and different cultures. Some of them do, and some of them probably won't.

There's another issue: people tend to learn. Remember, we talked about how, if the stakes are large, maybe people learn their way out of the bias. Take the Monty Hall problem, where you pick the door you think hides the prize. People suffer from a status quo bias there: they stick with their original door. But after they play the game enough times, that bias goes away. So one of the questions is: how strong are these biases under repeated interaction, and do they go away?

Last point about these behavioral models. When you think about modeling, you may say, "You know what, I think people suffer from hyperbolic discounting, I think there's prospect theory, I think there's a base rate bias; I want to include all of these biases in my model." Computationally, that can be a really difficult thing to do. So one reason why, as we go forward, we'll be using simple rules, or assuming people optimize even though we know people suffer from biases, is that it can just be computationally hard to write down a model that includes all these biases. Nevertheless, the biases are there. When we think about a model of people, we don't necessarily want to assume people optimize; nor, as we'll see next, do we necessarily want to assume people just follow simple rules. Instead, we may want to treat people as the complicated, messy things they are, biases and all. It's just that that can be hard to do, and less elegant.

Okay, so how do we think about this? Here's one way. If I'm writing down a model, it may not be a bad start to assume people make optimal choices given some simple objective function. Then I want to ask myself: what biases are out there? What's the list of documented biases? Given that list, do I think any of them are going to come into play? How large would they be, and how relevant are they for this particular case? I might also say: maybe I should run some experiments, or go look out in the world and see whether people actually behave the way my rational actor model says they should. And if not, if there's reason to believe one of these biases is kicking in, then I want to include that bias. But because there are so many of them, I can't include them all, so I want to think about which ones are most relevant.

So what you see a lot of times, in models that try to explain behavior, is what I'd call "rational minus" or "rational plus": a little less than fully rational behavior, because what the modeler has done is take rationality and add in a bias. And that's one way to write down useful models of individuals.
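As a sketch of what "rational minus" might look like in practice, and purely my own illustration rather than anything from the lecture, here's a baseline expected-value maximizer with one documented bias bolted on: loss aversion from prospect theory. The function names, the gamble encoding, and the 2.25 weight are all assumptions made for the example.

```python
def rational_choice(gambles):
    """Baseline rational actor: pick the gamble with the highest
    expected value."""
    return max(gambles, key=lambda g: sum(p * x for p, x in g))

def rational_minus_choice(gambles, lam=2.25):
    """Rationality plus one bias: losses loom lam times larger than
    gains, so positive-expected-value gambles can get turned down."""
    def biased_ev(g):
        return sum(p * (x if x >= 0 else lam * x) for p, x in g)
    return max(gambles, key=biased_ev)

# A gamble is a list of (probability, payoff) pairs.
coin_flip = [(0.5, 1000), (0.5, -800)]  # expected value: +$100
stay_out  = [(1.0, 0)]                  # decline the gamble

print(rational_choice([coin_flip, stay_out]) is coin_flip)       # True: takes the flip
print(rational_minus_choice([coin_flip, stay_out]) is stay_out)  # True: stays out
```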
Okay, thanks.