When we talk about equity and technology, we tend to focus on equity in technological design, and we tend to focus on engineers, the tech industry, and companies as the main places where we need to make change. What I hope you got out of the readings for this week is that those aren't the only places to make change: public policy is enormously important in this space, and to my mind we aren't focusing on it nearly enough.

When we talk about public policy, technology, and equity, your first idea might be to diversify the people involved. You might say we need a more diverse set of scientists, engineers, and innovators: more people from historically marginalized groups getting government grants, more small business loans to entrepreneurs from those groups. But there are many, many other ways in which policy has a role to play, and is already playing one.

Consider the patent system, which seems like a very technical, administrative area of law. Yet decisions about whether to grant a patent, and whether to grant a broad or a narrow one, have really serious implications for equity. Most obviously, they affect the cost of a drug or medical device: if it's under patent, then a single company or entity is likely to be the only supplier, so it's going to be more expensive. But a patent monopoly often also means that others aren't even able to do research. That is certainly the case with the pulse oximeter, which is now famously less accurate for people of color. Part of what makes it difficult for others even to understand the problem is that the developer, the company that currently supplies pulse oximeters, holds a patent monopoly and is not willing to provide data to anyone who asks.

So one way to deal with that is to challenge certain kinds of patents, or to make sure they are more limited. Another is to think about open data requirements: the Biden administration, in fact, issued guidance that any federally funded research should be openly available, along with the data on which it is based. That matters because it means more people can engage with that information and ask critical questions about it. A third place, to go back to the pulse oximeter example, is the point at which a device is approved by a regulatory agency, in the United States the Food and Drug Administration: questions of equity could be incorporated into that approval, so that the design has to work as well for communities of color as it does for white people. And finally, there is the question of what the government spends its research and development money on: who is making those decisions, and are they really maximizing equity?

That's just a handful of examples; I could go on and on. The point is that there are a lot of places where policy matters that we tend not to think about much. One of them is what governments themselves choose to focus on, the areas where most governments these days are spending enormous amounts of money.
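To make the earlier pulse oximeter point concrete, here is a minimal sketch of the kind of subgroup accuracy audit that open data, or an equity-aware approval process, would make possible. It assumes a hypothetical validation file pairing device readings with arterial blood gas reference measurements and a group label; the file name, column names, and thresholds are illustrative assumptions, not drawn from any real regulatory submission.

```python
# Hypothetical subgroup audit of pulse oximeter accuracy.
# Assumes a CSV with columns: spo2_device, sao2_reference, group
# (all names are illustrative; no real dataset is implied).
import pandas as pd

readings = pd.read_csv("oximeter_validation.csv")  # hypothetical file
readings["error"] = readings["spo2_device"] - readings["sao2_reference"]

# "Occult hypoxemia": the device reads reassuringly high while the blood gas
# reference shows dangerously low oxygen. Thresholds here are illustrative.
readings["missed_hypoxemia"] = (
    (readings["spo2_device"] >= 92) & (readings["sao2_reference"] < 88)
)

by_group = readings.groupby("group").agg(
    mean_bias=("error", "mean"),              # systematic over/under-reading
    missed_rate=("missed_hypoxemia", "mean"),  # share of dangerous misses
)
print(by_group)

# A regulator could require that the gap in missed_rate between any two
# groups stay below a pre-registered threshold before approval.
gap = by_group["missed_rate"].max() - by_group["missed_rate"].min()
print(f"Largest between-group gap in missed hypoxemia: {gap:.3f}")
```

The point is not the particular thresholds: it is that without access to the underlying data, nobody outside the company can run even a check as simple as this one.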
Renewable energy is a good example of such a priority: governments are funding renewable energy research and development and trying to help small companies jump-start their work in this area, because they are concerned about environmental sustainability and, specifically, climate change. Perhaps most famously, this kind of directed, mission-oriented research is what led to the Apollo program and a man on the moon. We can do those sorts of things, and we've had cancer moonshot programs and wars on cancer in the United States as well as elsewhere. Mission-oriented research is in fact very popular outside the U.S.: especially in East Asia there is much more focus on what are called national innovation systems and mission-oriented research than in the United States, where, by and large, even within these mission programs, there is a lot of emphasis on the individual curiosity of the investigator.

But imagine if we were to focus our mission-oriented research priorities on questions of equity, or if we were to say that the traditional cancer moonshot, war on cancer, or space program needs to have an equity component incorporated into the conversation. That might lead to different kinds of projects being funded, different kinds of outcomes, and ultimately benefits for a wider swath of people.

There was a very interesting study a few years ago analyzing the grants made by the National Institutes of Health in the United States. What the analysts found was that the peer review committees that make these funding decisions were not necessarily disadvantaging researchers of color; they were disadvantaging particular kinds of topics. The least-funded research had to do with women, I'm sorry to say: proposals with words like reproductive, ovary, or menstrual received the least funding. Topics related specifically to equity, such as socioeconomic disparity, also received very little. The research that got the most funding featured words like cornea or eye: very technical work, often connected to some sort of commodity, some biochemical or biotechnological intervention that could produce a product at the end of the day, rather than, say, projects on socioeconomic disparities that might point to different ways of achieving equity. So these are the kinds of choices, embedded in the very technical, that have enormous implications for equity, and one can imagine very different ways of structuring research funding that would produce very different outcomes.

So here is one way to break down considerations of equity for technology policy, which could also be useful for technology development itself. The first is design equity: the values, biases, and assumptions that are embedded in technical details. To go back to the pulse oximeter, there are particular values embedded in its technical design about communities of color, namely that they are essentially not the consumer of this technology. Then there is distributional equity: is the technology affordable, and is access to it equitable?
What kinds of things affect that? Accessibility, for example, and patent and data policies. Then there is what I might call investment equity: who participates in a technology's development, and how are their contributions valued? Most obviously you might think of the people in clinical trials, who help ensure that the drugs we take are safe and effective. But these days you could argue that we are all participating in technological development through our data, and the question is what we are getting in return for the incredibly valuable data we provide to power increasingly AI-enabled, machine learning kinds of devices. There is procedural equity: who is actually making the decisions, who has influence over them, who has power at every stage of the process, from setting priorities to distribution? And then there is historical legacy: to what extent are technologies and their developers really considering the context within which these technologies are being built?

I want to talk about an example here, based on an analysis we did through our technology assessment project, focused on large language models. Large language models are a type of machine learning that is still emerging, and the promise is that they can recognize text, summarize it, translate it, predict it, and generate it. There are all kinds of promises around this technology: that it can democratize knowledge and create a level playing field, allowing an international community of scientists, for example, to interact with one another; that it will make a lot of customer service much easier; that it will let individuals more easily access technical information and sometimes even produce the legal briefs or other legal documents they need without technical expertise. So there is an implicit assumption about democratization and, as I've suggested, an implicit understanding of equity.

But if you apply the concepts we've been talking about in this class, which is what we tried to do in the analysis you read, a few things come out very clearly about the design of these technologies. The first is that this isn't a magical technology. It is created by companies; it is based on gigantic datasets and very complex algorithms, and the only entities that can afford to build it are a very small handful of the big, familiar tech companies. That has a huge impact on equity, because it means the group of people developing this technology is not particularly diverse, and the considerations that go into a technology are going to reflect the people who are making it. In addition, the actual data, the texts on which these large language models are trained, come largely from old books and text scraped from the internet, often including a lot of Reddit threads, and that is a particular kind of language. Historically we are not known for being a particularly equitable society: the arc of the moral universe may bend toward justice, but the texts we have inherited reflect where it started. A simple way to see what a model absorbs from that inheritance is to probe it directly, as in the sketch that follows.
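This is a minimal sketch of how one might probe a publicly available model for the associations it has absorbed from its training text. It uses the Hugging Face transformers library and the small public GPT-2 model purely as an illustration; the prompts and model choice are my assumptions, not the setup from our report.

```python
# Probe a small public language model for associations inherited from its
# training data. Requires the transformers library (and a torch backend).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Template prompts that differ only in the group mentioned.
prompts = [
    "The man worked as a",
    "The woman worked as a",
]

for prompt in prompts:
    outputs = generator(
        prompt, max_new_tokens=8, num_return_sequences=5, do_sample=True
    )
    completions = [o["generated_text"][len(prompt):].strip() for o in outputs]
    print(prompt, "->", completions)

# If the completions for the two prompts diverge systematically (for example,
# different occupations for "man" and "woman"), that divergence was learned
# from the training text, not introduced at inference time.
```

The sketch shows the kind of divergence audits can surface; it does not, of course, tell you how to remove the bias once it is baked in, which is exactly the problem I turn to next.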
The problem is that, invariably, the kinds of text these large language models produce can be racist, sexist, homophobic, and xenophobic. They are deeply problematic, that is embedded in the design, and yet the technology is so complex that it is very difficult to root out. So here you see problems with procedural equity (who is involved in the decision making), problems with design equity (what is in the design), and problems with historical legacy (what is embedded in the datasets). There are also questions about how the technology might be used differently depending on the population engaging with it and who the assumed customer is. There is a lot of concern that it might be used to influence social services in nefarious ways, even as it proves useful in more positive ways, for example for understanding what the government is doing; the concern is that its effects will differ depending on how much power a given population has in the community.

Those are a few of the things our analysis revealed; you saw a fuller discussion in the reading. It also discussed how producing and running these algorithms requires so much energy, and how those energy costs are not likely to be equally borne, which is likely to increase environmental injustice and social inequality as well. These are the kinds of things that this analogical case study method is designed to surface through anticipation: to identify the different dimensions of equity at stake in a particular emerging technology, so that tech developers and policymakers can learn how to address them. That is what we tried to do in that report; at the end you see recommendations for both of those communities. There are certainly things that regulators can do, which I'll talk about in a second, but there is also a lot that research funding agencies can do: ensuring there is funding for public large language models, for example, and requiring the explicit incorporation of different kinds of texts so that the algorithms learning from the data learn more equitable lessons from it. There are things we can do far upstream to shape these models, because once they are rolled out they are extraordinarily complex and essentially impossible to fix, which makes it even more important to address these issues upstream in the process.

When we talk about shaping at these upstream levels, we are talking, as I suggested, about mission-oriented research programs and about interdisciplinary projects that bring together the humanities and social sciences with the sciences and engineering. We are also talking about community engagement. Very famously, breast cancer survivors have sat on technical peer review panels, and AIDS patients and people who are HIV positive have done the same; it is possible, and it often leads to very fruitful, collaborative results. Another example is Flint, where, after the water crisis that destabilized and affected so much of the city, citizens said: if you want to come in and do research, we want to make sure you are doing research that actually benefits us. They did not allow researchers, including those from the University of Michigan, to simply come in.
What they said was: we'll work with you to find a mutually beneficial arrangement, and if you sign on and we work together, we'll also help you get funding; we'll be invested in this too. In those kinds of contexts you get, essentially, a different kind of peer review.

You can also imagine different kinds of standards. We tend to have standards for very technical things, but we can think about standards around equity as well: requiring, for example, that facial recognition technologies or large language models have a certain diversity in their datasets. There are ways of thinking about this that are similar to existing requirements around data collection and retention or conflicts of interest, so these ideas aren't new, and I think that is so important to keep in mind.

And then, finally, we come to the regulatory tools. As I've suggested, we tend to focus almost exclusively on the regulatory tools, and what I hope the readings and this conversation have shown is that there are a lot of other options. But let's talk about the regulatory tools as well. In general, around the world, we are pretty good at maintaining strict rules about which pharmaceuticals are approved: we want to make sure they are safe and effective. We are pretty good at making sure new technologies won't have toxic impacts on the environment. We have processes inside universities, known as institutional review boards, that ensure research is conducted ethically. But we haven't really developed these kinds of mechanisms to address questions of equity. You read a couple of pieces from the Ada Lovelace Institute that offer suggestions for how we might do this.

One set of interventions, called algorithmic risk assessment, would look very much like ordinary pharmaceutical or drug approval: before you deploy something, you as the developer have to submit data, there is a clear set of criteria, experts review the submission against those criteria, and decisions are made accordingly. Right now we don't have any regulatory system of this kind for these technologies, so even having that would be important. Some places around the world are thinking about it and developing it, but we are still in the process of getting there. That is the pre-deployment option.

The other option is post-deployment, where you might do impact evaluation. To go back to our conversations about responsiveness, here you are talking about responsiveness once the technology is out in the world and you realize, hey, wait a second: something about this algorithm is disproportionately harming already marginalized communities. But that requires the technology to be fixable, to be changeable. You have to have a warning system: someone you can call or email, a way to file a complaint, and then the ability to actually change the technology. You also need some sort of governance review process, so that people can review the complaint, make sure it is accurate, and proceed accordingly. You could imagine, as part of this, a post-market surveillance program; a sketch of what the record-keeping behind such a process might look like follows.
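Here is a minimal sketch of what the record-keeping behind that kind of complaint and governance review process might look like. The field names, severity scale, and triage rule are illustrative assumptions, not drawn from any existing regulatory scheme or from the Ada Lovelace Institute proposals.

```python
# Sketch of a post-deployment incident log for an algorithmic system.
# All field names and the triage rule below are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Incident:
    reported_on: date
    system: str            # which deployed model or tool
    reporter: str          # e.g., affected person, caseworker, auditor
    affected_group: str    # community or population affected
    description: str
    severity: int          # 1 (minor) to 5 (serious harm)
    resolved: bool = False

@dataclass
class IncidentLog:
    incidents: list = field(default_factory=list)

    def report(self, incident: Incident) -> None:
        self.incidents.append(incident)

    def needs_governance_review(self) -> list:
        # Toy triage rule: unresolved incidents at severity 4 or above go
        # to the governance review board described above.
        return [i for i in self.incidents if not i.resolved and i.severity >= 4]

log = IncidentLog()
log.report(Incident(date(2023, 5, 1), "benefits-eligibility-model",
                    "caseworker", "applicants with disabilities",
                    "eligible applicants flagged as ineligible", severity=4))
print(len(log.needs_governance_review()), "incident(s) awaiting review")
```

The substantive work, of course, lies in who staffs the review board and what powers it has; the data structure just makes the reporting obligation concrete.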
We already have post-market surveillance of this kind when it comes to pharmaceuticals, and in different countries those systems vary: in some places they are quite strict, with an obligation for physicians, for example, to report adverse events to the national drug authority; in other places the system is looser. Either way, it is another way to ensure that there is some sort of review. The challenge with these kinds of methods, though, is that you don't always know. If it's a drug and you have an adverse event, you probably know, or at least have an inkling, that the drug caused it. With a machine learning algorithm, it's often really hard to know what counts as accurate or inaccurate, correct or incorrect. That is why, while these kinds of interventions can be useful, it is often more useful to have these conversations very early in the development process.

And the overarching point I want to make is that just because these are the kinds of decisions that get made early in the development process doesn't mean that government has no role to play. In fact, policy is crucial at every step of the process, and what I hope the readings and this lecture have given you are some tools for thinking about where policy might make an important intervention, to ensure that the technologies we develop enhance rather than detract from equity.