Now, let's think about how I can apply this idea of elimination to find the inverse matrix, which solves the more general problem no matter what vectors I write down on the right-hand side. Say I have a 3-by-3 matrix A and its inverse B, which I multiply together to get the identity matrix I. As before, the matrix A is 1 1 3, 1 2 4, 1 1 2, and I'm going to introduce some notation: I'll call B the matrix composed of elements b11, b12, b13, then b21, b22, b23, then b31, b32, b33, where the first digit gives the row and the second digit gives the column. So A times B equals I. What we're saying here is that B is actually the inverse of A. That is, if I multiply A by its inverse, I get the identity. And actually the inverse is special because I can do it either way around: I can apply it on the right or on the left of A and it'll still work, so it doesn't really matter which way round I do it. But I'm writing it this way here so I've got some b's to play with and I won't get confused between my a's and my b's. So for this example, B is the inverse matrix of A. Now, notice that this first column of B, b11, b21, b31, is just a vector. It's a vector that describes what the matrix B, the inverse of A, does to space; in fact it's where B takes the first axis, the x-axis if you like. The identity matrix is just 1 0 0, 0 1 0, 0 0 1. So I could write down A, that big square guy, times the column b11, b21, b31, and that would have to equal just the first column of the identity matrix, 1 0 0. Now, I could solve that by my elimination method and back substitution just the way I did before, except I'd be juggling different numbers.
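To make that column-by-column idea concrete, here's a short sketch in Python with NumPy, using the matrix from the example. The `np.linalg.solve` call stands in for the elimination-and-back-substitution steps done by hand:

```python
import numpy as np

# The matrix A from the example.
A = np.array([[1.0, 1.0, 3.0],
              [1.0, 2.0, 4.0],
              [1.0, 1.0, 2.0]])

# Solve A @ b = e_j for each column e_j of the identity in turn;
# the solutions stacked side by side are the columns of B = A^-1.
I = np.eye(3)
B = np.column_stack([np.linalg.solve(A, I[:, j]) for j in range(3)])

print(B)
```

Each solve here is one run of elimination and back substitution with a different right-hand side, which is exactly the "in series" approach described next.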
Then I could do it again for the second column of B against the second column of the identity matrix, and again for the third. I'd be doing that in series, if you like, but here's the fun bit: I could do it all at once. So I'm going to do this process of elimination and back substitution for all the columns on the right-hand side simultaneously. If I take the first row off the second row, as before, I get rid of its leading 1, and if I take the first row off the third row, I get rid of both of the ones at the start of that row. So the first row stays unaltered, 1 1 3; taking it off the second row gives 0 1 1; and taking it off the third row gives 0 0 minus 1. On the right-hand side, the first row is unaltered, 1 0 0; taking it off the second row gives minus 1 1 0; and taking it off the third row gives minus 1 0 1. Now I can multiply that third row through by minus 1, flipping the signs to give 0 0 1 on the left and 1 0 minus 1 on the right, and that's the form I want: ones on the leading diagonal and zeros below it. Now I can substitute that third row back into the second and first rows, so I take one of it off the second row and three of it off the first row. The unaltered one is now the bottom row, with 1 0 minus 1 on the right. Taking it off the second row gives 0 1 0 on the left; on the right, taking 1 off of minus 1 gives minus 2, taking 0 off of 1 leaves 1, and taking minus 1 off of 0 effectively adds 1, giving 1. Then I want to do the same to the first row: take three of the third row off it to make the 3 there zero.
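Doing it all at once just means running the row operations on the augmented matrix [A | I], so every column of the identity rides along simultaneously. A minimal sketch of the forward-elimination steps above, assuming NumPy:

```python
import numpy as np

A = np.array([[1.0, 1.0, 3.0],
              [1.0, 2.0, 4.0],
              [1.0, 1.0, 2.0]])

# Augment A with the identity: [A | I].
aug = np.hstack([A, np.eye(3)])

# Forward elimination, mirroring the row operations in the text.
aug[1] -= aug[0]      # take row 1 off row 2
aug[2] -= aug[0]      # take row 1 off row 3
aug[2] *= -1.0        # multiply row 3 through by minus 1

# The left block is now upper triangular with ones on the diagonal.
print(aug)
```

At this point the left block is [[1, 1, 3], [0, 1, 1], [0, 0, 1]] and the right block is [[1, 0, 0], [-1, 1, 0], [1, 0, -1]], matching the working above.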
So I've then got 1 1 0 on the left of the first row, and taking three of the third row off on the right: 3 off of 1 gives me minus 2, nothing comes off the 0, and three of minus 1 off of 0 gives me plus 3. We're nearly there; I've just got this irritating 1 in the first row, second column, so I take the second row off the first row and I'll be home. My third row is unaltered, with 1 0 minus 1 on the right, and my second row is unaltered too, minus 2 1 1. Taking the second row off the first gives me 1 0 0 on the left; on the right, taking minus 2 off of minus 2 gives 0, taking 1 off of 0 gives minus 1, and taking 1 off of 3 gives 2. So that's my answer. I've now got the identity matrix where A was: in effect, I've transformed A into the identity matrix. The B matrix I haven't really changed, but the identity matrix on the right-hand side I have changed. So now I've got the identity times B equal to this new matrix, and the identity times something is just that something. So this is in fact my answer for B, the inverse of A. I've found a way here to find the inverse of a matrix just by doing my row elimination and then my back substitution, which is really cool. So what we've done is find an inverse matrix, A to the minus 1, and if we multiply A by A to the minus 1 we'll get the identity. Prove it to yourself if you like: pause for a moment, have a go at multiplying them together, and verify that you do in fact get the identity matrix. In school you probably did this a different way, but computationally this way is much easier, particularly when you come to higher dimensions, a hundred rows and columns or something like that. There are even faster computational methods based on what's called a decomposition process.
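The back-substitution steps can be sketched the same way, continuing from the forward-eliminated augmented matrix (again assuming NumPy; the comments name the row operations from the walkthrough):

```python
import numpy as np

A = np.array([[1.0, 1.0, 3.0],
              [1.0, 2.0, 4.0],
              [1.0, 1.0, 2.0]])
aug = np.hstack([A, np.eye(3)])

# Forward elimination, as before.
aug[1] -= aug[0]
aug[2] -= aug[0]
aug[2] *= -1.0

# Back substitution: clear the entries above the diagonal.
aug[1] -= aug[2]          # take one of row 3 off row 2
aug[0] -= 3.0 * aug[2]    # take three of row 3 off row 1
aug[0] -= aug[1]          # take row 2 off row 1

# The left block is now the identity, so the right block is A^-1:
# [[0, -1, 2], [-2, 1, 1], [1, 0, -1]]
B = aug[:, 3:]
print(A @ B)              # the identity, verifying B really is the inverse
```

The final `A @ B` check is the "prove it to yourself" step from the text, done by the computer.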
And in practice, in any program that you write, you simply call the solver for your problem, a function something like inv(A) or whatever it is, and it will pick the best method by inspecting the matrix you give it and return the answer. But the point here is to show how these problems are actually solved in a computer, and also to observe some of the properties of these methods in different circumstances, properties that will affect the sorts of things we want to do when solving these sorts of problems. So what we've done here is figured out how to solve sets of linear equations in the general case, by a procedure we can implement in a computer really easily. And we've made that general by finding a general method for the inverse of a matrix, one that works whatever is on the right-hand side of our system of equations, and hopefully that's really satisfying and really nice.
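In NumPy, for instance, that one-line call is `np.linalg.inv`, which uses an LU decomposition under the hood. A small sketch (the vector s is just an example I've made up for illustration):

```python
import numpy as np

A = np.array([[1.0, 1.0, 3.0],
              [1.0, 2.0, 4.0],
              [1.0, 1.0, 2.0]])

# The library routine picks a suitable decomposition-based method for you.
B = np.linalg.inv(A)

# If you only need to solve A x = s for one particular right-hand side,
# solving directly is cheaper and more numerically stable than forming
# the inverse and multiplying. (s here is just an example vector.)
s = np.array([5.0, 7.0, 4.0])
x = np.linalg.solve(A, s)
```

This is the usual advice in practice: reach for `solve` when you have a specific right-hand side, and compute the full inverse only when you genuinely need it.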