Then I've got this, which, when I pre-multiply by one over the determinant, is in fact the inverse of A.

So we've proved here that the inverse you learned in school is in fact correct.

But the interesting thing is that this determinant there tells us how much the matrix scales space.

If we then take this matrix, when we do the flipping around we haven't changed its scaling of space, so we need to undo that scaling and bring it back down to a scale of one.

So the determinant here is what we need to divide the inverse matrix by in order for it to properly be an inverse.
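As a quick sketch of that idea (in Python with NumPy, using made-up numbers for a, b, c, and d), the school formula flips the matrix around and then divides by the determinant:

```python
import numpy as np

# Made-up entries for a hypothetical 2x2 matrix A = [[a, b], [c, d]].
a, b, c, d = 1.0, 2.0, 3.0, 4.0
A = np.array([[a, b], [c, d]])

det = a * d - b * c                  # the determinant, ad - bc
flipped = np.array([[d, -b],         # the "flipped around" matrix
                    [-c, a]])

A_inv = flipped / det                # divide by the determinant

# Multiplying A by this candidate inverse gives the identity,
# confirming the school formula really is the inverse.
print(np.allclose(A @ A_inv, np.eye(2)))  # True
```

Without the division by ad minus bc, the product of A with the flipped matrix comes out scaled by the determinant rather than being the identity.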

Now, we could spend another video looking at the extension of this idea to find out how to compute determinants in the general case.

But this is both tricky to show and it's pointless.

Knowing how to do the operations by hand isn't a useful skill anymore, because we just type det(A) into our computer, and Python or MATLAB will do it for us.
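For instance, in Python with NumPy (using a made-up 3-by-3 matrix just for illustration), finding the determinant is a one-liner:

```python
import numpy as np

# A made-up 3x3 matrix, just for illustration.
A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 1.0]])

# The library does the general-case computation for us.
print(np.linalg.det(A))  # 7.0, up to floating-point rounding
```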

From a learning perspective, it doesn't add much to the row echelon process we slowly went through.

I'm not going to teach you how

to find determinants in the general case.

If you want to know, look up QR decomposition and follow that through; that's how you go and find it out computationally. A linear algebra textbook is the other place to look.

That's how you do it in the general case.

Now let's think about this matrix A here.

It transforms e1 hat and e2 hat to two points on the same line.

It transforms e1 hat to 1,1,

and it transforms e2 hat from there over to 2,2.

They're both points on the same line; they are just a multiple of each other, so they're not linearly independent.

What this matrix in fact does is transform every point in space onto a line.

Notice that the determinant of that matrix is going to be zero: if I take ad minus bc, the determinant of A is nought, because any area has gone onto a line, and therefore that area is nought.

So whether you compute it geometrically or computationally, you get a determinant of nought.
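Checking this computationally is straightforward; here's a small NumPy sketch of the matrix described above, whose columns (1,1) and (2,2) are multiples of each other:

```python
import numpy as np

# e1 hat goes to (1,1) and e2 hat goes to (2,2): the columns are
# linearly dependent, so all of space collapses onto a line.
A = np.array([[1.0, 2.0],
              [1.0, 2.0]])

# ad - bc = 1*2 - 2*1 = 0: the enclosed area is nought.
print(np.linalg.det(A))  # 0.0
```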

So if I had a three by three matrix

with a similar situation describing a 3D space,

and if I had the same situation where one of the new basis vectors was just a linear multiple of the other two, so it wasn't linearly independent, then that would mean the new space was

either a plane or if there

was only one independent basis vector,

a line like we have here.

In either case, the volume enclosed would be zero,

so the determinant would be zero.

Now, let's turn back to our row echelon form and apply that idea.

Let's take this set of simultaneous equations say.

You can see that the third row

is just the sum of the first two.

So row three is equal to row one plus row two.

You can see that column three is just given by two times column one plus column two.

You might want to pause for a moment to verify that that really is true.

So this transformation matrix

doesn't describe three independent basis vectors.

One of them is linearly dependent on the other two.

So this doesn't describe

any 3D space it collapses it into a 2D space.

So let's see what happens when I try to reduce this to row echelon form.

If I take off the first row from the second,

okay so far so good,

I've got that my a,b,c stays the same,

and I take the first one off the second one.

So I take 12 off 17, and I get five.

If I then take the first

and second ones off the third one,

I've now got zeros everywhere here.

If I do that on the right-hand side, I take 12 and 17 off 29, and I get zero here.

So now I've got zero c equals zero, which is sort of true but not useful. There's then an infinite number of solutions for c; in effect, any value of c would work.

So now I can't solve my system of equations anymore,

I don't have enough information.
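We can see the same failure computationally. This NumPy sketch uses made-up coefficients chosen so that, as in the text, row three is row one plus row two and column three is two times column one plus column two; the exact values in the original example may differ:

```python
import numpy as np

# Hypothetical coefficients with the structure described in the text:
# row 3 = row 1 + row 2, and column 3 = 2*column 1 + column 2.
A = np.array([[1.0, 1.0, 3.0],
              [1.0, 2.0, 4.0],
              [2.0, 3.0, 7.0]])
s = np.array([12.0, 17.0, 29.0])   # and 12 + 17 = 29 on the right

print(np.linalg.matrix_rank(A))    # 2: only two independent rows, not three
try:
    np.linalg.solve(A, s)
except np.linalg.LinAlgError:
    print("singular matrix: not enough information to solve")
```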

So if we think about this from

solving simultaneous equations point of view,

my mistake was when I went into

the shop to buy apples, bananas,

and carrots, the third time I went in

I just ordered a copy of my first two orders.

So I didn't get any new information

and I don't have enough data therefore to find

out the solution for how much individual apples, bananas, and carrots cost.

So what we've shown is that where the basis vectors describing the matrix aren't linearly independent, then the determinant is zero,

and that means I can't solve the system

of simultaneous equations anymore.

Which means I can't invert the matrix because I

can't take one over the determinant either.

That means I'm stuck: this matrix has no inverse.

So there are situations where I might want to do

a transformation that collapses the number

of dimensions in the space but that will come at a cost.

Another way of looking at this is that

the inverse matrix lets me undo my transformation,

it lets me get from the new vectors

to the original vectors.

If I've dropped a dimension by turning a 2D space into a line,

I can't undo that anymore,

I don't have enough information because I've

lost some of it during the transformation,

I've lost that extra dimension.

So in general, it's worth checking before you propose

a new basis vector set and then use

a matrix to transform your data vectors,

that this is a transformation you can undo,

by checking that the new

basis vectors are linearly independent.
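One way to sketch that check in NumPy (a hypothetical helper, not part of the original lecture): a matrix's columns are linearly independent exactly when its rank equals the number of columns.

```python
import numpy as np

def is_undoable(transform):
    """Hypothetical helper: True if the transformation can be undone,
    i.e. the columns (the new basis vectors) are linearly independent."""
    transform = np.asarray(transform, dtype=float)
    return bool(np.linalg.matrix_rank(transform) == transform.shape[1])

print(is_undoable([[1.0, 0.0], [0.0, 1.0]]))  # True: independent columns
print(is_undoable([[1.0, 2.0], [1.0, 2.0]]))  # False: collapses onto a line
```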

So what we've done in this last video of this module is look at the determinant: how much we grow space.

We've also looked at the special case

where the determinant is zero,

which means that the basis vectors aren't linearly independent,

which means the inverse doesn't exist.

In this first module on matrices, what we've done so far is define what a matrix is: something that transforms space.

We've looked at different archetypes of matrices, like rotations and inverse [inaudible]

and shears and how to

combine them by doing successive transformations.

We've looked at how to solve systems of linear equations,

by elimination and how to find inverses.

Then finally, we've looked at

determinants and linear independence.