[MUSIC] Hi everyone, this is Prof. Yoon Yong Jin from KAIST, and this is Math for AI Beginners, Part One, which is linear algebra, Session 3 of Week 6. So up to now we have studied diagonalization, and with an example, let's now study how we can use diagonalization in some other cases. For example, if a square matrix A can be diagonalized, it is relatively easy to compute A to the m for a high power — for example, A to the 30, A to the 50, A to the 100, like that. So why is this important? If you use AI, in deep learning we have a lot of multiplication inside, which means a lot of matrix multiplications. So if we have a certain algorithm, where here one algorithm means one matrix, and you want to calculate A to the 30, then if that square matrix is diagonalizable, we can easily calculate A to the 30 by using diagonalization. Let's see how to do that. Let's assume that A can be diagonalized like this: A = PDP⁻¹. Here the columns of P are the eigenvectors of A, and D is the diagonal matrix of the eigenvalues of A. For example, to find A to the 5, you just take PDP⁻¹ times PDP⁻¹, like this, five times. Here, P⁻¹ times P becomes the identity matrix. So we can easily rewrite this form as P D I D I D ... like this, and this means P times D to the 5 times P⁻¹. So in general, A to the m equals P times D to the m times P⁻¹, and here D to the m is very easily calculated because D is a diagonal matrix. For example, if D = diag(a, b, c), then D to the m is just each diagonal element to the m. For example, D to the 20 is diag(a to the 20, b to the 20, c to the 20). So only the diagonal elements, right? We just take the power of each diagonal element. So for example, say A is a 3-by-3 matrix and is diagonalizable such that A equals P times diag(1, 2, 3) times P⁻¹.
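The power trick above can be sketched in a few lines of NumPy. This is a minimal illustration, where P is a made-up invertible matrix (not from the lecture slides) and D = diag(1, 2, 3) as in the example:

```python
import numpy as np

# A = P D P^-1, so A^m = P D^m P^-1.
# P here is a hypothetical example: its columns act as eigenvectors.
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
D = np.diag([1.0, 2.0, 3.0])            # eigenvalues on the diagonal
A = P @ D @ np.linalg.inv(P)

# D^20 costs only three scalar powers, instead of 19 full matrix products.
D20 = np.diag(np.diag(D) ** 20)
A20_fast = P @ D20 @ np.linalg.inv(P)

# Compare against repeated matrix multiplication.
A20_slow = np.linalg.matrix_power(A, 20)
print(np.allclose(A20_fast, A20_slow))  # prints True
```

The speed-up is the whole point: for an n-by-n matrix, each full matrix product is O(n³), while powering D touches only the n diagonal entries.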
And here, what you can see is that we can directly know that the eigenvalues of A are 1, 2, 3, right? And the P inside PDP⁻¹ contains the corresponding eigenvectors, which are linearly independent vectors, right? And from here the questions are: what are the eigenvalues of A⁻¹? What can you say about the eigenvectors of A⁻¹? And what is the determinant of A⁻¹? If you have the diagonalization of A, you can also find the eigenvalues of A⁻¹, the eigenvectors of A⁻¹, and the determinant of A⁻¹. Let's do it. First, to figure out the eigenvalues of A⁻¹: because we have A = PDP⁻¹, we just take the inverse of both sides, and use (AB)⁻¹ = B⁻¹A⁻¹. We take D times P⁻¹ as one part, so A⁻¹ = (P · DP⁻¹)⁻¹ = (DP⁻¹)⁻¹ · P⁻¹, and applying the rule again, this becomes (P⁻¹)⁻¹ · D⁻¹ · P⁻¹. And (P⁻¹)⁻¹ is just P. So it becomes A⁻¹ = P times diag(1, 2, 3)⁻¹ times P⁻¹. And the inverse of a diagonal matrix is what? If the diagonal matrix is diag(a, b, c), then its inverse is just diag(1/a, 1/b, 1/c), like that. So here, because our diagonal matrix is diag(1, 2, 3), we know that its inverse is diag(1, 1/2, 1/3). So we found that A⁻¹ = P times diag(1, 1/2, 1/3) times P⁻¹. So the eigenvalues of A⁻¹ are 1, 1/2, and 1/3, and the eigenvectors of A⁻¹ corresponding to 1, 1/2, and 1/3 are the eigenvectors of A corresponding to 1, 2, and 3, respectively. So this is a very interesting phenomenon. And how about the determinant? The determinant of A⁻¹ is the determinant of P times diag(1, 1/2, 1/3) times P⁻¹.
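The eigenvalue and eigenvector claims above can be checked numerically. A small sketch, reusing a hypothetical P (any invertible matrix with the right columns would do):

```python
import numpy as np

# If A = P D P^-1 with D = diag(1, 2, 3), then A^-1 = P D^-1 P^-1:
# same eigenvectors, reciprocal eigenvalues 1, 1/2, 1/3.
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])          # columns: eigenvectors of A
D = np.diag([1.0, 2.0, 3.0])
A = P @ D @ np.linalg.inv(P)

A_inv = np.linalg.inv(A)
D_inv = np.diag(1.0 / np.diag(D))        # diag(1, 1/2, 1/3)
print(np.allclose(A_inv, P @ D_inv @ np.linalg.inv(P)))  # prints True

# Each column p of P is still an eigenvector: A^-1 p = (1/lambda) p.
for lam, p in zip(np.diag(D), P.T):
    print(np.allclose(A_inv @ p, (1.0 / lam) * p))       # prints True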
And the determinant of AB is just the determinant of A times the determinant of B. So we can decompose this into det(P) times det(diag(1, 1/2, 1/3)) times det(P⁻¹), and the determinant of P⁻¹ is one over the determinant of P. So we can cancel out the determinant of P. So we found that the determinant of A⁻¹ is just the determinant of the diagonal matrix diag(1, 1/2, 1/3), which is 1 times 1/2 times 1/3, equal to 1/6. So this is a very interesting phenomenon. Here you can see that you can easily find the determinant of a square matrix A by using this diagonalization: you just take the determinant of D, and then the determinant can be easily found. So now that we have studied all this content, we are more ready to understand deep learning and the support vector machine. Let's quickly wrap up deep learning and the support vector machine again, and see how we can use our linear algebra. As you remember, in the first week of our class, we studied those four questions: what is AI, why did AI come out recently, how are AI and the Fourth Industrial Revolution related, and why do we need to know math for AI? Right. So far we have studied linear algebra — how to formulate and how to play with matrices. So, consider a single-layer artificial neural network for shallow learning. We try to figure out the black box that estimates the relation between the sleep hours, exercise, and diet calories as inputs, and the weight and blood pressure of a person as outputs, like this, right? So those inputs and outputs are related to each other, but we don't know how they are related, right? So by using big data — Andy's, Sandy's, and Tony's cases — we can find the matrix which is the black box, and which is called the AI engine for this shallow learning. So finding this black box and finding the matrix are the same concept. For the input layer and the output layer, we just put in the big data for input and output, right?
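The determinant cancellation above is easy to verify in code. A short sketch with the same hypothetical P as before:

```python
import numpy as np

# det(A^-1) = det(P) * det(D^-1) * det(P^-1), and det(P^-1) = 1/det(P),
# so everything except det(D^-1) = 1 * 1/2 * 1/3 = 1/6 cancels out.
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])          # hypothetical eigenvector matrix
D = np.diag([1.0, 2.0, 3.0])
A = P @ D @ np.linalg.inv(P)

det_A_inv = np.linalg.det(np.linalg.inv(A))
print(round(det_A_inv, 6))               # prints 0.166667, i.e. 1/6
```

The same shortcut works for det(A) itself: det(A) = det(D) = 1 × 2 × 3 = 6, just the product of the eigenvalues.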
So for example, in this case, the inputs x1, x2, x3 are 6, 3, and 2500, like this, and the weight and blood pressure for Andy are y1 and y2, which are 70 and 110. And finding this artificial neural network means finding the matrix between the input and output. So this matrix is a three-by-two matrix in this case. So this is the same problem: finding the artificial neural network, the AI engine, is the same as finding the matrix between the input and output layers. So we need to estimate the matrix elements by using the output estimation: we just put in some initial matrix, compare its estimation with the big data, and then by using optimization we can find the more exact AI engine, which is the three-by-two matrix here. If you have more layers between the input layer and the output layer, we call it deep learning. So here we have three different matrices: we need to find a three-by-two matrix, a two-by-four matrix, and a four-by-two matrix here. These matrices are what we need to find, and that is deep learning — in other words, we find the deep neural network engine from this big data. Because the depth from the input layer to the output layer is deep, we call it deep learning, right? And the other example I brought up in the first week is the support vector machine, so let's try to understand the support vector machine. The support vector machine is simply a separation line, right? It separates two different data sets in big data. So in this case we can also use vectors and matrices with some optimization. Among machine learning methods, the support vector machine is mostly used for classification. So let's review the support vector machine again here. In the 2-D case, we try to find a line which can differentiate the class-one and class-two data points, right? So how do we choose this line? We try to maximize the margin by using this linear algebra, and also, as we will learn later, optimization.
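The shallow and deep engines above are just matrix multiplications applied to the input data. A minimal sketch, where all weight values are invented placeholders (not trained values from the lecture), and the matrices are written in NumPy's rows-by-columns convention:

```python
import numpy as np

# Hypothetical shallow "AI engine": one matrix mapping the 3 inputs
# (sleep hours, exercise, diet calories) to the 2 outputs
# (weight, blood pressure). Weights here are made-up placeholders.
W = np.array([[2.0, -1.5, 0.02],
              [5.0,  2.0, 0.03]])        # 2 outputs x 3 inputs

x = np.array([6.0, 3.0, 2500.0])         # one person's input data
y = W @ x                                # estimated [weight, blood pressure]
print(y.shape)                           # prints (2,)

# Deep version: more layers between input and output, one matrix per
# layer, e.g. 3 -> 4 -> 2 as in the lecture's three-layer picture.
W1 = np.full((4, 3), 0.1)                # placeholder hidden-layer matrix
W2 = np.full((2, 4), 0.1)                # placeholder output-layer matrix
y_deep = W2 @ (W1 @ x)
print(y_deep.shape)                      # prints (2,)
```

Training, i.e. adjusting these placeholder entries so the outputs match the big data, is exactly the optimization problem deferred to Part Two.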
So here we can formulate the support vector machine line, or plane, as w-transpose x plus b equals 1, 0, and −1. Again, now we understand what the transpose of the vector w is here. Okay, so w is a kind of normal vector for the plane: in the 2-D case it is a line, in the 3-D case it becomes a plane, and in higher dimensions it becomes a hyperplane. So here we try to find the wᵀx + b = 0 which maximizes the margin. This is a short, brief introduction to how to calculate the maximum margin, that is, how to calculate the wᵀx + b = 0 which has the maximum margin. This maximum margin is two over the norm of w. Again, we already learned about the norm of the vector w, right? So we can now easily understand: okay, to find the support vector machine which maximizes the margin between the two data sets, we need to maximize the margin, which means two over the norm of w, right? So it can become a minimization problem again: minimizing one half times wᵀw, which is equivalent to minimizing the norm of w. And we have constraints here, so we have an objective function here and constraints here. We need to solve a somewhat complex quadratic optimization problem, which needs the vector calculus in the second part. So we now partly understand deep learning and the support vector machine — this kind of artificial neural network or AI — because we have only studied linear algebra here. To understand more, we need to study vector calculus, because vector calculus is related to optimization. As in the first example, the deep learning I showed you also needs optimization to find the matrices, the m-by-n matrices which are the AI engines for deep learning, and the support vector machine also needs optimization. So in the second part we will study this optimization, which is related to vector calculus. Okay, let's meet in Part Two in the near future.
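The margin formula above is simple to compute once w is known. A small sketch with made-up example values for w and b (not from the lecture):

```python
import numpy as np

# For a separating line w^T x + b = 0, with support planes at
# w^T x + b = +1 and -1, the margin between the two classes is 2 / ||w||.
w = np.array([3.0, 4.0])                 # hypothetical normal vector
b = -2.0                                 # hypothetical offset

margin = 2.0 / np.linalg.norm(w)         # ||w|| = 5, so margin = 0.4
print(margin)                            # prints 0.4

# Maximizing 2 / ||w|| is the same as minimizing the quadratic
# objective (1/2) w^T w, which is what the SVM optimization solves.
objective = 0.5 * w @ w
print(objective)                         # prints 12.5
```

A smaller ‖w‖ gives a larger margin 2/‖w‖, which is why the maximization of the margin turns into a minimization of (1/2) wᵀw subject to the classification constraints.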
So far, thank you very much for following our online class, and see you in the next one, Part Two, vector calculus for AI math. Thank you.