Introduction to Inner Product Spaces

Norm and Distance

Cauchy–Schwarz Inequality

Orthogonality

Orthogonal Complement

Orthogonal Sets and Bases

Gram–Schmidt Process

Orthogonal Matrices

Orthogonal Transformations

The Spectral Theorem


Lesson 1 - Intro - Inner Product (10m 40s)

We're starting a new topic in linear algebra: the concept of an inner product space, and also of an inner product. As a general table of contents: we'll start with a preface or introduction to what an inner product is and what an inner product space is, then we'll define things a little more formally, and then we'll work through some examples. One very important prerequisite: you have to have studied vector spaces, because an inner product space builds on a vector space. So let's get started.

Okay, the preface; it's like an introduction. An inner product is something we define on a vector space, and it's a function. That function takes in two vectors and spits out a scalar, a number; it's a function of two variables. Rather than giving it a letter like f, we have a special notation with angular brackets: the inner product of u and v is written ⟨u, v⟩. As I said, we take two vectors, u and v, and their inner product is a scalar.

But not just any function which takes two vectors and gives us a scalar is an inner product. There are certain conditions; in fact there are going to be four conditions, and we'll see them later.

Now, we always start with a vector space. So in our first example we'll take the vector space to be R2, the 2D plane. And let's take a pair of vectors, u and v. We know what they look like in R2: just pairs of numbers. We'll call them (u1, u2) and (v1, v2). If we were being strict about it, we would write them as column vectors, but we've already agreed that, to save space, it's more convenient to write them in a row than in a column.

Now I have to tell you how to multiply them, that is, how to get the inner product of these two. So I'm suggesting the following. For the inner product of u and v, what we do is multiply u1 with v1 and add to that the product of u2 and v2:

⟨u, v⟩ = u1·v1 + u2·v2

Those of you who have studied physics and know what a dot product is: this is very much like a dot product, almost the same thing. And if you haven't heard of it, never mind.

To get very specific, let's say u is (1, 2) and v is (3, 4). Then the inner product of u and v is 1 times 3 plus 2 times 4, which is 11, with this definition of the inner product. So our function takes two vectors, this one and this one, and gives a scalar: the two vectors are the input, and the scalar is the output.
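As a quick illustration (this sketch is my own addition, not part of the lesson; the function name `inner` is just a convenient choice), the definition above and the worked example look like this in code:

```python
def inner(u, v):
    """Standard inner product on R^2: <u, v> = u1*v1 + u2*v2."""
    u1, u2 = u
    v1, v2 = v
    return u1 * v1 + u2 * v2

# The worked example from the lesson: u = (1, 2), v = (3, 4).
print(inner((1, 2), (3, 4)))  # 1*3 + 2*4 = 11
```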
Now, I mentioned before that not just any function that takes two vectors and gives a scalar is going to be an inner product. It has to satisfy certain conditions, and now I'm going to give you the four conditions.

The first condition is that for any u and v, the inner product of u and v is the same as the inner product of v and u: ⟨u, v⟩ = ⟨v, u⟩. In other words, the order doesn't matter. So let's check whether this condition holds for our inner product. We took u and v as before, and we actually computed earlier that the inner product of u and v gave us 11. Now let's try doing it the other way. For the inner product of v and u, this time we reverse the order of the vectors: (3, 4) and then (1, 2), which gives 3 times 1 plus 4 times 2, and it also comes out 11. This was no big surprise, because really we're getting the same products, just in a different order: where we had 1 times 3 we have 3 times 1, and where we had 2 times 4 we have 4 times 2. Obviously this is going to be true in general.

And by the way, I should have mentioned this upside-down A. I presumed you know what it is, but in case you don't, it means "for all". So here: for all u and v. And that other symbol is the set-theory "belongs to": for all u and v in R2.

The second rule: for any u and v, and for any alpha (alpha will be a scalar), the inner product of alpha times u with v is the same as alpha times the inner product of u with v: ⟨αu, v⟩ = α⟨u, v⟩. So if we multiply alpha by u and then take the inner product of these two vectors, that's the same as taking the inner product of the two vectors, getting a scalar, and then multiplying it by alpha.

So let's check. I'll take the same u and v as before, and keep alpha general. I could have taken, say, alpha equals 7 or some other number, but I'll leave it as a general scalar. If I evaluate alpha times u, it's alpha times (1, 2), which is (α, 2α).

Okay, now let's see if the two sides are equal. First, the left-hand side: ⟨αu, v⟩, with αu from here and v from here. That's equal to this times this, which is 3α, and this with this gives us 8α; altogether, 11α. Now the right-hand side: first we take the inner product of u with v, which is this with this, and actually we've done this calculation before; it comes out 11, and alpha times 11 is 11α. So we get the same result both ways, and the second rule is satisfied.

And I said there are four, so now we come to the third rule. What it says: if we have u, v and w, and I take the inner product of u + v with w, it's the same thing as if I take the inner product of u with w, then of v with w, and add those two: ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩. Let's see an example. Let's take u = (1, 2), v = (3, 4), and w = (5, 6).
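Before working through the third rule's example, the first two rules can be spot-checked numerically. This is my own sketch, not part of the lecture; the lesson keeps alpha general, so the sample value α = 7 here is just a choice (one the lecturer mentions as a possibility):

```python
def inner(u, v):
    # Standard inner product on R^2.
    return u[0] * v[0] + u[1] * v[1]

u, v = (1, 2), (3, 4)

# Rule 1 (symmetry): <u, v> = <v, u>
print(inner(u, v), inner(v, u))  # 11 11

# Rule 2 (homogeneity): <alpha*u, v> = alpha * <u, v>
alpha = 7  # sample scalar
alpha_u = (alpha * u[0], alpha * u[1])
print(inner(alpha_u, v), alpha * inner(u, v))  # 77 77
```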
And I'm going to need u + v; that comes out to this plus this, which is (4, 6). Okay, we'll do the left-hand side first and later the right-hand side. The left-hand side: u + v, which we have here, is (4, 6), and w from here is (5, 6). For the inner product we take the 4 times the 5 plus the 6 times the 6, and it comes out to be 56. And now the right-hand side: u with w and then v with w, so that's (1, 2) inner product with (5, 6), and then (3, 4) with (5, 6). The first inner product is 1 times 5 plus 2 times 6, and the second one is 3 times 5 plus 4 times 6, so 17 plus 39, and again we get 56, which is the same. So that's rule number 3. Informally, as I said; we're going to do everything more precisely later.

What rule number four says is that if I take the inner product of any vector with itself, I'm always going to get something bigger than or equal to 0, positive or zero. And what's more, the only way you can get 0 is if u is the zero vector: 0 with itself gives 0, but nothing else; everything else with itself gives you something strictly positive.

So let's see how that works out in an example. Let's make it general: let u be anything, say (a, b). So the inner product of u with itself is (a, b) with (a, b), and then, remember, we take a times a plus b times b, and we get a² + b². Now, for any numbers a and b, this is obviously going to be bigger than or equal to 0, because each of the terms is a square, hence bigger than or equal to 0. And note that the only way we can get 0 here is if both a and b are 0; as soon as one of them is non-zero, you get something positive. So only the vector (0, 0) will give us an inner product of 0, and that is the zero vector. When I write 0 for a vector, in this case it means the pair (0, 0).

Okay, so those are the four rules, and that's the introduction. I just want to say a few words.
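The third and fourth rules can be spot-checked the same way (again my own sketch, not part of the lecture, using the lesson's vectors):

```python
def inner(u, v):
    # Standard inner product on R^2.
    return u[0] * v[0] + u[1] * v[1]

u, v, w = (1, 2), (3, 4), (5, 6)
u_plus_v = (u[0] + v[0], u[1] + v[1])  # (4, 6)

# Rule 3 (additivity): <u + v, w> = <u, w> + <v, w>
print(inner(u_plus_v, w), inner(u, w) + inner(v, w))  # 56 56

# Rule 4 (positivity): <u, u> = a^2 + b^2 is never negative,
# and it is 0 only for the zero vector.
print(inner(u, u))            # 1^2 + 2^2 = 5
print(inner((0, 0), (0, 0)))  # 0
```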
In our example, we took the vector space R2, and then we defined an inner product on that vector space, and we showed it satisfies certain rules. Now, when you have a vector space together with an inner product, that combination is called an inner product space, and sometimes a Euclidean space.

Now, in our example all the scalars we took were real numbers, and so our inner product is said to be over the real numbers. That's not the only possibility, because there are also inner product spaces over the complex numbers: if our scalars were taken from C, the complex numbers, then we'd have an inner product space over the complex numbers. But in this course we're pretty much, or maybe always, going to stick to real numbers; we're not going to deal with inner product spaces over the complex numbers. So we're done, and we'll continue in the next clip.

Lesson 2 - Formal Def - Example 1 (13m 6s)

In this clip, we'll give the formal definition of an inner product and also of an inner product space. I hope you've watched the previous clip, where we introduced these slowly, less formally, with an example; that would be a good introduction. Here we do it a bit more precisely and formally. Let me just read it first.

An inner product on a vector space V over the field of real numbers R is a function which assigns to each pair of vectors u and v in V a real scalar, denoted with angular brackets ⟨u, v⟩, such that the following four conditions are satisfied. I'll bring those conditions in a moment; let's just see what this says. We start off with a vector space V, and we're going to assume that the field is the real numbers. Although it could have been the complex numbers, in this course we're just going to take real numbers.
An inner product takes any two vectors and gives a product which is a scalar. And although it's a function, we don't write it as one: we could have written something like f(u, v), but we don't. We write it with angular brackets, as the product of u and v. And sometimes we say "multiply u with v", because "product" reminds one a bit of multiplication. Anyway, not just any function that we define like this, on two vectors, to give a scalar is an inner product. It has to satisfy four conditions, and now I'm going to bring those conditions.

The first condition says that for all u and v, the product of u and v is the same as the product of v and u. (It's "inner product", but we just say "product" for short.) This symbol, by the way, means "for all", in case you haven't seen it in logic, and this symbol is the membership symbol from set theory; so: for all u and v belonging to V. Okay? And there's a name for this first rule: it's called symmetry.

The next rule involves two vectors and a scalar. What it says is: for any vectors u and v, and for any scalar, if I multiply the scalar by the vector and then take the inner product with v, that's the same thing as first taking the inner product with v and at the end multiplying by the scalar. Like I said, there were examples in the previous clip, where I went a bit more slowly, and we'll give examples after I'm done with all four rules.

And here's the third rule, which involves any three vectors. Let me say that sentence again: for all u, v and w in the vector space, if we add u plus v and then take the product with w, it's the same as taking the product of u with w, then of v with w, and adding them, just as written here.

These two rules together are called linearity; to be more precise, linearity in the first argument. The inner product has two arguments: before the comma is the first argument, and after the comma is the second argument, and this all relates to what happens when I have a sum before the comma. Anyway, let's not get too pedantic about "linearity", because the name is not universally agreed on. Some people say that just the second of the two rules is linearity, and that the first of the two is homogeneity. It doesn't really matter; they're just names, and as I said, there is no universal agreement.

Next we come to rule number 4. It also has a name, but that's not important. What it says is that if I take the inner product of a vector with itself, I'm always going to get something non-negative, bigger than or equal to 0. And the only time it can actually be 0 is when the vector u is the zero vector.

Okay, so we've defined an inner product. Now what about an inner product space? Well, that's fairly straightforward. If we have a vector space over the real numbers together with an inner product, which is a function that satisfies these conditions, then together that's called an inner product space, or a Euclidean space. And for emphasis, we often say a real inner product space, as opposed to a complex inner product space, which we won't be learning about.
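For reference, the four conditions just listed can be summarized in symbols (this summary restates the lesson's rules; it is not part of the original transcript):

```latex
% For all u, v, w in V and every scalar alpha in R:
\begin{align*}
&\text{(1) Symmetry:}    & \langle u, v \rangle &= \langle v, u \rangle \\
&\text{(2) Homogeneity:} & \langle \alpha u, v \rangle &= \alpha \langle u, v \rangle \\
&\text{(3) Additivity:}  & \langle u + v, w \rangle &= \langle u, w \rangle + \langle v, w \rangle \\
&\text{(4) Positivity:}  & \langle u, u \rangle &\ge 0, \quad \langle u, u \rangle = 0 \iff u = 0
\end{align*}
```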
Okay, those are the definitions. Now we want to see some examples. Remember, for an inner product space we have to have a vector space and an inner product. So in the first example, the vector space will be R2, and the inner product will be defined as follows. Now, I've used the letters x and y so you don't get used to u and v all the time, because some books use u and v and sometimes we want to say x and y; you should get used to both. So, for the inner product of x with y: let's say x is (x1, x2) (remember we're in R2, so we need two coordinates) and y is (y1, y2). We just take the first component with the first component, multiplied, then the second with the second, and then we add:

⟨x, y⟩ = x1·y1 + x2·y2

This is actually the example I gave in the previous introductory clip, if we go back and look. And this is so common that it's called the standard inner product on R2.

Now what I want to do is prove that it really is an inner product. In the introductory clip we just gave an example with numbers; here we have to prove it in general. And in order to verify that, remember that there are four axioms, four conditions, that have to be satisfied.

The first one was the symmetry one: the product of u and v is the same as the product of v and u. (I could have said x and y, and y and x.) Also, for u I could say its components are (u1, u2), but sometimes you just want to use regular letters, so let's call a and b the components of u, and c and d the components of v. So here's what we have to check: compute the left-hand side and see that it's equal to the right-hand side. The left-hand side is the product of u and v, with this u and this v. If I want to refer to the definition, then I can replace a, b, c, d by x1, x2, y1, y2, and then I have to write here x1·y1 + x2·y2, and it comes out to be ac + bd; it's just this with different letters. And similarly, if I do the product of v with u, referring again to the definition, I'll get ca + db. Now, this is the same as this, because ac is the same as ca and bd is the same as db. So we really do have equality, and we've verified the first of the four.

Now on to rule 2. I've just copied the definition of the inner product again, so we have it handy. Rule number 2, which we also called homogeneity, is this: if I put the scalar in front of the vector u, it's the same as if I do the inner product first and then put the scalar in front. So, as before, I'll call u (a, b) and v (c, d), and I'll need αu, which would just be (αa, αb).
product with V,"},{"Start":"09:16.520 ","End":"09:18.590","Text":"is this with this."},{"Start":"09:18.590 ","End":"09:24.090","Text":"And it\u0027s Alpha a times c plus alpha b times d."},{"Start":"09:24.160 ","End":"09:27.110","Text":"To evaluate the right-hand side, first,"},{"Start":"09:27.110 ","End":"09:29.195","Text":"I do the inner product of u with v,"},{"Start":"09:29.195 ","End":"09:31.220","Text":"which is this with this."},{"Start":"09:31.220 ","End":"09:33.830","Text":"And we already computed that before,"},{"Start":"09:33.830 ","End":"09:36.155","Text":"That\u0027s ac plus bd."},{"Start":"09:36.155 ","End":"09:38.810","Text":"But the alpha, we track the Alpha with us."},{"Start":"09:38.810 ","End":"09:44.840","Text":"And now I multiply alpha using the regular distributive law from arithmetic."},{"Start":"09:44.840 ","End":"09:49.864","Text":"Regular algebra is alpha times ac plus Alpha BD."},{"Start":"09:49.864 ","End":"09:53.315","Text":"Now, I want to see if this is equal to this."},{"Start":"09:53.315 ","End":"09:58.910","Text":"Pretty clear that this is alpha citizens of a C and this is Alpha BD,"},{"Start":"09:58.910 ","End":"10:03.170","Text":"Alpha BD here, product sign here. Same thing."},{"Start":"10:03.170 ","End":"10:07.460","Text":"So rule number two has been verified also."},{"Start":"10:07.460 ","End":"10:13.445","Text":"Now, rule number three also called linearity."},{"Start":"10:13.445 ","End":"10:16.880","Text":"And then involves three vectors, u, v, and w."},{"Start":"10:16.880 ","End":"10:18.635","Text":"Let\u0027s give them names."},{"Start":"10:18.635 ","End":"10:24.620","Text":"I mean, components, UBA be V, CDW is EF,"},{"Start":"10:24.620 ","End":"10:29.030","Text":"and we\u0027re going to need u plus v. So that will be this plus this,"},{"Start":"10:29.030 ","End":"10:33.515","Text":"a plus c, b plus d. 
Now I\u0027ll start with the left-hand side."},{"Start":"10:33.515 ","End":"10:37.685","Text":"So I want the inner product of u plus v with w,"},{"Start":"10:37.685 ","End":"10:40.370","Text":"u plus v is this, w is this."},{"Start":"10:40.370 ","End":"10:42.665","Text":"And I\u0027m referring to this rule."},{"Start":"10:42.665 ","End":"10:45.949","Text":"And again, you can think of it as renaming,"},{"Start":"10:45.949 ","End":"10:49.190","Text":"renaming them x1, x2, y1, y2."},{"Start":"10:49.190 ","End":"10:52.280","Text":"I think it\u0027s easy to see that this,"},{"Start":"10:52.280 ","End":"10:55.160","Text":"with this, it\u0027s a plus c times e,"},{"Start":"10:55.160 ","End":"11:01.685","Text":"then b plus d times f. And if we use the distributive law,"},{"Start":"11:01.685 ","End":"11:05.105","Text":"this is what we get when we open the brackets."},{"Start":"11:05.105 ","End":"11:09.620","Text":"And similarly, the right-hand side,"},{"Start":"11:09.620 ","End":"11:12.260","Text":"u with w is this with this,"},{"Start":"11:12.260 ","End":"11:15.320","Text":"v with w, this with this."},{"Start":"11:15.320 ","End":"11:18.125","Text":"The first inner product gives ae,"},{"Start":"11:18.125 ","End":"11:21.635","Text":"bf, and then ce plus df."},{"Start":"11:21.635 ","End":"11:24.845","Text":"And that\u0027s the same thing as we had here."},{"Start":"11:24.845 ","End":"11:29.240","Text":"Different order: ce with ce, bf with bf."},{"Start":"11:29.240 ","End":"11:30.545","Text":"The rest of it\u0027s the same."},{"Start":"11:30.545 ","End":"11:36.480","Text":"So we have verified rule number three,"},{"Start":"11:36.910 ","End":"11:39.965","Text":"I guess on to number four."},{"Start":"11:39.965 ","End":"11:47.285","Text":"And number 4 says the inner product of a vector with itself is always non-negative,"},{"Start":"11:47.285 ","End":"11:49.070","Text":"in other words, positive or 0."},{"Start":"11:49.070 ","End":"11:56.459","Text":"And the only time it\u0027s 0 is 
when the vector itself is the 0 vector."},{"Start":"11:56.470 ","End":"12:00.665","Text":"The 0 vector in our case is just 0 comma 0."},{"Start":"12:00.665 ","End":"12:07.160","Text":"So let\u0027s say that we have u and its components are a and b."},{"Start":"12:07.160 ","End":"12:10.880","Text":"The inner product of u with u is a, b with a, b."},{"Start":"12:10.880 ","End":"12:14.165","Text":"So it\u0027s a times a plus b times b,"},{"Start":"12:14.165 ","End":"12:16.415","Text":"in other words, a squared plus b squared."},{"Start":"12:16.415 ","End":"12:19.625","Text":"And clearly that\u0027s bigger or equal to 0."},{"Start":"12:19.625 ","End":"12:21.725","Text":"Each of these is non-negative."},{"Start":"12:21.725 ","End":"12:24.830","Text":"And when could this possibly be 0?"},{"Start":"12:24.830 ","End":"12:28.565","Text":"If you think about it, only when a and b are both 0."},{"Start":"12:28.565 ","End":"12:33.710","Text":"I\u0027m going to just say it in brackets here, that the only way you\u0027re going to get a"},{"Start":"12:33.710 ","End":"12:39.425","Text":"squared plus b squared equal to 0 is if and only if a and b are both 0."},{"Start":"12:39.425 ","End":"12:42.250","Text":"And so that gives us that."},{"Start":"12:42.250 ","End":"12:46.880","Text":"We wanted exactly that:"},{"Start":"12:46.880 ","End":"12:53.990","Text":"u is the 0 vector if and only if the inner product of u with u is 0."},{"Start":"12:53.990 ","End":"12:58.400","Text":"Okay, that completes the fourth part."},{"Start":"12:58.400 ","End":"13:01.925","Text":"And that was for the particular example."},{"Start":"13:01.925 ","End":"13:06.450","Text":"And we\u0027ll take another example in the next clip."}],"ID":10134},{"Watched":false,"Name":"Lesson 3 - Example 2","Duration":"8m 
3s","ChapterTopicVideoID":10014,"CourseChapterTopicPlaylistID":7308,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.575","Text":"We\u0027re continuing from the previous clip where we did example 1."},{"Start":"00:04.575 ","End":"00:07.815","Text":"And now we\u0027re doing example 2."},{"Start":"00:07.815 ","End":"00:12.840","Text":"What I\u0027m going to do is take the same vector space as before,"},{"Start":"00:12.840 ","End":"00:18.150","Text":"R2, but define the product differently."},{"Start":"00:18.150 ","End":"00:24.260","Text":"So let\u0027s take a general vector u and another general vector"},{"Start":"00:24.260 ","End":"00:31.205","Text":"v. I have to tell you what the product is, and you write it in angle brackets."},{"Start":"00:31.205 ","End":"00:36.335","Text":"And it\u0027s going to be equal to this formula."},{"Start":"00:36.335 ","End":"00:44.165","Text":"Twice the first component of the first one times the first component of the second one."},{"Start":"00:44.165 ","End":"00:50.810","Text":"And minus this one with this one."},{"Start":"00:50.810 ","End":"00:56.945","Text":"And then minus this component with this component, and plus this one with this one."},{"Start":"00:56.945 ","End":"00:59.705","Text":"Each time it involves something from here,"},{"Start":"00:59.705 ","End":"01:01.100","Text":"times something from here,"},{"Start":"01:01.100 ","End":"01:03.485","Text":"but a different coefficient each time."},{"Start":"01:03.485 ","End":"01:06.335","Text":"We don\u0027t know yet that it\u0027s an inner product."},{"Start":"01:06.335 ","End":"01:11.855","Text":"Meanwhile, it just satisfies the very basic property that it takes"},{"Start":"01:11.855 ","End":"01:18.515","Text":"two vectors as input and gives out a scalar."},{"Start":"01:18.515 
","End":"01:21.080","Text":"In the first exercise,"},{"Start":"01:21.080 ","End":"01:23.000","Text":"we defined it as x1"},{"Start":"01:23.000 ","End":"01:27.665","Text":"y1 plus x2 y2."},{"Start":"01:27.665 ","End":"01:32.855","Text":"That actually is called the standard inner product for R2."},{"Start":"01:32.855 ","End":"01:35.945","Text":"I think I may have mentioned it. This will not be the standard one."},{"Start":"01:35.945 ","End":"01:37.520","Text":"It\u0027s going to be another one."},{"Start":"01:37.520 ","End":"01:41.910","Text":"So let\u0027s see that it satisfies the four properties."},{"Start":"01:41.950 ","End":"01:46.775","Text":"The first property or axiom, as it was called,"},{"Start":"01:46.775 ","End":"01:48.800","Text":"we call it symmetry."},{"Start":"01:48.800 ","End":"01:53.975","Text":"Which basically says that the order of taking the product doesn\u0027t matter,"},{"Start":"01:53.975 ","End":"01:56.210","Text":"and we want to prove this in general."},{"Start":"01:56.210 ","End":"01:59.210","Text":"Let\u0027s say that u has components a,"},{"Start":"01:59.210 ","End":"02:02.660","Text":"b, and v has components c, d."},{"Start":"02:02.660 ","End":"02:04.790","Text":"And to figure out uv,"},{"Start":"02:04.790 ","End":"02:07.100","Text":"basically I\u0027m looking at this formula,"},{"Start":"02:07.100 ","End":"02:09.814","Text":"but I have different letters: instead of x1,"},{"Start":"02:09.814 ","End":"02:12.545","Text":"I have a, and instead of y1 I have b."},{"Start":"02:12.545 ","End":"02:14.870","Text":"I just wrote the equivalence underneath."},{"Start":"02:14.870 ","End":"02:16.925","Text":"So just copying the formula,"},{"Start":"02:16.925 ","End":"02:20.675","Text":"just translating it to a, b, c, d."},{"Start":"02:20.675 ","End":"02:23.795","Text":"You\u0027d probably be able to figure it out"},{"Start":"02:23.795 ","End":"02:26.285","Text":"without all these extra annotations,"},{"Start":"02:26.285 ","End":"02:28.865","Text":"just by mapping it out 
mentally."},{"Start":"02:28.865 ","End":"02:31.445","Text":"That wherever you see x1, you\u0027d say, aha,"},{"Start":"02:31.445 ","End":"02:34.220","Text":"that\u0027s a, and so on. It looks a bit messy."},{"Start":"02:34.220 ","End":"02:40.130","Text":"But anyway, this is our answer: 2ac minus ad minus bc plus bd."},{"Start":"02:40.130 ","End":"02:44.430","Text":"Now let\u0027s switch the order, which is the right-hand side."},{"Start":"02:44.650 ","End":"02:48.450","Text":"And this is what we get."},{"Start":"02:48.610 ","End":"02:51.620","Text":"Again, I\u0027ve written it out in full,"},{"Start":"02:51.620 ","End":"02:53.240","Text":"call this one x1, x2,"},{"Start":"02:53.240 ","End":"02:58.025","Text":"y1, y2, and follow this formula and translate it into a, b, c, d."},{"Start":"02:58.025 ","End":"03:01.400","Text":"Anyway, this is what we get: 2ca minus"},{"Start":"03:01.400 ","End":"03:03.845","Text":"cb minus da plus db."},{"Start":"03:03.845 ","End":"03:05.555","Text":"And the question is,"},{"Start":"03:05.555 ","End":"03:07.715","Text":"are these two equal?"},{"Start":"03:07.715 ","End":"03:11.305","Text":"Well, let\u0027s look: 2ac here and 2ca there,"},{"Start":"03:11.305 ","End":"03:13.520","Text":"then minus ad here,"},{"Start":"03:13.520 ","End":"03:16.160","Text":"minus da here, yeah,"},{"Start":"03:16.160 ","End":"03:18.500","Text":"minus bc with minus c,"},{"Start":"03:18.500 ","End":"03:21.560","Text":"b, and plus bd with plus db."},{"Start":"03:21.560 ","End":"03:24.215","Text":"So yes, every term is the same."},{"Start":"03:24.215 ","End":"03:26.270","Text":"So these two are equal."},{"Start":"03:26.270 ","End":"03:31.260","Text":"And so we have verified the first out of four."},{"Start":"03:31.740 ","End":"03:36.370","Text":"And now the second, which is homogeneity,"},{"Start":"03:36.370 ","End":"03:43.735","Text":"which says here that if you put a scalar before the first vector or outside"},{"Start":"03:43.735 ","End":"03:47.590","Text":"the product, it doesn\u0027t 
make a difference."},{"Start":"03:47.590 ","End":"03:49.570","Text":"This I\u0027ve just copied from earlier,"},{"Start":"03:49.570 ","End":"03:51.070","Text":"so we have it from before,"},{"Start":"03:51.070 ","End":"03:53.320","Text":"we have it handy now."},{"Start":"03:53.320 ","End":"03:59.920","Text":"So let\u0027s say that u is a, b and v is c, d. We\u0027ll also need alpha u."},{"Start":"03:59.920 ","End":"04:01.600","Text":"So I computed that already."},{"Start":"04:01.600 ","End":"04:03.955","Text":"That\u0027s alpha a, alpha b,"},{"Start":"04:03.955 ","End":"04:06.010","Text":"starting with the left-hand side,"},{"Start":"04:06.010 ","End":"04:07.795","Text":"which is to take alpha u,"},{"Start":"04:07.795 ","End":"04:10.420","Text":"which is this bit before this comma,"},{"Start":"04:10.420 ","End":"04:14.550","Text":"and then v, which is the c, d."},{"Start":"04:14.550 ","End":"04:23.075","Text":"And we just have to follow the recipe that\u0027s here with this x1, x2, y1, y2."},{"Start":"04:23.075 ","End":"04:24.950","Text":"Okay? It\u0027s tedious."},{"Start":"04:24.950 ","End":"04:28.380","Text":"You can check this is what we get."},{"Start":"04:28.480 ","End":"04:31.699","Text":"And for the right-hand side,"},{"Start":"04:31.699 ","End":"04:35.480","Text":"we first multiply u by v."},{"Start":"04:35.480 ","End":"04:38.495","Text":"We sometimes say multiply because it\u0027s a product."},{"Start":"04:38.495 ","End":"04:44.660","Text":"You take the inner product of u and v. 
And this is x1, x2, y1, y2."},{"Start":"04:44.660 ","End":"04:51.035","Text":"Look at the formula and we get this expression, and at the end we multiply by alpha."},{"Start":"04:51.035 ","End":"04:52.970","Text":"And now the question is,"},{"Start":"04:52.970 ","End":"04:55.249","Text":"are these two equal?"},{"Start":"04:55.249 ","End":"05:00.140","Text":"Well, I say yes, because if you just expand the brackets,"},{"Start":"05:00.140 ","End":"05:01.865","Text":"multiply everything by alpha."},{"Start":"05:01.865 ","End":"05:06.665","Text":"Here, 2 alpha a c is alpha times 2ac, same thing."},{"Start":"05:06.665 ","End":"05:13.190","Text":"Alpha times ad is alpha ad, alpha bc with the minus alpha bc, and alpha bd,"},{"Start":"05:13.190 ","End":"05:15.185","Text":"alpha bd. So yes."},{"Start":"05:15.185 ","End":"05:20.630","Text":"And so this has been demonstrated, and that\u0027s two out of four."},{"Start":"05:20.630 ","End":"05:23.060","Text":"And on to number 3,"},{"Start":"05:23.060 ","End":"05:24.875","Text":"this I just copied."},{"Start":"05:24.875 ","End":"05:27.275","Text":"This is what we have to show."},{"Start":"05:27.275 ","End":"05:28.850","Text":"Linearity."},{"Start":"05:28.850 ","End":"05:32.945","Text":"That if I take a sum and the inner product with w,"},{"Start":"05:32.945 ","End":"05:35.870","Text":"I can take each one separately,"},{"Start":"05:35.870 ","End":"05:42.470","Text":"u with w and then v with w, and then add. So."},{"Start":"05:42.470 ","End":"05:44.030","Text":"Let\u0027s prove it."},{"Start":"05:44.030 ","End":"05:45.740","Text":"I\u0027ll call the components of u,"},{"Start":"05:45.740 ","End":"05:50.930","Text":"a and b, and v will be c, d, and w e, f."},{"Start":"05:50.930 ","End":"05:56.960","Text":"And I also need u plus v. So that\u0027s this plus this, that\u0027s a plus c,"},{"Start":"05:56.960 ","End":"06:01.910","Text":"b plus d. Start with the left-hand side,"},{"Start":"06:01.910 ","End":"06:08.225","Text":"u plus v product with w. 
Here\u0027s the u plus v,"},{"Start":"06:08.225 ","End":"06:10.445","Text":"which I copied from here."},{"Start":"06:10.445 ","End":"06:14.630","Text":"And here is w, the e, f from here."},{"Start":"06:14.630 ","End":"06:18.320","Text":"And I\u0027ve labeled them x1, x2, y1, y2."},{"Start":"06:18.320 ","End":"06:21.510","Text":"So we can use this formula."},{"Start":"06:21.880 ","End":"06:26.435","Text":"The first term has got to be 2 x1 y1."},{"Start":"06:26.435 ","End":"06:31.550","Text":"And so it\u0027s twice, x1 is a plus c, y1 is e."},{"Start":"06:31.550 ","End":"06:38.450","Text":"Similarly, the other terms simplify to 2ae plus 2ce,"},{"Start":"06:38.450 ","End":"06:42.470","Text":"minus af, minus cf,"},{"Start":"06:42.470 ","End":"06:43.715","Text":"and so on."},{"Start":"06:43.715 ","End":"06:47.450","Text":"Okay, so yeah, that\u0027s the simplification."},{"Start":"06:47.450 ","End":"06:50.420","Text":"Now we\u0027ll get to the right hand side."},{"Start":"06:50.420 ","End":"06:53.060","Text":"Here we have two inner products,"},{"Start":"06:53.060 ","End":"06:57.425","Text":"u with w here and then v with w here,"},{"Start":"06:57.425 ","End":"06:58.820","Text":"and then we\u0027ll add them."},{"Start":"06:58.820 ","End":"07:04.459","Text":"Okay, so we get for the first one 2ae,"},{"Start":"07:04.459 ","End":"07:06.965","Text":"and then we need minus x1"},{"Start":"07:06.965 ","End":"07:09.440","Text":"y2, so that\u0027s minus af."},{"Start":"07:09.440 ","End":"07:11.720","Text":"Okay, this is tedious."},{"Start":"07:11.720 ","End":"07:19.640","Text":"Up to here is the first inner product, and then the second inner product."},{"Start":"07:19.640 ","End":"07:23.765","Text":"And at this point we want to check if,"},{"Start":"07:23.765 ","End":"07:27.005","Text":"if this is equal to this,"},{"Start":"07:27.005 ","End":"07:28.640","Text":"it\u0027s not immediately clear."},{"Start":"07:28.640 ","End":"07:31.025","Text":"So what I suggest is you just 
check them off."},{"Start":"07:31.025 ","End":"07:40.535","Text":"Okay, we have 2ae here and 2ae there, 2ce with 2ce, minus af"},{"Start":"07:40.535 ","End":"07:43.610","Text":"with minus af, minus cf"},{"Start":"07:43.610 ","End":"07:47.510","Text":"with minus cf, minus be with minus be,"},{"Start":"07:47.510 ","End":"07:51.320","Text":"minus de with minus de, plus bf"},{"Start":"07:51.320 ","End":"07:55.264","Text":"with plus bf, plus df with plus df. Yes."},{"Start":"07:55.264 ","End":"08:01.310","Text":"So we have verified this property,"},{"Start":"08:01.310 ","End":"08:04.050","Text":"which was number three."}],"ID":10135},{"Watched":false,"Name":"Lesson 4 - Criterion for Product to be Inner","Duration":"18m 24s","ChapterTopicVideoID":10011,"CourseChapterTopicPlaylistID":7308,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.620 ","End":"00:02.880","Text":"In this clip, we\u0027ll be talking about"},{"Start":"00:02.880 ","End":"00:05.700","Text":"a necessary and sufficient condition for"},{"Start":"00:05.700 ","End":"00:09.105","Text":"the vector space Rn to be an inner product space."},{"Start":"00:09.105 ","End":"00:11.520","Text":"It\u0027s not quite precise language,"},{"Start":"00:11.520 ","End":"00:16.650","Text":"because there is an inner product hidden here somewhere in the definition;"},{"Start":"00:16.650 ","End":"00:19.754","Text":"then we\u0027ll really be talking about the inner product."},{"Start":"00:19.754 ","End":"00:27.175","Text":"But let\u0027s leave that and let\u0027s just start with a preface, like an introduction."},{"Start":"00:27.175 ","End":"00:31.235","Text":"And later on we\u0027ll see precisely what this means."},{"Start":"00:31.235 ","End":"00:35.210","Text":"Now I\u0027ll loosely be using the term product on"},{"Start":"00:35.210 ","End":"00:40.100","Text":"a vector space as an operation that takes two 
vectors and gives a scalar."},{"Start":"00:40.100 ","End":"00:41.360","Text":"It\u0027s like an inner product,"},{"Start":"00:41.360 ","End":"00:44.180","Text":"but before I\u0027ve checked the four conditions."},{"Start":"00:44.180 ","End":"00:53.480","Text":"So in the previous clip we had defined a product like this, where x1,"},{"Start":"00:53.480 ","End":"00:58.140","Text":"x2, y1, y2 was given by this formula."},{"Start":"00:58.180 ","End":"01:00.350","Text":"In the previous clip,"},{"Start":"01:00.350 ","End":"01:04.775","Text":"we did work quite hard to show that this actually does satisfy the four conditions."},{"Start":"01:04.775 ","End":"01:06.740","Text":"But it was very hard work."},{"Start":"01:06.740 ","End":"01:09.245","Text":"And really the purpose of this is to"},{"Start":"01:09.245 ","End":"01:15.170","Text":"find kind of a shortcut way of deciding whether a product is an inner product or not."},{"Start":"01:15.170 ","End":"01:19.160","Text":"Here I copied this line but added a bit of color."},{"Start":"01:19.160 ","End":"01:21.470","Text":"And you\u0027ll see why."},{"Start":"01:21.470 ","End":"01:29.495","Text":"Turns out that this product can be written in matrix form as a product of three matrices."},{"Start":"01:29.495 ","End":"01:33.065","Text":"I\u0027ll leave you to check the computation."},{"Start":"01:33.065 ","End":"01:35.180","Text":"But if you multiply these three matrices,"},{"Start":"01:35.180 ","End":"01:36.815","Text":"you will get exactly this."},{"Start":"01:36.815 ","End":"01:38.420","Text":"But what I will tell you is how we,"},{"Start":"01:38.420 ","End":"01:40.655","Text":"how we build this."},{"Start":"01:40.655 ","End":"01:41.840","Text":"Here."},{"Start":"01:41.840 ","End":"01:47.315","Text":"We put the first vector as a row vector."},{"Start":"01:47.315 ","End":"01:52.670","Text":"Then I\u0027ll show you in a minute how to build the matrix, and the colors are a hint."},{"Start":"01:52.670 ","End":"01:56.945","Text":"And then we put the 
second vector as a column vector."},{"Start":"01:56.945 ","End":"02:00.090","Text":"Really it\u0027s supposed to be a column vector."},{"Start":"02:00.430 ","End":"02:04.985","Text":"The way we fill this matrix from this expression is as follows."},{"Start":"02:04.985 ","End":"02:06.725","Text":"We\u0027ll look at each coefficient."},{"Start":"02:06.725 ","End":"02:08.420","Text":"This one is x1"},{"Start":"02:08.420 ","End":"02:10.640","Text":"y1, and this is the 1, 1 entry,"},{"Start":"02:10.640 ","End":"02:12.665","Text":"so we take the first row,"},{"Start":"02:12.665 ","End":"02:15.755","Text":"first column, we put a two in, that\u0027s that."},{"Start":"02:15.755 ","End":"02:17.810","Text":"Then here, first row,"},{"Start":"02:17.810 ","End":"02:21.455","Text":"second column, minus 1, that\u0027s here."},{"Start":"02:21.455 ","End":"02:25.955","Text":"Second row, first column, minus one here."},{"Start":"02:25.955 ","End":"02:31.350","Text":"And second row, second column, one, which is here."},{"Start":"02:31.390 ","End":"02:34.280","Text":"Another way of writing this,"},{"Start":"02:34.280 ","End":"02:36.080","Text":"and that is as follows."},{"Start":"02:36.080 ","End":"02:40.549","Text":"Now remember, really all our vectors are column vectors; just for convenience,"},{"Start":"02:40.549 ","End":"02:42.875","Text":"we sometimes write them as row vectors."},{"Start":"02:42.875 ","End":"02:48.140","Text":"But really, if this is u and this is v,"},{"Start":"02:48.140 ","End":"02:50.705","Text":"then this is u transposed,"},{"Start":"02:50.705 ","End":"02:53.600","Text":"changed from a column vector to a row vector."},{"Start":"02:53.600 ","End":"02:55.550","Text":"So it\u0027s u transpose."},{"Start":"02:55.550 ","End":"03:00.110","Text":"This one is just v itself."},{"Start":"03:00.110 ","End":"03:01.550","Text":"And the middle one,"},{"Start":"03:01.550 ","End":"03:03.470","Text":"I\u0027ll just give it a name,"},{"Start":"03:03.470 ","End":"03:08.670","Text":"a, where a is 
this 2, minus 1, minus 1, 1."},{"Start":"03:08.980 ","End":"03:13.640","Text":"What I\u0027m concentrating on now is a technique for getting"},{"Start":"03:13.640 ","End":"03:18.979","Text":"this expression and writing these coefficients in a matrix."},{"Start":"03:18.979 ","End":"03:20.930","Text":"Let me take another example."},{"Start":"03:20.930 ","End":"03:22.940","Text":"I\u0027ll take a three by three example,"},{"Start":"03:22.940 ","End":"03:25.205","Text":"so you really get the idea."},{"Start":"03:25.205 ","End":"03:30.230","Text":"So we\u0027ll take R3, and then vectors will have three components."},{"Start":"03:30.230 ","End":"03:32.195","Text":"And again, this is not accurate."},{"Start":"03:32.195 ","End":"03:35.105","Text":"u is really a column vector and v is a column vector."},{"Start":"03:35.105 ","End":"03:36.439","Text":"Like I said, for convenience,"},{"Start":"03:36.439 ","End":"03:41.975","Text":"we sometimes write them horizontally as row vectors and we put commas in."},{"Start":"03:41.975 ","End":"03:45.455","Text":"And let\u0027s define this."},{"Start":"03:45.455 ","End":"03:48.605","Text":"Let\u0027s call it a product, something which,"},{"Start":"03:48.605 ","End":"03:53.915","Text":"which takes two vectors and gives me a scalar, as this messy expression."},{"Start":"03:53.915 ","End":"03:58.010","Text":"Notice the similarity between this and this, that in each case,"},{"Start":"03:58.010 ","End":"04:02.150","Text":"I\u0027m always taking x something, y something,"},{"Start":"04:02.150 ","End":"04:05.705","Text":"and a number in front, and a combination of those."},{"Start":"04:05.705 ","End":"04:09.845","Text":"So that\u0027s all we\u0027ll be working with, only products of this type."},{"Start":"04:09.845 ","End":"04:12.500","Text":"And then we\u0027ll be able to build a matrix."},{"Start":"04:12.500 ","End":"04:14.910","Text":"So let\u0027s do this one."},{"Start":"04:15.190 ","End":"04:17.450","Text":"First, we\u0027ll add some 
color,"},{"Start":"04:17.450 ","End":"04:19.580","Text":"although this is just for teaching purposes,"},{"Start":"04:19.580 ","End":"04:23.520","Text":"I don\u0027t imagine on an exam you\u0027d start coloring it."},{"Start":"04:23.740 ","End":"04:29.540","Text":"Notice particularly that here the 1 is not written explicitly."},{"Start":"04:29.540 ","End":"04:31.280","Text":"So make sure you write it."},{"Start":"04:31.280 ","End":"04:36.035","Text":"And also notice that there\u0027s a missing x3 y3 term."},{"Start":"04:36.035 ","End":"04:38.810","Text":"So we write that in with a 0."},{"Start":"04:38.810 ","End":"04:42.185","Text":"Next, we\u0027ll use the same technique as before."},{"Start":"04:42.185 ","End":"04:44.690","Text":"And this can be written in matrix form."},{"Start":"04:44.690 ","End":"04:47.660","Text":"Like I said, I\u0027m not going to actually do the multiplication,"},{"Start":"04:47.660 ","End":"04:51.350","Text":"but you can check: you multiply it out, you\u0027ll get this."},{"Start":"04:51.350 ","End":"04:54.695","Text":"And we built this matrix the same way."},{"Start":"04:54.695 ","End":"04:58.730","Text":"First row, first column is the 1, first row,"},{"Start":"04:58.730 ","End":"05:00.800","Text":"second column minus 2,"},{"Start":"05:00.800 ","End":"05:03.125","Text":"first row, third column is the 3."},{"Start":"05:03.125 ","End":"05:06.800","Text":"Second row, first column minus 2, and so on."},{"Start":"05:06.800 ","End":"05:09.050","Text":"Up to third row, third column."},{"Start":"05:09.050 ","End":"05:10.950","Text":"The 0."},{"Start":"05:11.470 ","End":"05:13.670","Text":"And just like before,"},{"Start":"05:13.670 ","End":"05:20.450","Text":"there\u0027s an alternative way of writing it as u transpose a v,"},{"Start":"05:20.450 ","End":"05:23.015","Text":"where a is this matrix here."},{"Start":"05:23.015 ","End":"05:28.145","Text":"u is properly the column vector,"},{"Start":"05:28.145 ","End":"05:31.265","Text":"and v is this 
column vector."},{"Start":"05:31.265 ","End":"05:35.750","Text":"And so this is not really u, like we said before,"},{"Start":"05:35.750 ","End":"05:38.704","Text":"it\u0027s really u transpose, to be precise."},{"Start":"05:38.704 ","End":"05:44.570","Text":"So this is u transpose a times v. Okay,"},{"Start":"05:44.570 ","End":"05:48.545","Text":"so those are two examples, two-dimensional and three-dimensional."},{"Start":"05:48.545 ","End":"05:57.290","Text":"And what we\u0027re going to do next is find some conditions on this matrix a,"},{"Start":"05:57.290 ","End":"06:03.800","Text":"to decide whether these products are really inner products or not."},{"Start":"06:03.800 ","End":"06:07.415","Text":"But I will give you one spoiler."},{"Start":"06:07.415 ","End":"06:12.305","Text":"And that is that the matrix has to be symmetric."},{"Start":"06:12.305 ","End":"06:16.190","Text":"Notice that in this one, the minus two and minus two are reflections,"},{"Start":"06:16.190 ","End":"06:17.300","Text":"the three and the three,"},{"Start":"06:17.300 ","End":"06:18.650","Text":"the five and the five."},{"Start":"06:18.650 ","End":"06:23.000","Text":"And if you go back and take a look where that two-by-two is,"},{"Start":"06:23.000 ","End":"06:28.160","Text":"yeah, this is also symmetrical, because the minus one and minus one are reflections."},{"Start":"06:28.160 ","End":"06:30.500","Text":"So that\u0027s one condition."},{"Start":"06:30.500 ","End":"06:33.440","Text":"And we\u0027ll soon see what else."},{"Start":"06:33.440 ","End":"06:38.150","Text":"Before we tackle the question I asked before about conditions on the matrix,"},{"Start":"06:38.150 ","End":"06:43.805","Text":"I have to introduce a new concept called positive-definite."},{"Start":"06:43.805 ","End":"06:50.705","Text":"And we\u0027re only going to use it in the context of symmetric real matrices."},{"Start":"06:50.705 ","End":"06:57.755","Text":"Although in principle it can be defined in general for 
matrices."},{"Start":"06:57.755 ","End":"07:02.060","Text":"But, and some people do,"},{"Start":"07:02.060 ","End":"07:05.840","Text":"but we will only use the term positive-definite in"},{"Start":"07:05.840 ","End":"07:09.890","Text":"the context of real symmetric square matrices."},{"Start":"07:09.890 ","End":"07:11.105","Text":"They have to be, of course,"},{"Start":"07:11.105 ","End":"07:14.220","Text":"square, as a symmetric matrix has to be square."},{"Start":"07:14.980 ","End":"07:18.875","Text":"In the end, I\u0027ll give a definition of positive definite."},{"Start":"07:18.875 ","End":"07:21.980","Text":"But it turns out that we can do quite a lot without even defining"},{"Start":"07:21.980 ","End":"07:26.270","Text":"it, through a proposition. In mathematics,"},{"Start":"07:26.270 ","End":"07:31.730","Text":"a proposition is like a mini theorem. It says that a real symmetric matrix is"},{"Start":"07:31.730 ","End":"07:37.535","Text":"positive definite if and only if all its principal minors are positive."},{"Start":"07:37.535 ","End":"07:41.945","Text":"Now I\u0027ll explain in a moment what principal minors are."},{"Start":"07:41.945 ","End":"07:48.140","Text":"But the point is that once we have this proposition,"},{"Start":"07:48.140 ","End":"07:53.270","Text":"for practical purposes, I can use this proposition as a makeshift definition."},{"Start":"07:53.270 ","End":"07:56.540","Text":"I\u0027ll just define a positive definite matrix"},{"Start":"07:56.540 ","End":"07:59.765","Text":"to be one whose principal minors are positive."},{"Start":"07:59.765 ","End":"08:03.275","Text":"Okay, but I have to tell you what principal minors are."},{"Start":"08:03.275 ","End":"08:07.115","Text":"You may have come across the term minor in the context of determinants."},{"Start":"08:07.115 ","End":"08:09.470","Text":"But you know what, I won\u0027t define minor on its own."},{"Start":"08:09.470 ","End":"08:13.055","Text":"We\u0027ll just, I\u0027ll just show you what I mean by the principal 
minors."},{"Start":"08:13.055 ","End":"08:17.675","Text":"And let\u0027s take this example of this matrix,"},{"Start":"08:17.675 ","End":"08:23.405","Text":"and it\u0027s certainly symmetric: the 2 and 2, the 3 and 3, the 5 and 5."},{"Start":"08:23.405 ","End":"08:29.990","Text":"The principal minors are the determinants of,"},{"Start":"08:29.990 ","End":"08:32.430","Text":"well, I\u0027ll draw it for you."},{"Start":"08:32.830 ","End":"08:38.240","Text":"Like, just this part is one principal minor,"},{"Start":"08:38.240 ","End":"08:40.820","Text":"then the two-by-two determinant is"},{"Start":"08:40.820 ","End":"08:44.675","Text":"another principal minor, and the three by three determinant is another."},{"Start":"08:44.675 ","End":"08:50.075","Text":"We just keep taking larger and larger square portions,"},{"Start":"08:50.075 ","End":"08:52.025","Text":"just like in the picture."},{"Start":"08:52.025 ","End":"08:54.260","Text":"So if it\u0027s a three by three matrix,"},{"Start":"08:54.260 ","End":"08:57.240","Text":"I need to check three determinants."},{"Start":"08:57.310 ","End":"09:03.590","Text":"The first one is just the determinant of a single number 1,"},{"Start":"09:03.590 ","End":"09:06.544","Text":"which is just one, and that\u0027s bigger than 0."},{"Start":"09:06.544 ","End":"09:08.420","Text":"So it\u0027s positive."},{"Start":"09:08.420 ","End":"09:10.610","Text":"And we want to show that they\u0027re all positive."},{"Start":"09:10.610 ","End":"09:16.475","Text":"The next one is 1, 2; 2, 9: 1 times nine is nine,"},{"Start":"09:16.475 ","End":"09:18.470","Text":"minus 2 times 2 is 4."},{"Start":"09:18.470 ","End":"09:21.455","Text":"9 minus 4 is 5, positive."},{"Start":"09:21.455 ","End":"09:25.415","Text":"And then this one, I\u0027ll spare you the computation and tell you the answer."},{"Start":"09:25.415 ","End":"09:26.960","Text":"This one comes out nine."},{"Start":"09:26.960 ","End":"09:28.880","Text":"And the important thing, it\u0027s positive."},{"Start":"09:28.880 
","End":"09:33.950","Text":"So these are the three principal minors, and they\u0027re all positive."},{"Start":"09:33.950 ","End":"09:40.430","Text":"So this matrix is positive-definite by our temporary definition."},{"Start":"09:40.430 ","End":"09:47.180","Text":"Next, I want to give an example of something that isn\u0027t positive-definite."},{"Start":"09:47.180 ","End":"09:51.395","Text":"I\u0027ll take a different three-by-three matrix, this one."},{"Start":"09:51.395 ","End":"09:54.740","Text":"And let\u0027s check its principal minors."},{"Start":"09:54.740 ","End":"09:58.520","Text":"So we take the determinant of this, then the determinant of this."},{"Start":"09:58.520 ","End":"10:03.560","Text":"The thing is, once you hit something that\u0027s not positive, you stop."},{"Start":"10:03.560 ","End":"10:07.400","Text":"Because we only care if they\u0027re all positive or not."},{"Start":"10:07.400 ","End":"10:09.590","Text":"Not positive means 0 or negative."},{"Start":"10:09.590 ","End":"10:11.270","Text":"Well, the first one is okay,"},{"Start":"10:11.270 ","End":"10:12.725","Text":"it\u0027s one, that\u0027s positive."},{"Start":"10:12.725 ","End":"10:14.210","Text":"But the second one,"},{"Start":"10:14.210 ","End":"10:16.370","Text":"1 times 4 is 4."},{"Start":"10:16.370 ","End":"10:18.470","Text":"Negative 2 times negative 2 is 4,"},{"Start":"10:18.470 ","End":"10:21.185","Text":"4 minus 4, 0, not positive."},{"Start":"10:21.185 ","End":"10:23.945","Text":"So this matrix is not"},{"Start":"10:23.945 ","End":"10:26.600","Text":"positive definite."},{"Start":"10:26.600 ","End":"10:28.640","Text":"So here we are."},{"Start":"10:28.640 ","End":"10:32.045","Text":"This is the original title of the clip."},{"Start":"10:32.045 ","End":"10:36.000","Text":"And let\u0027s try and see if we can answer this."},{"Start":"10:36.220 ","End":"10:38.705","Text":"And there\u0027s a theorem."},{"Start":"10:38.705 ","End":"10:46.820","Text":"And let\u0027s say we have a real 
symmetric matrix that\u0027s of size n by n, a square matrix."},{"Start":"10:46.820 ","End":"10:52.835","Text":"Let\u0027s suppose we have a product on Rn."},{"Start":"10:52.835 ","End":"10:57.905","Text":"By a product, I mean a function that takes two vectors and gives a scalar."},{"Start":"10:57.905 ","End":"10:59.885","Text":"And suppose it\u0027s of this form,"},{"Start":"10:59.885 ","End":"11:03.500","Text":"u transpose A v for some matrix A,"},{"Start":"11:03.500 ","End":"11:06.320","Text":"just like we had before in our examples."},{"Start":"11:06.320 ","End":"11:10.490","Text":"Then this product is an inner product."},{"Start":"11:10.490 ","End":"11:14.960","Text":"Inner meaning it satisfies those four axioms."},{"Start":"11:14.960 ","End":"11:21.050","Text":"It\u0027s an inner product on Rn if and only if this matrix is positive definite."},{"Start":"11:21.050 ","End":"11:26.120","Text":"And we saw before, some matrices are positive definite and some aren\u0027t."},{"Start":"11:26.120 ","End":"11:31.835","Text":"And this will be the distinguishing criterion of when such an expression,"},{"Start":"11:31.835 ","End":"11:34.625","Text":"such a product will be an inner product."},{"Start":"11:34.625 ","End":"11:37.760","Text":"And I\u0027d like to give an example."},{"Start":"11:37.760 ","End":"11:44.360","Text":"And I\u0027m going to take the example that we had earlier in the previous clip,"},{"Start":"11:44.360 ","End":"11:51.155","Text":"where we took the space R2 and we defined a product like this."},{"Start":"11:51.155 ","End":"11:55.370","Text":"And we showed that it is an inner product from the definition by showing that"},{"Start":"11:55.370 ","End":"11:59.524","Text":"all four axioms or conditions apply."},{"Start":"11:59.524 ","End":"12:01.640","Text":"And we worked quite hard at it."},{"Start":"12:01.640 ","End":"12:08.360","Text":"And I\u0027d like to show you how we can really cut down on the work if we use this theorem."},{"Start":"12:08.360 
","End":"12:10.520","Text":"So to use the theorem,"},{"Start":"12:10.520 ","End":"12:12.650","Text":"we rewrite this in this form,"},{"Start":"12:12.650 ","End":"12:14.660","Text":"which we did earlier in this clip."},{"Start":"12:14.660 ","End":"12:24.200","Text":"We wrote this function or product as you transpose a v, like so."},{"Start":"12:24.200 ","End":"12:27.500","Text":"And remember we got the two, like first row,"},{"Start":"12:27.500 ","End":"12:29.285","Text":"first column is the 2,"},{"Start":"12:29.285 ","End":"12:31.250","Text":"first row, second column minus one."},{"Start":"12:31.250 ","End":"12:32.510","Text":"We did this already."},{"Start":"12:32.510 ","End":"12:41.765","Text":"So now this is our matrix a and we have to just decide if a is positive definite or not."},{"Start":"12:41.765 ","End":"12:45.800","Text":"And if it is, then this will be an inner product otherwise not well,"},{"Start":"12:45.800 ","End":"12:48.840","Text":"we know the answer already because we did it."},{"Start":"12:49.450 ","End":"12:53.300","Text":"There are only two principal minors here,"},{"Start":"12:53.300 ","End":"12:55.745","Text":"just the two and then the whole thing,"},{"Start":"12:55.745 ","End":"13:01.010","Text":"this one\u0027s positive, so we continue then this one is also positive."},{"Start":"13:01.010 ","End":"13:04.230","Text":"And so by the proposition,"},{"Start":"13:04.230 ","End":"13:08.080","Text":"this matrix is positive definite."},{"Start":"13:08.080 ","End":"13:12.925","Text":"And so this definition is an inner product."},{"Start":"13:12.925 ","End":"13:18.520","Text":"Okay, Another example, one\u0027s not enough this time we\u0027ll take it on R3,"},{"Start":"13:18.520 ","End":"13:26.695","Text":"where we have the inner product of two vectors defined as follows."},{"Start":"13:26.695 ","End":"13:30.205","Text":"It\u0027s always some number,"},{"Start":"13:30.205 ","End":"13:32.020","Text":"some x and some y,"},{"Start":"13:32.020 
","End":"13:34.195","Text":"and we add those together."},{"Start":"13:34.195 ","End":"13:36.895","Text":"Okay, here it is."},{"Start":"13:36.895 ","End":"13:40.225","Text":"And it\u0027ll tell you already that this is an inner product."},{"Start":"13:40.225 ","End":"13:42.700","Text":"And we\u0027ll use the theorem to show it,"},{"Start":"13:42.700 ","End":"13:47.025","Text":"rather than working hard in proving all four axioms."},{"Start":"13:47.025 ","End":"13:50.840","Text":"So we write this in matrix form."},{"Start":"13:50.840 ","End":"13:54.860","Text":"This is one, X1, Y1."},{"Start":"13:54.860 ","End":"13:56.795","Text":"And that\u0027s the one here."},{"Start":"13:56.795 ","End":"14:02.960","Text":"Remember missing entries, like there is no X1, Y2,"},{"Start":"14:02.960 ","End":"14:06.740","Text":"so that gets a 0 and there\u0027s x1,"},{"Start":"14:06.740 ","End":"14:10.025","Text":"Y3, first row, third column is a half."},{"Start":"14:10.025 ","End":"14:12.740","Text":"Again, there\u0027s missing X2,"},{"Start":"14:12.740 ","End":"14:15.950","Text":"Y1, then x2, y2."},{"Start":"14:15.950 ","End":"14:20.164","Text":"It\u0027s also one. 
Here\u0027s one."},{"Start":"14:20.164 ","End":"14:21.710","Text":"Anyway."},{"Start":"14:21.710 ","End":"14:25.055","Text":"Continuing, this is the matrix we get."},{"Start":"14:25.055 ","End":"14:30.215","Text":"And so we need to do now is show that this matrix is positive definite."},{"Start":"14:30.215 ","End":"14:34.590","Text":"And we\u0027re going to do it by checking the principle minors."},{"Start":"14:35.110 ","End":"14:42.245","Text":"The first one, just the one that\u0027s determinant of that is 1, it\u0027s positive."},{"Start":"14:42.245 ","End":"14:44.855","Text":"Next we have 1, 0, 0, 1."},{"Start":"14:44.855 ","End":"14:50.300","Text":"That is determinant of this is one and it\u0027s positive."},{"Start":"14:50.300 ","End":"14:52.565","Text":"Now we have this one."},{"Start":"14:52.565 ","End":"14:56.730","Text":"Expand by the second row,"},{"Start":"14:56.770 ","End":"15:02.600","Text":"cross out the row and the column from this."},{"Start":"15:02.600 ","End":"15:04.970","Text":"And remember the checkerboard plus,"},{"Start":"15:04.970 ","End":"15:07.175","Text":"minus is plus, minus, this is plus,"},{"Start":"15:07.175 ","End":"15:10.040","Text":"so it\u0027s plus one times,"},{"Start":"15:10.040 ","End":"15:17.495","Text":"plus one times the determinant of 1, 1.5511."},{"Start":"15:17.495 ","End":"15:21.740","Text":"And this is 1 minus a quarter, this is three-quarters."},{"Start":"15:21.740 ","End":"15:24.575","Text":"So yeah, it is positive."},{"Start":"15:24.575 ","End":"15:30.290","Text":"So by the proposition this matrix here is positive definite"},{"Start":"15:30.290 ","End":"15:39.345","Text":"and we do have an inner product from the theorem."},{"Start":"15:39.345 ","End":"15:42.310","Text":"Okay, now we\u0027re basically done,"},{"Start":"15:42.310 ","End":"15:47.005","Text":"but I said I\u0027d give you the proper definition of positive definite matrix."},{"Start":"15:47.005 ","End":"15:51.910","Text":"This part is optional and you can 
safely skip this,"},{"Start":"15:51.910 ","End":"15:54.460","Text":"but I feel I should give it."},{"Start":"15:54.460 ","End":"15:56.605","Text":"So usually she\u0027ll have it."},{"Start":"15:56.605 ","End":"15:58.840","Text":"And the definition says, Okay,"},{"Start":"15:58.840 ","End":"16:05.245","Text":"let\u0027s take an n by n real symmetric matrix will explain the asterisk in a moment."},{"Start":"16:05.245 ","End":"16:08.215","Text":"Then a is defined,"},{"Start":"16:08.215 ","End":"16:10.585","Text":"is called positive definite."},{"Start":"16:10.585 ","End":"16:18.445","Text":"If this expression u transpose times a times u is positive."},{"Start":"16:18.445 ","End":"16:23.600","Text":"For every non-zero vector u. In Iran."},{"Start":"16:23.600 ","End":"16:26.150","Text":"Of course, if u is 0, this thing will be 0."},{"Start":"16:26.150 ","End":"16:27.410","Text":"It can\u0027t be positive,"},{"Start":"16:27.410 ","End":"16:32.135","Text":"but it\u0027s gotta be positive for every UE except for 0."},{"Start":"16:32.135 ","End":"16:38.000","Text":"The asterisk is just a fine point that it\u0027s not completely universal."},{"Start":"16:38.000 ","End":"16:42.650","Text":"Some books don\u0027t require symmetry."},{"Start":"16:42.650 ","End":"16:45.530","Text":"But in this course we do so we\u0027re only going to apply"},{"Start":"16:45.530 ","End":"16:50.465","Text":"the concept of positive-definite two symmetric matrices."},{"Start":"16:50.465 ","End":"16:56.405","Text":"Now it turns out that it\u0027s very difficult to work with this definition as is,"},{"Start":"16:56.405 ","End":"17:00.980","Text":"if we\u0027re trying to decide if a matrix is positive definite or not."},{"Start":"17:00.980 ","End":"17:03.440","Text":"Which is why we use the proposition with"},{"Start":"17:03.440 ","End":"17:08.390","Text":"the principle minors or the show you how difficult it would be otherwise."},{"Start":"17:08.390 ","End":"17:12.110","Text":"Earlier we had this 
example of a matrix and we showed it was"},{"Start":"17:12.110 ","End":"17:18.785","Text":"positive-definite by checking the principal minors and they all turned out to be positive."},{"Start":"17:18.785 ","End":"17:20.810","Text":"And that\u0027s fairly easy,"},{"Start":"17:20.810 ","End":"17:24.980","Text":"almost immediate except possibly for computing a three by three determinant,"},{"Start":"17:24.980 ","End":"17:26.570","Text":"which again is not difficult."},{"Start":"17:26.570 ","End":"17:29.735","Text":"But now suppose we didn\u0027t have that proposition."},{"Start":"17:29.735 ","End":"17:32.735","Text":"Suppose we had to work straight from the definition."},{"Start":"17:32.735 ","End":"17:36.425","Text":"Then we\u0027d have to show that for each non-zero u, and let\u0027s call"},{"Start":"17:36.425 ","End":"17:39.890","Text":"u (x, y, z), we\u0027d have the inequality."},{"Start":"17:39.890 ","End":"17:44.810","Text":"Like u transpose A u."},{"Start":"17:44.810 ","End":"17:47.870","Text":"And if you actually multiply it out,"},{"Start":"17:47.870 ","End":"17:50.360","Text":"we get this expression."},{"Start":"17:50.360 ","End":"17:54.860","Text":"So what we\u0027d have to show is that this is bigger than 0 for every x, y, and z."},{"Start":"17:54.860 ","End":"17:57.545","Text":"Okay, we can simplify it a bit."},{"Start":"17:57.545 ","End":"18:00.260","Text":"And that\u0027s a bit better than this."},{"Start":"18:00.260 ","End":"18:07.175","Text":"But to show that this is positive for every x, y, z except for all zeros"},{"Start":"18:07.175 ","End":"18:09.890","Text":"could be pretty tricky. You\u0027d have to use"},{"Start":"18:09.890 ","End":"18:13.729","Text":"some clever algebraic tricks or you might even try using calculus."},{"Start":"18:13.729 ","End":"18:15.590","Text":"In any event, it\u0027s not easy."},{"Start":"18:15.590 ","End":"18:21.680","Text":"And that\u0027s why we\u0027re very happy to have that proposition with the principal 
minors."},{"Start":"18:21.680 ","End":"18:25.470","Text":"Anyway, we are now done."}],"ID":10136},{"Watched":false,"Name":"Exercise 1","Duration":"3m 34s","ChapterTopicVideoID":9634,"CourseChapterTopicPlaylistID":7308,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:05.475","Text":"In this exercise, we\u0027re going to define a product on the vector space R2."},{"Start":"00:05.475 ","End":"00:08.490","Text":"Let\u0027s say we have two general vectors,"},{"Start":"00:08.490 ","End":"00:10.410","Text":"u and v with components X1,"},{"Start":"00:10.410 ","End":"00:12.790","Text":"X2 and here Y1, Y2."},{"Start":"00:12.790 ","End":"00:19.910","Text":"Then the product is defined to be angular brackets u comma v."},{"Start":"00:19.910 ","End":"00:21.920","Text":"It\u0027s how we write the product."},{"Start":"00:21.920 ","End":"00:24.994","Text":"And this following formula."},{"Start":"00:24.994 ","End":"00:28.730","Text":"We put in two vectors and we get out a scalar."},{"Start":"00:28.730 ","End":"00:30.830","Text":"That\u0027s what makes it a product."},{"Start":"00:30.830 ","End":"00:35.270","Text":"The question is, is it an inner product?"},{"Start":"00:35.270 ","End":"00:38.630","Text":"Now there\u0027s more than one way of doing this."},{"Start":"00:38.630 ","End":"00:43.415","Text":"We could do the long way and take the four axioms or"},{"Start":"00:43.415 ","End":"00:46.280","Text":"conditions that are products has to"},{"Start":"00:46.280 ","End":"00:49.819","Text":"satisfy to be an inner product and that would take awhile."},{"Start":"00:49.819 ","End":"00:53.060","Text":"Oh, we could use the shortcut with the theorems."},{"Start":"00:53.060 ","End":"00:55.295","Text":"And that\u0027s what I\u0027m gonna do."},{"Start":"00:55.295 ","End":"01:01.910","Text":"So we learnt how to rewrite 
such a thing in matrix form."},{"Start":"01:01.910 ","End":"01:10.625","Text":"Let me just again remind you that really we are using column vectors,"},{"Start":"01:10.625 ","End":"01:13.130","Text":"but just for convenience,"},{"Start":"01:13.130 ","End":"01:16.865","Text":"we write them as row vectors sometimes."},{"Start":"01:16.865 ","End":"01:21.950","Text":"So u transpose is this,"},{"Start":"01:21.950 ","End":"01:26.825","Text":"but v is the column vector, and this matrix."},{"Start":"01:26.825 ","End":"01:29.375","Text":"How do we get this? Let me show you."},{"Start":"01:29.375 ","End":"01:33.800","Text":"So we look at the coefficients and if something\u0027s missing, I can put a one."},{"Start":"01:33.800 ","End":"01:35.990","Text":"Also fill in with zeros if"},{"Start":"01:35.990 ","End":"01:39.740","Text":"need be. X1, Y1."},{"Start":"01:39.740 ","End":"01:42.260","Text":"So I go to row one,"},{"Start":"01:42.260 ","End":"01:44.900","Text":"column one and put a one there."},{"Start":"01:44.900 ","End":"01:48.710","Text":"Here, row 1, column 2."},{"Start":"01:48.710 ","End":"01:50.810","Text":"I\u0027m sorry, that\u0027s a typo."},{"Start":"01:50.810 ","End":"01:53.180","Text":"This should be a two. Row one,"},{"Start":"01:53.180 ","End":"01:56.750","Text":"column two is a minus 3."},{"Start":"01:56.750 ","End":"02:02.060","Text":"Row two column one also minus 3 and row 2, column 2, 4."},{"Start":"02:02.060 ","End":"02:06.950","Text":"So this is how we get this matrix."},{"Start":"02:06.950 ","End":"02:09.380","Text":"And note that"},{"Start":"02:09.380 ","End":"02:13.685","Text":"it is symmetric, real and symmetric."},{"Start":"02:13.685 ","End":"02:18.125","Text":"Otherwise, our theorems don\u0027t work."},{"Start":"02:18.125 ","End":"02:24.755","Text":"So by the theorems and propositions that we learned,"},{"Start":"02:24.755 ","End":"02:29.810","Text":"all we have to check is if this matrix is positive definite or 
not."},{"Start":"02:29.810 ","End":"02:31.250","Text":"If it\u0027s positive definite,"},{"Start":"02:31.250 ","End":"02:34.460","Text":"this is an inner product and otherwise not."},{"Start":"02:34.460 ","End":"02:43.445","Text":"And the way we check for positive definite is using the method of the principle minors."},{"Start":"02:43.445 ","End":"02:46.190","Text":"We first take the determinant of this,"},{"Start":"02:46.190 ","End":"02:48.275","Text":"and then the determinant of this."},{"Start":"02:48.275 ","End":"02:52.520","Text":"We take square matrices starting with the top-left,"},{"Start":"02:52.520 ","End":"02:56.300","Text":"and we expect all of them to be positive."},{"Start":"02:56.300 ","End":"03:01.250","Text":"And then the matrix will be positive-definite."},{"Start":"03:01.250 ","End":"03:08.675","Text":"In our case, what we get is that this determinant is positive, just the one."},{"Start":"03:08.675 ","End":"03:10.775","Text":"But when we take this,"},{"Start":"03:10.775 ","End":"03:14.090","Text":"1 times 4 is 4."},{"Start":"03:14.090 ","End":"03:18.290","Text":"And then we subtract nine and we get minus 5,"},{"Start":"03:18.290 ","End":"03:21.770","Text":"which is negative, which is not positive."},{"Start":"03:21.770 ","End":"03:26.630","Text":"And so the matrix isn\u0027t positive-definite."},{"Start":"03:26.630 ","End":"03:35.310","Text":"And that means that our definition of a product is not an inner product. 
And we\u0027re done."}],"ID":10137},{"Watched":false,"Name":"Exercise 2","Duration":"5m 46s","ChapterTopicVideoID":10016,"CourseChapterTopicPlaylistID":7308,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:08.385","Text":"In this exercise, we have the vector space R2 and we\u0027re going to define a product on it."},{"Start":"00:08.385 ","End":"00:16.320","Text":"So if u and v are two general vectors in R2 with these components,"},{"Start":"00:16.320 ","End":"00:18.570","Text":"will define the product of u and v,"},{"Start":"00:18.570 ","End":"00:20.460","Text":"which are right in angular brackets."},{"Start":"00:20.460 ","End":"00:21.720","Text":"Let\u0027s call it the product."},{"Start":"00:21.720 ","End":"00:24.810","Text":"It means a function that takes two vectors and gives a scalar."},{"Start":"00:24.810 ","End":"00:27.165","Text":"So it\u0027s defined as follows."},{"Start":"00:27.165 ","End":"00:31.290","Text":"But notice that there is a parameter K in it."},{"Start":"00:31.290 ","End":"00:34.480","Text":"So this exercise is a bit different."},{"Start":"00:34.640 ","End":"00:43.789","Text":"And the question is to find out which values of k would make this thing an inner product,"},{"Start":"00:43.789 ","End":"00:46.595","Text":"not just the product."},{"Start":"00:46.595 ","End":"00:50.120","Text":"So that\u0027s the question."},{"Start":"00:50.120 ","End":"00:54.860","Text":"And we\u0027re going to tackle it using the concept of positive-definite matrix."},{"Start":"00:54.860 ","End":"01:02.540","Text":"The other route to go is to work with all four axioms or conditions for in a product."},{"Start":"01:02.540 ","End":"01:04.580","Text":"And that\u0027s too complicated."},{"Start":"01:04.580 ","End":"01:08.510","Text":"Okay, So the way we go about it with the 
positive-definite"},{"Start":"01:08.510 ","End":"01:13.070","Text":"is to rewrite this definition in matrix form."},{"Start":"01:13.070 ","End":"01:15.469","Text":"Remember these are really column vectors."},{"Start":"01:15.469 ","End":"01:20.915","Text":"So the actually you is transpose of the column vector."},{"Start":"01:20.915 ","End":"01:25.040","Text":"But V is, we just wrote it as a column vector."},{"Start":"01:25.040 ","End":"01:30.380","Text":"And these numbers, these coefficients,"},{"Start":"01:30.380 ","End":"01:32.780","Text":"elements of the matrix,"},{"Start":"01:32.780 ","End":"01:35.675","Text":"we get from here as follows."},{"Start":"01:35.675 ","End":"01:39.185","Text":"The 11 means first row, first column."},{"Start":"01:39.185 ","End":"01:40.700","Text":"And that\u0027s a one,"},{"Start":"01:40.700 ","End":"01:43.160","Text":"that\u0027s this first row."},{"Start":"01:43.160 ","End":"01:45.740","Text":"Second column is a minus 3."},{"Start":"01:45.740 ","End":"01:48.005","Text":"Second row, first column,"},{"Start":"01:48.005 ","End":"01:52.530","Text":"minus 3, second row, second column."},{"Start":"01:53.140 ","End":"02:00.695","Text":"In this exercise, we\u0027re going to define a product on the vector space R2 as follows."},{"Start":"02:00.695 ","End":"02:02.000","Text":"Call it a product."},{"Start":"02:02.000 ","End":"02:06.260","Text":"It just means a function that takes two vectors and gives a scalar."},{"Start":"02:06.260 ","End":"02:09.830","Text":"So it define it according to this formula."},{"Start":"02:09.830 ","End":"02:14.960","Text":"And notice that it has a parameter k in it."},{"Start":"02:14.960 ","End":"02:18.770","Text":"So that\u0027s makes it a bit unusual."},{"Start":"02:18.770 ","End":"02:20.375","Text":"Okay?"},{"Start":"02:20.375 ","End":"02:21.980","Text":"Now the question is,"},{"Start":"02:21.980 ","End":"02:27.095","Text":"for which values of this parameter k does the above,"},{"Start":"02:27.095 
","End":"02:33.365","Text":"meaning this definition, define an inner product on the vector space R2."},{"Start":"02:33.365 ","End":"02:35.090","Text":"It\u0027s just a plain product."},{"Start":"02:35.090 ","End":"02:36.785","Text":"To be an inner product,"},{"Start":"02:36.785 ","End":"02:42.740","Text":"it has to satisfy four conditions or axioms."},{"Start":"02:42.740 ","End":"02:46.280","Text":"But we\u0027re not going to use those because that\u0027s very,"},{"Start":"02:46.280 ","End":"02:50.570","Text":"very, not very, but it\u0027s quite long and complicated."},{"Start":"02:50.570 ","End":"02:53.750","Text":"And we have some theorems that make life easier."},{"Start":"02:53.750 ","End":"02:57.005","Text":"Remember with positive-definite matrix, okay?"},{"Start":"02:57.005 ","End":"02:59.795","Text":"So what we do is as follows."},{"Start":"02:59.795 ","End":"03:04.430","Text":"We rewrite this matrix form."},{"Start":"03:04.430 ","End":"03:09.320","Text":"And the way it works is as follows."},{"Start":"03:09.320 ","End":"03:12.140","Text":"Here we put, well,"},{"Start":"03:12.140 ","End":"03:14.975","Text":"this is actually u transpose."},{"Start":"03:14.975 ","End":"03:21.290","Text":"Just being pedantic, you would be the column matrix and V is the column matrix,"},{"Start":"03:21.290 ","End":"03:23.855","Text":"but for convenience rather than the row matrix."},{"Start":"03:23.855 ","End":"03:26.585","Text":"So strictly speaking, this is u transpose,"},{"Start":"03:26.585 ","End":"03:30.470","Text":"this is v, and this is the matrix a,"},{"Start":"03:30.470 ","End":"03:31.970","Text":"which we build as follows."},{"Start":"03:31.970 ","End":"03:34.235","Text":"We look at the coefficients here."},{"Start":"03:34.235 ","End":"03:35.990","Text":"In each case we have x something."},{"Start":"03:35.990 ","End":"03:38.315","Text":"Why something in a number in front."},{"Start":"03:38.315 ","End":"03:41.375","Text":"And like the first one,"},{"Start":"03:41.375 
","End":"03:44.420","Text":"don\u0027t forget that if you don\u0027t see anything, it\u0027s a one."},{"Start":"03:44.420 ","End":"03:48.740","Text":"So we put a one in the first row,"},{"Start":"03:48.740 ","End":"03:51.095","Text":"first column, that\u0027s this."},{"Start":"03:51.095 ","End":"03:55.190","Text":"Then we have minus three in the first row,"},{"Start":"03:55.190 ","End":"03:56.930","Text":"second column, that\u0027s this."},{"Start":"03:56.930 ","End":"03:58.460","Text":"Minus three in second row,"},{"Start":"03:58.460 ","End":"03:59.840","Text":"first column is this,"},{"Start":"03:59.840 ","End":"04:02.945","Text":"and k in second row, second column."},{"Start":"04:02.945 ","End":"04:05.735","Text":"And also remember if there\u0027s missing terms,"},{"Start":"04:05.735 ","End":"04:08.555","Text":"then fill them in with zeros."},{"Start":"04:08.555 ","End":"04:10.325","Text":"Here, nothing\u0027s missing."},{"Start":"04:10.325 ","End":"04:12.380","Text":"All four of them are present."},{"Start":"04:12.380 ","End":"04:15.260","Text":"And so we get this matrix,"},{"Start":"04:15.260 ","End":"04:17.210","Text":"That\u0027s our matrix a."},{"Start":"04:17.210 ","End":"04:19.985","Text":"Notice that it is symmetric."},{"Start":"04:19.985 ","End":"04:27.105","Text":"Just making a note of that because our theory only works for symmetric matrices."},{"Start":"04:27.105 ","End":"04:31.120","Text":"And basically the theory we learn says that the answer to"},{"Start":"04:31.120 ","End":"04:35.140","Text":"this question of if this is an inner product or not,"},{"Start":"04:35.140 ","End":"04:40.570","Text":"is the same as the answer to whether this matrix is positive definite or not."},{"Start":"04:40.570 ","End":"04:46.135","Text":"So what we\u0027re gonna do is check if a is positive definite."},{"Start":"04:46.135 ","End":"04:49.630","Text":"And we have this proposition that this is going to be positive"},{"Start":"04:49.630 
","End":"04:54.460","Text":"definite if the principle minors are all positive."},{"Start":"04:54.460 ","End":"04:57.280","Text":"So there are two determinants to compute,"},{"Start":"04:57.280 ","End":"04:59.950","Text":"and we have to have both of them being"},{"Start":"04:59.950 ","End":"05:04.420","Text":"positive in order for the matrix to be positive-definite."},{"Start":"05:04.420 ","End":"05:06.175","Text":"Here we have a number."},{"Start":"05:06.175 ","End":"05:09.795","Text":"The determinant of just one is one and that\u0027s positive."},{"Start":"05:09.795 ","End":"05:12.620","Text":"The next determinant as a parameter in it."},{"Start":"05:12.620 ","End":"05:17.390","Text":"So we just make this a condition on K that this should be positive."},{"Start":"05:17.390 ","End":"05:20.330","Text":"Now the determinant is this diagonals product"},{"Start":"05:20.330 ","End":"05:23.345","Text":"minus this diagonal product one times k is k."},{"Start":"05:23.345 ","End":"05:25.775","Text":"This times this is nine."},{"Start":"05:25.775 ","End":"05:30.740","Text":"So we need k minus nine."},{"Start":"05:30.740 ","End":"05:35.359","Text":"And that means that k is bigger than nine."},{"Start":"05:35.359 ","End":"05:39.154","Text":"And so we can say that for k bigger than nine,"},{"Start":"05:39.154 ","End":"05:46.500","Text":"our definition product is actually an inner product. 
That\u0027s the answer."}],"ID":10138},{"Watched":false,"Name":"Exercise 3","Duration":"4m 6s","ChapterTopicVideoID":10017,"CourseChapterTopicPlaylistID":7308,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:05.325","Text":"In this exercise, we\u0027re going to define a product on our three."},{"Start":"00:05.325 ","End":"00:07.965","Text":"Let\u0027s take two typical vectors."},{"Start":"00:07.965 ","End":"00:11.310","Text":"And we\u0027re going to define their product,"},{"Start":"00:11.310 ","End":"00:17.100","Text":"which we write with angular brackets and the comma as X1, Y1,"},{"Start":"00:17.100 ","End":"00:22.320","Text":"and just this formula basically, except that note,"},{"Start":"00:22.320 ","End":"00:29.999","Text":"it contains the parameter k. And the question is,"},{"Start":"00:29.999 ","End":"00:39.129","Text":"for what values of the parameter k does this product become an inner product on R3?"},{"Start":"00:39.129 ","End":"00:41.870","Text":"And as usual, in this type of exercise,"},{"Start":"00:41.870 ","End":"00:48.425","Text":"we\u0027re not going to do it from the definition and prove all the four axioms or conditions."},{"Start":"00:48.425 ","End":"00:53.030","Text":"We\u0027re going to do it using the concept of a positive definite matrix."},{"Start":"00:53.030 ","End":"00:55.475","Text":"And I\u0027m going through this a bit quicker than usual,"},{"Start":"00:55.475 ","End":"00:58.295","Text":"this model first such exercise."},{"Start":"00:58.295 ","End":"01:07.025","Text":"What we have to do is to copy the coefficients from here into a three-by-three matrix."},{"Start":"01:07.025 ","End":"01:09.500","Text":"So for example, first row,"},{"Start":"01:09.500 ","End":"01:11.570","Text":"first column, it\u0027s a one."},{"Start":"01:11.570 
","End":"01:14.255","Text":"So that\u0027s where I put a one."},{"Start":"01:14.255 ","End":"01:18.784","Text":"Then I see 13 means first row, third column,"},{"Start":"01:18.784 ","End":"01:23.840","Text":"I need a K. Then second row,"},{"Start":"01:23.840 ","End":"01:28.230","Text":"second column, it\u0027s a one would be here."},{"Start":"01:28.360 ","End":"01:35.555","Text":"Third row, first column is here, That\u0027s a k."},{"Start":"01:35.555 ","End":"01:41.195","Text":"And third row, third column, also a one."},{"Start":"01:41.195 ","End":"01:47.315","Text":"And anything that doesn\u0027t appear just becomes 0."},{"Start":"01:47.315 ","End":"01:55.940","Text":"Okay, So this is in matrix form what we get some space here."},{"Start":"01:55.940 ","End":"01:58.940","Text":"And by the theory we learned,"},{"Start":"01:58.940 ","End":"02:06.230","Text":"we\u0027re going to get an inner product if and only if matrix here is positive-definite."},{"Start":"02:06.230 ","End":"02:08.930","Text":"Remember the concept of principal minors."},{"Start":"02:08.930 ","End":"02:12.740","Text":"Take the determinant of this than the determinant of this two-by-two and then"},{"Start":"02:12.740 ","End":"02:17.870","Text":"this three-by-three lifter all positive, That\u0027s good."},{"Start":"02:17.870 ","End":"02:20.570","Text":"So we try the first determinant."},{"Start":"02:20.570 ","End":"02:23.975","Text":"That\u0027s just one bigger than 0, fine."},{"Start":"02:23.975 ","End":"02:26.015","Text":"Next 1001."},{"Start":"02:26.015 ","End":"02:30.184","Text":"The determinant of that is one which is bigger than 0."},{"Start":"02:30.184 ","End":"02:31.775","Text":"So again, positive."},{"Start":"02:31.775 ","End":"02:34.190","Text":"Because if we get an entry that\u0027s not positive,"},{"Start":"02:34.190 ","End":"02:36.905","Text":"we can stop right away."},{"Start":"02:36.905 ","End":"02:40.310","Text":"Now the last one, which is this."},{"Start":"02:40.310 
","End":"02:44.870","Text":"So we can compute this by expanding it."},{"Start":"02:44.870 ","End":"02:46.955","Text":"Say on the middle row."},{"Start":"02:46.955 ","End":"02:55.205","Text":"We take this one and erase the row and column belongs to."},{"Start":"02:55.205 ","End":"02:57.500","Text":"We\u0027ll also have a matter of a sign."},{"Start":"02:57.500 ","End":"02:58.880","Text":"Remember the checkerboard plus,"},{"Start":"02:58.880 ","End":"03:01.340","Text":"minus, plus, minus, plus."},{"Start":"03:01.340 ","End":"03:06.515","Text":"So we have plus for the sign and then a one for the entry."},{"Start":"03:06.515 ","End":"03:09.950","Text":"And then determinant of a two-by-two that\u0027s left,"},{"Start":"03:09.950 ","End":"03:14.225","Text":"which is 1 k, k one."},{"Start":"03:14.225 ","End":"03:23.720","Text":"And this comes out to 1 minus k squared k."},{"Start":"03:23.720 ","End":"03:27.770","Text":"So for all three of these principal minors to be positive,"},{"Start":"03:27.770 ","End":"03:31.280","Text":"we have to have one minus k squared positive."},{"Start":"03:31.280 ","End":"03:34.100","Text":"This is positive for any k and this is"},{"Start":"03:34.100 ","End":"03:37.400","Text":"positive or any kth are really just the condition on here."},{"Start":"03:37.400 ","End":"03:44.405","Text":"And this quadratic inequality is equivalent to k squared less than 1."},{"Start":"03:44.405 ","End":"03:46.820","Text":"And there\u0027s many ways of solving this."},{"Start":"03:46.820 ","End":"03:51.455","Text":"You should get this result that K is between minus 11."},{"Start":"03:51.455 ","End":"03:55.160","Text":"And that is our answer really."},{"Start":"03:55.160 ","End":"04:00.590","Text":"And so we say that for k between minus 11,"},{"Start":"04:00.590 ","End":"04:06.900","Text":"our definition is really an inner product. 
And we\u0027re done."}],"ID":10139},{"Watched":false,"Name":"Exercise 4","Duration":"12m 7s","ChapterTopicVideoID":10018,"CourseChapterTopicPlaylistID":7308,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.200","Text":"This exercise is a bit more theoretical,"},{"Start":"00:04.200 ","End":"00:07.360","Text":"but it\u0027s not too difficult."},{"Start":"00:07.640 ","End":"00:13.830","Text":"We\u0027re talking about the space RN now and the general and could be R2,"},{"Start":"00:13.830 ","End":"00:17.145","Text":"R3, R4 are 10, anything."},{"Start":"00:17.145 ","End":"00:23.115","Text":"And we\u0027re defining a product where we take any two vectors,"},{"Start":"00:23.115 ","End":"00:29.415","Text":"u and v, with components x1 through xn and Y1 through YN."},{"Start":"00:29.415 ","End":"00:32.850","Text":"Eclipse I can u1 to un, v1 to vn."},{"Start":"00:32.850 ","End":"00:34.740","Text":"I like to vary the letters here."},{"Start":"00:34.740 ","End":"00:37.679","Text":"Don\u0027t get too used to one kind of notation."},{"Start":"00:37.679 ","End":"00:43.370","Text":"And the product we define is u,"},{"Start":"00:43.370 ","End":"00:49.190","Text":"v equals the sum from one to n of k i,"},{"Start":"00:49.190 ","End":"00:51.500","Text":"x i, y i,"},{"Start":"00:51.500 ","End":"00:54.170","Text":"and k1 through kN."},{"Start":"00:54.170 ","End":"00:59.910","Text":"Those are the different KPIs there, any positive numbers."},{"Start":"00:59.950 ","End":"01:02.240","Text":"But fixed."},{"Start":"01:02.240 ","End":"01:05.285","Text":"We have to show that the above definition,"},{"Start":"01:05.285 ","End":"01:09.200","Text":"this one here actually is an inner product on"},{"Start":"01:09.200 ","End":"01:13.970","Text":"R n. 
And then there\u0027s an additional question."},{"Start":"01:13.970 ","End":"01:20.670","Text":"What do we get if k i equals 1 for all i from one to n?"},{"Start":"01:20.800 ","End":"01:23.360","Text":"And because this is so abstract,"},{"Start":"01:23.360 ","End":"01:29.090","Text":"let\u0027s take an example of a specific n equals four."},{"Start":"01:29.090 ","End":"01:31.490","Text":"And let\u0027s take k1, k2,"},{"Start":"01:31.490 ","End":"01:34.040","Text":"k3, k4 as follows."},{"Start":"01:34.040 ","End":"01:37.955","Text":"We\u0027ll take k1 through k4 to be 10, 2, 3, 4."},{"Start":"01:37.955 ","End":"01:47.660","Text":"And then we can say specifically that this product is, first of all on the abstract level,"},{"Start":"01:47.660 ","End":"01:50.105","Text":"k1 x1 y1 plus k2 x2"},{"Start":"01:50.105 ","End":"01:52.880","Text":"y2, up to k4 x4 y4,"},{"Start":"01:52.880 ","End":"01:55.790","Text":"and if we spell it out with the k1,"},{"Start":"01:55.790 ","End":"01:58.265","Text":"k2, k3, k4 that we have here."},{"Start":"01:58.265 ","End":"02:00.410","Text":"This is what we get."},{"Start":"02:00.410 ","End":"02:07.340","Text":"So this is our product and is this an inner product?"},{"Start":"02:07.340 ","End":"02:09.380","Text":"But before we tackle that,"},{"Start":"02:09.380 ","End":"02:12.650","Text":"I think we can easily answer this one now,"},{"Start":"02:12.650 ","End":"02:16.760","Text":"if all the k\u0027s are equal to one,"},{"Start":"02:16.760 ","End":"02:20.000","Text":"for example, here we would just get x1"},{"Start":"02:20.000 ","End":"02:23.000","Text":"y1 plus x2 y2 plus x3"},{"Start":"02:23.000 ","End":"02:27.140","Text":"y3 plus x4 y4; all these numbers would be replaced by one."},{"Start":"02:27.140 ","End":"02:30.095","Text":"And in general, it would be,"},{"Start":"02:30.095 ","End":"02:32.990","Text":"instead of this definition here,"},{"Start":"02:32.990 ","End":"02:34.700","Text":"replace k i by one."},{"Start":"02:34.700 
","End":"02:36.965","Text":"It\u0027s like throwing it out."},{"Start":"02:36.965 ","End":"02:40.700","Text":"And this is what we call the standard inner product."},{"Start":"02:40.700 ","End":"02:43.685","Text":"So that\u0027s the answer to this."},{"Start":"02:43.685 ","End":"02:45.530","Text":"If, if all the k\u0027s a one,"},{"Start":"02:45.530 ","End":"02:49.925","Text":"it\u0027s the standard inner product and we know that it is an inner product."},{"Start":"02:49.925 ","End":"02:52.130","Text":"In other areas of math,"},{"Start":"02:52.130 ","End":"02:53.990","Text":"this is also called the dot-product."},{"Start":"02:53.990 ","End":"02:54.800","Text":"You may have heard of it."},{"Start":"02:54.800 ","End":"02:56.790","Text":"If not, don\u0027t worry."},{"Start":"02:56.800 ","End":"03:00.905","Text":"I\u0027m going to solve this in two different ways."},{"Start":"03:00.905 ","End":"03:04.130","Text":"One of them using the definition of the inner product,"},{"Start":"03:04.130 ","End":"03:05.765","Text":"remember we have to show,"},{"Start":"03:05.765 ","End":"03:13.685","Text":"we have to show that it satisfies four axioms or conditions that will be the longer one."},{"Start":"03:13.685 ","End":"03:15.800","Text":"And then we have the after that,"},{"Start":"03:15.800 ","End":"03:20.510","Text":"I\u0027ll show you the other method with positive definite matrices."},{"Start":"03:20.510 ","End":"03:26.120","Text":"Okay, So let\u0027s start on the proof using the definition."},{"Start":"03:26.120 ","End":"03:27.665","Text":"There are four things to show."},{"Start":"03:27.665 ","End":"03:31.355","Text":"The first one is what we called symmetry."},{"Start":"03:31.355 ","End":"03:33.830","Text":"That doesn\u0027t matter the order."},{"Start":"03:33.830 ","End":"03:38.375","Text":"The product of u and v is the same as the product of v and u."},{"Start":"03:38.375 ","End":"03:47.285","Text":"Now the left-hand side UV is the sum of KI XII for reverse the 
order."},{"Start":"03:47.285 ","End":"03:50.765","Text":"Then we\u0027re just switching the roles of the x\u0027s and the y\u0027s."},{"Start":"03:50.765 ","End":"03:54.140","Text":"So what we\u0027ll get is KI x,"},{"Start":"03:54.140 ","End":"03:58.040","Text":"xi sum from one to n. And I think it\u0027s pretty"},{"Start":"03:58.040 ","End":"04:02.660","Text":"clear that these two sums are equal because each corresponding term is equal,"},{"Start":"04:02.660 ","End":"04:06.620","Text":"because X i, y i is she."},{"Start":"04:06.620 ","End":"04:11.750","Text":"Now number two, which is what we called homogeneity,"},{"Start":"04:11.750 ","End":"04:16.220","Text":"which relates to where we put a constant."},{"Start":"04:16.220 ","End":"04:20.765","Text":"We can multiply a constant by the first argument,"},{"Start":"04:20.765 ","End":"04:25.250","Text":"or we can multiply the result by the constant scalar."},{"Start":"04:25.250 ","End":"04:28.655","Text":"I should say alpha is a scalar and it won\u0027t make a difference."},{"Start":"04:28.655 ","End":"04:30.470","Text":"Let\u0027s see if that\u0027s true."},{"Start":"04:30.470 ","End":"04:32.180","Text":"For the left-hand side,"},{"Start":"04:32.180 ","End":"04:35.795","Text":"we first multiply the vector u by Alpha."},{"Start":"04:35.795 ","End":"04:38.660","Text":"Perhaps I skipped a step here."},{"Start":"04:38.660 ","End":"04:40.370","Text":"Maybe I should write it alpha,"},{"Start":"04:40.370 ","End":"04:48.884","Text":"you would be alpha x one, alpha x two."},{"Start":"04:48.884 ","End":"04:52.645","Text":"That up to x"},{"Start":"04:52.645 ","End":"05:00.745","Text":"n. 
And so the role of X I is replaced by alpha x i at each point."},{"Start":"05:00.745 ","End":"05:03.325","Text":"So when we take the product,"},{"Start":"05:03.325 ","End":"05:06.730","Text":"we now have the sum of KI and instead of x i,"},{"Start":"05:06.730 ","End":"05:09.910","Text":"we have alpha x I times y I."},{"Start":"05:09.910 ","End":"05:16.375","Text":"Now, we can take alpha out in front of each term,"},{"Start":"05:16.375 ","End":"05:18.745","Text":"but then we can take it out of the hole,"},{"Start":"05:18.745 ","End":"05:23.660","Text":"sigma, because a constant can come outside of the sigma."},{"Start":"05:24.000 ","End":"05:31.829","Text":"On the other hand, the right-hand side is alpha times."},{"Start":"05:31.829 ","End":"05:35.045","Text":"Uv, which is just this thing with an alpha in front."},{"Start":"05:35.045 ","End":"05:38.675","Text":"And so these two are equal."},{"Start":"05:38.675 ","End":"05:41.000","Text":"And so that\u0027s number two."},{"Start":"05:41.000 ","End":"05:47.915","Text":"And now number three involves three vectors, u, v, and w."},{"Start":"05:47.915 ","End":"05:50.615","Text":"We haven\u0027t given W."},{"Start":"05:50.615 ","End":"05:56.045","Text":"The names of the components will make that just w1 through wn."},{"Start":"05:56.045 ","End":"05:59.610","Text":"Sounds clear a bit of space here."},{"Start":"06:00.100 ","End":"06:02.450","Text":"For the left-hand side,"},{"Start":"06:02.450 ","End":"06:04.340","Text":"we add u plus v,"},{"Start":"06:04.340 ","End":"06:08.435","Text":"which means in each component getting x i plus y i."},{"Start":"06:08.435 ","End":"06:12.335","Text":"And then we take KI times that times wi,"},{"Start":"06:12.335 ","End":"06:17.795","Text":"summation by the distributive law."},{"Start":"06:17.795 ","End":"06:25.445","Text":"That\u0027s, we can multiply the Wi inside here."},{"Start":"06:25.445 ","End":"06:31.790","Text":"And the same with the KI. 
Basically this becomes k i x i w i,"},{"Start":"06:31.790 ","End":"06:35.525","Text":"and this one becomes k i y i w i."},{"Start":"06:35.525 ","End":"06:39.469","Text":"And then we can also split the sum up into two."},{"Start":"06:39.469 ","End":"06:42.035","Text":"There\u0027s an extra step missing here."},{"Start":"06:42.035 ","End":"06:48.060","Text":"I just throw the k i in and then we get this sum plus this."},{"Start":"06:48.910 ","End":"06:51.080","Text":"For the right-hand side,"},{"Start":"06:51.080 ","End":"06:53.240","Text":"I do each of these separately."},{"Start":"06:53.240 ","End":"06:57.350","Text":"u with w gives me the sum of k i, x i,"},{"Start":"06:57.350 ","End":"07:03.185","Text":"w i, and v with w just has y i instead of the x i."},{"Start":"07:03.185 ","End":"07:06.710","Text":"And this plus this gives us what we already have here."},{"Start":"07:06.710 ","End":"07:11.554","Text":"So that\u0027s number 3 proved."},{"Start":"07:11.554 ","End":"07:13.865","Text":"And finally, number four,"},{"Start":"07:13.865 ","End":"07:18.875","Text":"we have to show that the product of each vector with itself is bigger or equal to 0,"},{"Start":"07:18.875 ","End":"07:25.849","Text":"with the equals 0 occurring only when the vector u is 0."},{"Start":"07:25.849 ","End":"07:30.560","Text":"And the product of u with itself is the sum of k i, x i,"},{"Start":"07:30.560 ","End":"07:34.714","Text":"x i, which means x i squared."},{"Start":"07:34.714 ","End":"07:38.630","Text":"Now, here\u0027s where it\u0027s important to note that"},{"Start":"07:38.630 ","End":"07:42.965","Text":"the k i\u0027s are all positive, otherwise this reasoning wouldn\u0027t work."},{"Start":"07:42.965 ","End":"07:45.770","Text":"x i squared is non-negative,"},{"Start":"07:45.770 ","End":"07:47.435","Text":"it\u0027s positive or 0,"},{"Start":"07:47.435 ","End":"07:49.250","Text":"and the k i is positive,"},{"Start":"07:49.250 ","End":"07:52.415","Text":"so each term is 
non-negative."},{"Start":"07:52.415 ","End":"07:56.615","Text":"And when I add up non-negative stuff, still non-negative."},{"Start":"07:56.615 ","End":"08:01.580","Text":"Now the only way I can get 0 here,"},{"Start":"08:01.580 ","End":"08:06.845","Text":"since the k i\u0027s are strictly positive is for all the x i\u0027s to be 0."},{"Start":"08:06.845 ","End":"08:09.545","Text":"And if all the exercises 0,"},{"Start":"08:09.545 ","End":"08:17.435","Text":"then the vector u is the 0 vector because u is made up of the excise in a row or column."},{"Start":"08:17.435 ","End":"08:23.810","Text":"So that\u0027s number 4 and that it for method one proof."},{"Start":"08:23.810 ","End":"08:29.945","Text":"Remember that I\u0027m going to do it the other way with positive, positive definite matrices."},{"Start":"08:29.945 ","End":"08:34.205","Text":"And I\u0027ll give it a title alternate solution."},{"Start":"08:34.205 ","End":"08:38.345","Text":"Now, when we go to matrix form,"},{"Start":"08:38.345 ","End":"08:40.160","Text":"like in the previous examples,"},{"Start":"08:40.160 ","End":"08:46.850","Text":"what we did is we took a matrix and we assume it\u0027s all zeros."},{"Start":"08:46.850 ","End":"08:54.110","Text":"We put stuff in place of the zeros according to the rule where if we see x i,"},{"Start":"08:54.110 ","End":"08:58.235","Text":"y i, o in general x something, why something?"},{"Start":"08:58.235 ","End":"09:00.170","Text":"This would be the row number."},{"Start":"09:00.170 ","End":"09:03.440","Text":"This is the column number and this is the entry put there."},{"Start":"09:03.440 ","End":"09:07.940","Text":"That means in the ith row and the ith column we put KI."},{"Start":"09:07.940 ","End":"09:10.040","Text":"So first row, first column,"},{"Start":"09:10.040 ","End":"09:12.020","Text":"we put K1, second row,"},{"Start":"09:12.020 ","End":"09:17.255","Text":"second column, K2 up to nth row and jth column we put kn."},{"Start":"09:17.255 
","End":"09:21.590","Text":"If you think about that, it just means that we have k1 through kN on the diagonal."},{"Start":"09:21.590 ","End":"09:25.790","Text":"And only thing that\u0027s not mentioned here is just all zeros."},{"Start":"09:25.790 ","End":"09:29.630","Text":"So this is the matrix that we get."},{"Start":"09:29.630 ","End":"09:35.435","Text":"And this we have to now show is positive definite."},{"Start":"09:35.435 ","End":"09:39.120","Text":"And then that will be an inner product."},{"Start":"09:39.460 ","End":"09:44.585","Text":"Let\u0027s go back to our example that we had earlier because it\u0027s getting a bit abstract."},{"Start":"09:44.585 ","End":"09:46.940","Text":"So in our case,"},{"Start":"09:46.940 ","End":"09:50.300","Text":"with the, you can look back,"},{"Start":"09:50.300 ","End":"09:52.070","Text":"I suggest you rewind."},{"Start":"09:52.070 ","End":"09:57.470","Text":"If you don\u0027t remember, we had the k i\u0027s were 10,"},{"Start":"09:57.470 ","End":"09:59.210","Text":"2, 3, and 4."},{"Start":"09:59.210 ","End":"10:08.475","Text":"So these go on the diagonal and the matrix will be positive-definite."},{"Start":"10:08.475 ","End":"10:11.530","Text":"If the principle minors are all positive,"},{"Start":"10:11.530 ","End":"10:13.645","Text":"remember what the principle minors are."},{"Start":"10:13.645 ","End":"10:16.520","Text":"These determinants."},{"Start":"10:17.070 ","End":"10:19.255","Text":"For them."},{"Start":"10:19.255 ","End":"10:24.925","Text":"This determinant, this determinant each time of a square matrix."},{"Start":"10:24.925 ","End":"10:26.680","Text":"And if they\u0027re all positive,"},{"Start":"10:26.680 ","End":"10:31.780","Text":"then this matrix is positive definite matrix."},{"Start":"10:31.780 ","End":"10:33.790","Text":"If we happen to hit a negative one,"},{"Start":"10:33.790 ","End":"10:36.340","Text":"we can stop right there and say that it\u0027s not."},{"Start":"10:36.340 
","End":"10:37.855","Text":"Now."},{"Start":"10:37.855 ","End":"10:41.169","Text":"In, in this case because they\u0027re all positive,"},{"Start":"10:41.169 ","End":"10:44.965","Text":"it\u0027s fairly clear that these determinants are all going to be positive."},{"Start":"10:44.965 ","End":"10:50.450","Text":"And let\u0027s just keep going with this particular example."},{"Start":"10:50.940 ","End":"10:54.370","Text":"We have four principal minors,"},{"Start":"10:54.370 ","End":"10:56.125","Text":"just the 10 and 10 2."},{"Start":"10:56.125 ","End":"11:01.915","Text":"Now the determinant of each of these is the product of the main diagonal."},{"Start":"11:01.915 ","End":"11:04.570","Text":"That\u0027s how it works with diagonal matrix is 10,"},{"Start":"11:04.570 ","End":"11:08.440","Text":"this is 20, 60."},{"Start":"11:08.440 ","End":"11:11.875","Text":"I don\u0027t know what, 240 anyway,"},{"Start":"11:11.875 ","End":"11:15.820","Text":"positive because each of them is a product of positive numbers."},{"Start":"11:15.820 ","End":"11:18.460","Text":"Now this is not just in our case,"},{"Start":"11:18.460 ","End":"11:24.820","Text":"but we in general know that all the k i\u0027s k1 through kN positive."},{"Start":"11:24.820 ","End":"11:31.750","Text":"So any square matrix I take and take the determinant of it."},{"Start":"11:31.750 ","End":"11:35.240","Text":"It\u0027s going to be k1 up to some,"},{"Start":"11:35.240 ","End":"11:39.859","Text":"I don\u0027t know KI, it\u0027s going to be some product of positive numbers."},{"Start":"11:39.859 ","End":"11:47.720","Text":"So what I said here works in general just that we have to use...gov k1,"},{"Start":"11:47.720 ","End":"11:50.840","Text":"k1, k2, and so on."},{"Start":"11:50.840 ","End":"11:54.470","Text":"Up to the last one will be k1 through kN,"},{"Start":"11:54.470 ","End":"11:58.770","Text":"and that\u0027s just the product of k1 to kn, just positive."},{"Start":"11:58.840 ","End":"12:04.115","Text":"Okay, 
that was pretty hard work,"},{"Start":"12:04.115 ","End":"12:07.650","Text":"but we are finally done."}],"ID":10140},{"Watched":false,"Name":"Exercise 5","Duration":"13m 48s","ChapterTopicVideoID":10015,"CourseChapterTopicPlaylistID":7308,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.710","Text":"This exercise is a bit different to the ones we\u0027ve done up to now so far,"},{"Start":"00:04.710 ","End":"00:07.665","Text":"we\u0027ve just been using the vector space RN,"},{"Start":"00:07.665 ","End":"00:09.645","Text":"like a2 or A3."},{"Start":"00:09.645 ","End":"00:11.835","Text":"And in this one,"},{"Start":"00:11.835 ","End":"00:17.055","Text":"vector space is the space of m by n real matrices."},{"Start":"00:17.055 ","End":"00:21.270","Text":"And we want to see if we can define an inner product here too."},{"Start":"00:21.270 ","End":"00:28.050","Text":"So let\u0027s start off by defining a product will take two general matrices,"},{"Start":"00:28.050 ","End":"00:34.830","Text":"a and B, and define their product to be the trace of B transpose times a."},{"Start":"00:34.830 ","End":"00:37.320","Text":"And I\u0027ll remind you in a moment,"},{"Start":"00:37.320 ","End":"00:41.830","Text":"What\u0027s a transpose and what\u0027s a trace and why this makes sense."},{"Start":"00:41.830 ","End":"00:47.270","Text":"Anyway, our task is to prove or check that"},{"Start":"00:47.270 ","End":"00:53.190","Text":"this product really is an inner product on this vector space."},{"Start":"00:53.260 ","End":"00:58.640","Text":"The elements of the space we can call the matrices and we can also call them vectors."},{"Start":"00:58.640 ","End":"01:00.870","Text":"Both will apply."},{"Start":"01:00.970 ","End":"01:03.020","Text":"To clarify things."},{"Start":"01:03.020 
","End":"01:04.430","Text":"Let\u0027s take an example."},{"Start":"01:04.430 ","End":"01:08.855","Text":"Let\u0027s take three by two matrices."},{"Start":"01:08.855 ","End":"01:13.730","Text":"And let me take an example where a is this and B is this,"},{"Start":"01:13.730 ","End":"01:15.230","Text":"three rows, two columns,"},{"Start":"01:15.230 ","End":"01:17.539","Text":"so that\u0027s the right size."},{"Start":"01:17.539 ","End":"01:23.135","Text":"Now let\u0027s apply the definition of a product to our a and b."},{"Start":"01:23.135 ","End":"01:26.285","Text":"We need to trace the moment."},{"Start":"01:26.285 ","End":"01:33.524","Text":"The transpose of B is what happens when we take b and we switch rows and columns."},{"Start":"01:33.524 ","End":"01:37.025","Text":"The first column is the first row,"},{"Start":"01:37.025 ","End":"01:39.710","Text":"second column is the second row,"},{"Start":"01:39.710 ","End":"01:42.980","Text":"or the other way around first row is the first column."},{"Start":"01:42.980 ","End":"01:47.150","Text":"This is B transpose times a."},{"Start":"01:47.150 ","End":"01:49.460","Text":"Now, are these the right size?"},{"Start":"01:49.460 ","End":"01:53.660","Text":"Yeah, I can multiply a two by three with a three by two."},{"Start":"01:53.660 ","End":"01:58.445","Text":"For example. 
Let\u0027s just do the first one."},{"Start":"01:58.445 ","End":"02:04.010","Text":"This first row with the first column should give us the entry here."},{"Start":"02:04.010 ","End":"02:04.970","Text":"Let\u0027s check if that is."},{"Start":"02:04.970 ","End":"02:09.260","Text":"So 5 times 0 is 0,"},{"Start":"02:09.260 ","End":"02:10.520","Text":"3 times 2 is 6,"},{"Start":"02:10.520 ","End":"02:14.435","Text":"0 times 4 0, 0 and 1600 is six."},{"Start":"02:14.435 ","End":"02:19.385","Text":"And you can check the other three are perhaps I\u0027ll do another one."},{"Start":"02:19.385 ","End":"02:22.520","Text":"Let\u0027s see what is in the second row, second column."},{"Start":"02:22.520 ","End":"02:23.675","Text":"So that\u0027s this."},{"Start":"02:23.675 ","End":"02:31.850","Text":"With this, that gives me 0 times 1 plus 1 times 3 plus 2 times 0 gives me the three."},{"Start":"02:31.850 ","End":"02:34.445","Text":"Now in practice, I wouldn\u0027t even bother"},{"Start":"02:34.445 ","End":"02:37.835","Text":"computing the other two because I know I\u0027m going to take the trace."},{"Start":"02:37.835 ","End":"02:39.800","Text":"And now I\u0027m going to remind you what Traces is"},{"Start":"02:39.800 ","End":"02:42.229","Text":"the sum of the elements along the diagonal."},{"Start":"02:42.229 ","End":"02:45.185","Text":"So if we\u0027re only going to be carrying about the diagonal,"},{"Start":"02:45.185 ","End":"02:49.010","Text":"in practice, I wouldn\u0027t have computed the 14 and the 10."},{"Start":"02:49.010 ","End":"02:51.485","Text":"Anyway, six plus three is nine."},{"Start":"02:51.485 ","End":"02:55.460","Text":"And that\u0027s a real number, it\u0027s a scalar."},{"Start":"02:55.460 ","End":"02:58.790","Text":"So I just showed that we take two vectors,"},{"Start":"02:58.790 ","End":"03:02.840","Text":"which in this case our matrices and their product gives us a scalar."},{"Start":"03:02.840 ","End":"03:06.335","Text":"And that\u0027s part of the 
definition of an inner product."},{"Start":"03:06.335 ","End":"03:15.755","Text":"We still have to do is show for properties or axioms conditions."},{"Start":"03:15.755 ","End":"03:18.020","Text":"Let me note, I mean,"},{"Start":"03:18.020 ","End":"03:20.570","Text":"he wasn\u0027t a fluke that we got a square matrix here."},{"Start":"03:20.570 ","End":"03:21.860","Text":"We\u0027d have to get a square matrix."},{"Start":"03:21.860 ","End":"03:25.295","Text":"We take the trace because if a is m by n,"},{"Start":"03:25.295 ","End":"03:28.265","Text":"and you can just follow this with three by two."},{"Start":"03:28.265 ","End":"03:33.410","Text":"And then B transpose would be the reverse n by m."},{"Start":"03:33.410 ","End":"03:37.784","Text":"So if I multiply this with this,"},{"Start":"03:37.784 ","End":"03:43.360","Text":"n by m multiplied by m by n will be an n by n,"},{"Start":"03:43.360 ","End":"03:45.370","Text":"like here, it\u0027s a two-by-two."},{"Start":"03:45.370 ","End":"03:46.750","Text":"And so it\u0027s square."},{"Start":"03:46.750 ","End":"03:50.305","Text":"And a square matrix has a trace because it has a diagonal."},{"Start":"03:50.305 ","End":"03:52.059","Text":"Okay?"},{"Start":"03:52.059 ","End":"03:59.215","Text":"So to continue, we need to remember the properties of the trace."},{"Start":"03:59.215 ","End":"04:03.190","Text":"Will be using three properties of the trace."},{"Start":"04:03.190 ","End":"04:10.195","Text":"And after that, we\u0027ll go ahead and prove the four axioms."},{"Start":"04:10.195 ","End":"04:16.615","Text":"Okay, first property, the trace of the sum is the sum of the traces."},{"Start":"04:16.615 ","End":"04:21.940","Text":"Second property, if I take a matrix,"},{"Start":"04:21.940 ","End":"04:24.310","Text":"multiply it by a scalar,"},{"Start":"04:24.310 ","End":"04:26.485","Text":"and take the trace of that."},{"Start":"04:26.485 ","End":"04:30.655","Text":"It\u0027s k times the trace of x."},{"Start":"04:30.655 
","End":"04:34.450","Text":"Put a dot here to separate the k from the TR,"},{"Start":"04:34.450 ","End":"04:36.940","Text":"k times the trace of x."},{"Start":"04:36.940 ","End":"04:44.740","Text":"And the last property is if I take the trace of the transpose of a matrix,"},{"Start":"04:44.740 ","End":"04:47.890","Text":"it\u0027s the same as the transpose of the matrix."},{"Start":"04:47.890 ","End":"04:49.990","Text":"So those are the three properties."},{"Start":"04:49.990 ","End":"04:55.780","Text":"And as we start the real work of showing that our definition of"},{"Start":"04:55.780 ","End":"05:03.150","Text":"product really is an inner product and it satisfies the four axioms for in a product."},{"Start":"05:03.150 ","End":"05:06.590","Text":"The first axiom is what we called symmetry,"},{"Start":"05:06.590 ","End":"05:08.870","Text":"that the order doesn\u0027t matter."},{"Start":"05:08.870 ","End":"05:10.985","Text":"Though we did it with vectors."},{"Start":"05:10.985 ","End":"05:13.760","Text":"We consider matrices here as vectors."},{"Start":"05:13.760 ","End":"05:17.255","Text":"So we want to show that this product we defined,"},{"Start":"05:17.255 ","End":"05:19.955","Text":"we take a with B,"},{"Start":"05:19.955 ","End":"05:23.340","Text":"it\u0027s the same as B with a."},{"Start":"05:23.710 ","End":"05:28.145","Text":"And the next property called homogeneity."},{"Start":"05:28.145 ","End":"05:30.740","Text":"It\u0027s really part of the linearity."},{"Start":"05:30.740 ","End":"05:40.205","Text":"Me means or says that if we have two vectors or matrices in this case and a scalar,"},{"Start":"05:40.205 ","End":"05:46.595","Text":"it doesn\u0027t matter if we multiply the scalar by the first matrix and then do the product,"},{"Start":"05:46.595 ","End":"05:48.935","Text":"or first to the product,"},{"Start":"05:48.935 ","End":"05:52.370","Text":"and then multiply by the scalar K."},{"Start":"05:52.370 ","End":"05:55.520","Text":"Then there\u0027s the 
linearity."},{"Start":"05:55.520 ","End":"06:00.065","Text":"This one we have three matrices."},{"Start":"06:00.065 ","End":"06:07.760","Text":"What we say is that if I add a plus b and then take the product of that with C,"},{"Start":"06:07.760 ","End":"06:13.860","Text":"it\u0027s the same as taking the product of a with C and B with C and adding them."},{"Start":"06:13.860 ","End":"06:16.030","Text":"And the last axiom,"},{"Start":"06:16.030 ","End":"06:18.280","Text":"sometimes called positive definiteness,"},{"Start":"06:18.280 ","End":"06:22.300","Text":"is that for any matrix,"},{"Start":"06:22.300 ","End":"06:28.675","Text":"the product of a with itself is always bigger or equal to 0."},{"Start":"06:28.675 ","End":"06:35.620","Text":"And the 0 case occurring only if the matrix is 0."},{"Start":"06:35.620 ","End":"06:38.930","Text":"When I say 0, the 0 matrix."},{"Start":"06:39.480 ","End":"06:43.150","Text":"Okay, So far we\u0027ve just presented"},{"Start":"06:43.150 ","End":"06:48.355","Text":"everything now we actually have to do the work of showing 1234."},{"Start":"06:48.355 ","End":"06:53.155","Text":"Okay, we\u0027ll start with the first axiom symmetry."},{"Start":"06:53.155 ","End":"06:56.400","Text":"The product of a with B,"},{"Start":"06:56.400 ","End":"07:01.490","Text":"well defined to be the trace of should really put brackets here,"},{"Start":"07:01.490 ","End":"07:04.550","Text":"B transpose times a."},{"Start":"07:04.550 ","End":"07:09.230","Text":"And one of the properties of the trace was that the trace of"},{"Start":"07:09.230 ","End":"07:14.135","Text":"x transpose is the same as the trace of x."},{"Start":"07:14.135 ","End":"07:19.475","Text":"So I\u0027m applying it with x being this expression."},{"Start":"07:19.475 ","End":"07:22.115","Text":"So I can put a transpose here."},{"Start":"07:22.115 ","End":"07:28.085","Text":"Now the transpose of a product,"},{"Start":"07:28.085 ","End":"07:31.625","Text":"you reverse the order and take 
the transpose of each."},{"Start":"07:31.625 ","End":"07:39.270","Text":"So what we would get would be a transpose B transpose transpose."},{"Start":"07:39.270 ","End":"07:42.445","Text":"A transpose transpose brings it back to B."},{"Start":"07:42.445 ","End":"07:47.590","Text":"So we get this. And this is exactly the product of B with a."},{"Start":"07:47.590 ","End":"07:51.880","Text":"If we look at the definition and just replace a with B and B with a is what we get."},{"Start":"07:51.880 ","End":"07:56.275","Text":"So we\u0027ve shown the symmetry."},{"Start":"07:56.275 ","End":"08:02.830","Text":"The first axiom and next property number 2, the homogeneity."},{"Start":"08:02.830 ","End":"08:08.845","Text":"So we take KA product with b."},{"Start":"08:08.845 ","End":"08:17.000","Text":"And by definition, it\u0027s the trace of the second one transpose times the first one."},{"Start":"08:17.100 ","End":"08:21.659","Text":"Now when we have a product like this of matrices with a constant,"},{"Start":"08:21.659 ","End":"08:24.365","Text":"we can bring the constant to the front."},{"Start":"08:24.365 ","End":"08:29.165","Text":"I like to put this in brackets so the K comes in front of B transpose."},{"Start":"08:29.165 ","End":"08:34.070","Text":"And next we\u0027ll use the property that the trace of k times"},{"Start":"08:34.070 ","End":"08:39.890","Text":"a matrix X equals K times trace of x."},{"Start":"08:39.890 ","End":"08:43.370","Text":"So I can bring this K in front of the trace."},{"Start":"08:43.370 ","End":"08:50.600","Text":"But this pit after the k is exactly the definition of inner product of a with B."},{"Start":"08:50.600 ","End":"08:53.030","Text":"So if we look at the beginning and the end,"},{"Start":"08:53.030 ","End":"08:58.190","Text":"That\u0027s property your axiom number 2, taken care of."},{"Start":"08:58.190 ","End":"09:01.280","Text":"So let\u0027s move on to number three."},{"Start":"09:01.280 ","End":"09:04.265","Text":"And this is the 
linearity one."},{"Start":"09:04.265 ","End":"09:09.545","Text":"We start with the inner product of a plus B with C."},{"Start":"09:09.545 ","End":"09:17.430","Text":"And by definition it\u0027s trace of the second one transpose times the first one."},{"Start":"09:18.430 ","End":"09:22.220","Text":"Using the distributive law for matrices,"},{"Start":"09:22.220 ","End":"09:27.694","Text":"the C transpose I multiply by a and then by B with an addition."},{"Start":"09:27.694 ","End":"09:31.970","Text":"Then the trace of a sum is the sum of the traces."},{"Start":"09:31.970 ","End":"09:36.815","Text":"And this bit is the first term is exactly the product of"},{"Start":"09:36.815 ","End":"09:41.300","Text":"a with C because it\u0027s the second transpose times the first."},{"Start":"09:41.300 ","End":"09:44.240","Text":"And similarly, this is just the product of"},{"Start":"09:44.240 ","End":"09:47.630","Text":"B with C. And if we compare the first and the last,"},{"Start":"09:47.630 ","End":"09:49.535","Text":"that\u0027s what we have to show."},{"Start":"09:49.535 ","End":"09:52.970","Text":"Now we come to the fourth and last property."},{"Start":"09:52.970 ","End":"09:56.315","Text":"So let\u0027s let our matrix a,"},{"Start":"09:56.315 ","End":"10:00.260","Text":"which is m by n B as follows."},{"Start":"10:00.260 ","End":"10:05.194","Text":"We use double index subscript."},{"Start":"10:05.194 ","End":"10:13.070","Text":"Notice it goes one to m and here one through n. 
And just to remind you,"},{"Start":"10:13.070 ","End":"10:19.790","Text":"we\u0027re heading towards showing that product of a with a is non-negative."},{"Start":"10:19.790 ","End":"10:24.305","Text":"So what I need to do is compute a transpose a."},{"Start":"10:24.305 ","End":"10:25.835","Text":"Now look."},{"Start":"10:25.835 ","End":"10:29.765","Text":"This is exactly, this."},{"Start":"10:29.765 ","End":"10:31.595","Text":"Just copied this here."},{"Start":"10:31.595 ","End":"10:35.660","Text":"And I made a note, this is an m by n matrix."},{"Start":"10:35.660 ","End":"10:38.585","Text":"This is not the same."},{"Start":"10:38.585 ","End":"10:40.940","Text":"I reverse the rows and columns."},{"Start":"10:40.940 ","End":"10:47.495","Text":"Anyway, this is the transpose and this one is n rows by m columns."},{"Start":"10:47.495 ","End":"10:52.865","Text":"So I know the output is going to be an n-by-n matrix."},{"Start":"10:52.865 ","End":"10:57.440","Text":"And I only care about the entries along"},{"Start":"10:57.440 ","End":"11:02.899","Text":"the diagonal because the rest of them are not going to contribute to the trace."},{"Start":"11:02.899 ","End":"11:05.780","Text":"Let\u0027s take one of these entries."},{"Start":"11:05.780 ","End":"11:07.730","Text":"For example, this one."},{"Start":"11:07.730 ","End":"11:10.220","Text":"How do I get the entry,"},{"Start":"11:10.220 ","End":"11:12.095","Text":"second row, second column."},{"Start":"11:12.095 ","End":"11:16.279","Text":"I take the second row here, which is this,"},{"Start":"11:16.279 ","End":"11:21.695","Text":"and multiply it element-wise with the second column from here."},{"Start":"11:21.695 ","End":"11:28.145","Text":"Now if you look at this, what I get is a12 times a12 plus a2 times a2 2."},{"Start":"11:28.145 ","End":"11:30.140","Text":"So each element is going to be squared."},{"Start":"11:30.140 ","End":"11:32.840","Text":"It\u0027s this one square plus this 1 squared dot, 
dot,"},{"Start":"11:32.840 ","End":"11:36.079","Text":"dot up to this one squared."},{"Start":"11:36.079 ","End":"11:38.629","Text":"This will be an intermediate result."},{"Start":"11:38.629 ","End":"11:43.490","Text":"And now I\u0027m going to start from the product of a with a."},{"Start":"11:43.490 ","End":"11:45.860","Text":"Let me get some more space."},{"Start":"11:45.860 ","End":"11:52.040","Text":"By definition, this is the trace of a transpose times a."},{"Start":"11:52.040 ","End":"11:56.450","Text":"So we need the trace of this matrix,"},{"Start":"11:56.450 ","End":"11:59.270","Text":"which is c11 plus c22 and so on."},{"Start":"11:59.270 ","End":"12:00.950","Text":"Okay, so look here,"},{"Start":"12:00.950 ","End":"12:03.530","Text":"I wrote C11, C22,"},{"Start":"12:03.530 ","End":"12:05.465","Text":"and so on up to CNN."},{"Start":"12:05.465 ","End":"12:09.380","Text":"Now, the example we did with C22,"},{"Start":"12:09.380 ","End":"12:12.200","Text":"which is A12 squared,"},{"Start":"12:12.200 ","End":"12:15.410","Text":"plus a22 squared up to am2 squared."},{"Start":"12:15.410 ","End":"12:17.945","Text":"And if we did the first one,"},{"Start":"12:17.945 ","End":"12:20.900","Text":"it would be this with this,"},{"Start":"12:20.900 ","End":"12:23.330","Text":"this would give us this,"},{"Start":"12:23.330 ","End":"12:26.135","Text":"and that would be each one of these squared."},{"Start":"12:26.135 ","End":"12:27.920","Text":"So it\u0027s A11 squared,"},{"Start":"12:27.920 ","End":"12:29.570","Text":"a21 squared, da, da,"},{"Start":"12:29.570 ","End":"12:31.415","Text":"da, and so on."},{"Start":"12:31.415 ","End":"12:33.920","Text":"Up to the last one."},{"Start":"12:33.920 ","End":"12:39.275","Text":"I\u0027ll also highlight that this with this gives us this,"},{"Start":"12:39.275 ","End":"12:43.580","Text":"which is everything with an n in the second place squared,"},{"Start":"12:43.580 ","End":"12:44.720","Text":"a 1 n squared,"},{"Start":"12:44.720 
","End":"12:47.165","Text":"a2 n squared, and so on."},{"Start":"12:47.165 ","End":"12:54.470","Text":"Now, notice that this whole thing is just the sum of lots of squares."},{"Start":"12:54.470 ","End":"12:58.280","Text":"So obviously it\u0027s bigger or equal to 0."},{"Start":"12:58.280 ","End":"13:02.090","Text":"Now, when can I get equality here?"},{"Start":"13:02.090 ","End":"13:07.775","Text":"The only way to get equality is if these are all 0."},{"Start":"13:07.775 ","End":"13:10.610","Text":"If you look at it, these are all,"},{"Start":"13:10.610 ","End":"13:12.485","Text":"this is just the first column."},{"Start":"13:12.485 ","End":"13:14.480","Text":"This 0, 0, 0,"},{"Start":"13:14.480 ","End":"13:16.940","Text":"the first row here doesn\u0027t really matter."},{"Start":"13:16.940 ","End":"13:18.755","Text":"These are all 0,"},{"Start":"13:18.755 ","End":"13:21.920","Text":"these are all 0, these are all 0."},{"Start":"13:21.920 ","End":"13:26.075","Text":"I chose this one because this is our a here."},{"Start":"13:26.075 ","End":"13:28.730","Text":"So all the coefficients are 0,"},{"Start":"13:28.730 ","End":"13:32.090","Text":"which means that a is the 0 matrix."},{"Start":"13:32.090 ","End":"13:37.744","Text":"In other words, the product of a with itself is 0 if and only if a is the 0,"},{"Start":"13:37.744 ","End":"13:39.920","Text":"this is the 0 matrix,"},{"Start":"13:39.920 ","End":"13:41.840","Text":"this is 0, the number."},{"Start":"13:41.840 ","End":"13:46.490","Text":"And that proves the property for axiom four."},{"Start":"13:46.490 ","End":"13:49.050","Text":"And we\u0027re done."}],"ID":10141},{"Watched":false,"Name":"Exercise 6","Duration":"5m 20s","ChapterTopicVideoID":9635,"CourseChapterTopicPlaylistID":7308,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 
","End":"00:03.435","Text":"Here we have another exercise on inner products."},{"Start":"00:03.435 ","End":"00:07.949","Text":"Most of the exercises involve one of the vector spaces,"},{"Start":"00:07.949 ","End":"00:10.650","Text":"R^n, for n two or three or whatever."},{"Start":"00:10.650 ","End":"00:13.935","Text":"Here, it\u0027s different."},{"Start":"00:13.935 ","End":"00:16.425","Text":"We have this vector space,"},{"Start":"00:16.425 ","End":"00:22.990","Text":"which is the space of continuous functions on the closed interval a, b."},{"Start":"00:23.090 ","End":"00:27.030","Text":"So the functions are also vectors,"},{"Start":"00:27.030 ","End":"00:32.775","Text":"in a sense. On this space we want to define an inner product."},{"Start":"00:32.775 ","End":"00:35.260","Text":"So we start off with a product,"},{"Start":"00:35.260 ","End":"00:39.380","Text":"which means that we have to provide a function"},{"Start":"00:39.380 ","End":"00:43.910","Text":"that takes two vectors and gives us a scalar, or in this case two functions."},{"Start":"00:43.910 ","End":"00:49.070","Text":"And we define it as the integral from a to b of f of x, g of x dx."},{"Start":"00:49.070 ","End":"00:50.660","Text":"Now this is a definite integral."},{"Start":"00:50.660 ","End":"00:52.880","Text":"So the result is a number, a scalar."},{"Start":"00:52.880 ","End":"00:55.850","Text":"So from any two functions we get a scalar."},{"Start":"00:55.850 ","End":"00:57.380","Text":"So it\u0027s a product."},{"Start":"00:57.380 ","End":"01:00.500","Text":"But the question is if it\u0027s an inner product and for that,"},{"Start":"01:00.500 ","End":"01:05.975","Text":"we need to show that it verifies the four conditions or axioms."},{"Start":"01:05.975 ","End":"01:08.615","Text":"So let\u0027s start on that."},{"Start":"01:08.615 ","End":"01:12.665","Text":"I\u0027ll just rephrase them in the context of functions, since we\u0027ve seen them before."},{"Start":"01:12.665 
","End":"01:14.240","Text":"It\u0027s also a refresher."},{"Start":"01:14.240 ","End":"01:16.235","Text":"The first one is symmetry,"},{"Start":"01:16.235 ","End":"01:19.670","Text":"which means that the inner product of f with g is the"},{"Start":"01:19.670 ","End":"01:23.510","Text":"same as the inner product of g with f for any f and g,"},{"Start":"01:23.510 ","End":"01:25.070","Text":"The symbol is for all,"},{"Start":"01:25.070 ","End":"01:28.170","Text":"for all f and g in our vector space."},{"Start":"01:28.180 ","End":"01:32.735","Text":"The second one is sometimes called homogeneity."},{"Start":"01:32.735 ","End":"01:36.620","Text":"It involves a scalar K,"},{"Start":"01:36.620 ","End":"01:42.500","Text":"which says that if you multiply f by k and take the inner product,"},{"Start":"01:42.500 ","End":"01:43.940","Text":"or first take the inner product,"},{"Start":"01:43.940 ","End":"01:45.320","Text":"multiply the result by k,"},{"Start":"01:45.320 ","End":"01:47.435","Text":"you get the same thing."},{"Start":"01:47.435 ","End":"01:50.975","Text":"The third axiom is a kind of linearity,"},{"Start":"01:50.975 ","End":"01:59.075","Text":"which says that if I take the inner product of a sum f plus g with h,"},{"Start":"01:59.075 ","End":"02:06.539","Text":"It\u0027s the same as doing f with h and g with h separately and then adding the result."},{"Start":"02:06.700 ","End":"02:12.665","Text":"The fourth axiom says that for any f in our space,"},{"Start":"02:12.665 ","End":"02:18.200","Text":"the inner product of f with itself is bigger or equal to 0, and 0"},{"Start":"02:18.200 ","End":"02:23.990","Text":"only in the case when f is the 0 function,"},{"Start":"02:23.990 ","End":"02:26.855","Text":"the function that\u0027s always 0."},{"Start":"02:26.855 ","End":"02:30.260","Text":"Okay, so now we have to check these four."},{"Start":"02:30.260 ","End":"02:33.350","Text":"So let\u0027s start with number one."},{"Start":"02:33.350 
","End":"02:35.465","Text":"Which we called symmetry,"},{"Start":"02:35.465 ","End":"02:41.345","Text":"product of f with g is defined to be the integral from a to b."},{"Start":"02:41.345 ","End":"02:42.830","Text":"I should really say f of x,"},{"Start":"02:42.830 ","End":"02:44.915","Text":"g of x dx."},{"Start":"02:44.915 ","End":"02:49.580","Text":"And since multiplication is commutative, the order doesn\u0027t matter."},{"Start":"02:49.580 ","End":"02:52.250","Text":"F times g is g times f."},{"Start":"02:52.250 ","End":"02:56.825","Text":"And this is exactly the definition of the inner product of g with f,"},{"Start":"02:56.825 ","End":"02:59.840","Text":"g and f just reverse roles."},{"Start":"02:59.840 ","End":"03:02.510","Text":"And on to number two."},{"Start":"03:02.510 ","End":"03:08.480","Text":"Let\u0027s put the k in front of f. The inner product is the integral"},{"Start":"03:08.480 ","End":"03:14.210","Text":"from a to b of the function k f times g. By the properties of integration,"},{"Start":"03:14.210 ","End":"03:19.130","Text":"the constant can come out in front of the integral."},{"Start":"03:19.130 ","End":"03:25.430","Text":"And so we have K and this bit is just f product with g."},{"Start":"03:25.430 ","End":"03:28.145","Text":"So that\u0027s number 2 checked."},{"Start":"03:28.145 ","End":"03:30.500","Text":"Number three was linearity."},{"Start":"03:30.500 ","End":"03:34.730","Text":"So we start with the sum f plus g product with H."},{"Start":"03:34.730 ","End":"03:43.805","Text":"So from the definition it\u0027s the integral of this times this dx. By the distributive law,"},{"Start":"03:43.805 ","End":"03:47.780","Text":"f plus g times h is f times h plus g times h."},{"Start":"03:47.780 ","End":"03:49.580","Text":"And then properties of the integral,"},{"Start":"03:49.580 ","End":"03:53.480","Text":"integral of a sum is sum of the integrals so I can split it up."},{"Start":"03:53.480 ","End":"03:57.635","Text":"And the first part is just the product of 
F with H."},{"Start":"03:57.635 ","End":"04:05.360","Text":"And the second one is just g times h product of g and h."},{"Start":"04:05.360 ","End":"04:07.999","Text":"So that proves Number 3."},{"Start":"04:07.999 ","End":"04:09.680","Text":"One more to go."},{"Start":"04:09.680 ","End":"04:19.834","Text":"Now, number four: the product of f with itself is the integral from a to b of f squared dx."},{"Start":"04:19.834 ","End":"04:23.615","Text":"Now if I take the integral of a non-negative function,"},{"Start":"04:23.615 ","End":"04:25.340","Text":"the result will be non-negative."},{"Start":"04:25.340 ","End":"04:26.840","Text":"Just think of it as area."},{"Start":"04:26.840 ","End":"04:30.290","Text":"If the function is above the x-axis,"},{"Start":"04:30.290 ","End":"04:33.815","Text":"then the area is going to be above the x axis also."},{"Start":"04:33.815 ","End":"04:37.430","Text":"So that\u0027s part of it."},{"Start":"04:37.430 ","End":"04:39.589","Text":"Now, when can this thing be 0?"},{"Start":"04:39.589 ","End":"04:44.630","Text":"In order for an integral to be 0 over an interval,"},{"Start":"04:44.630 ","End":"04:47.300","Text":"it has to be 0 everywhere."},{"Start":"04:47.300 ","End":"04:50.090","Text":"I\u0027m assuming the function\u0027s continuous. As soon as it isn\u0027t 0,"},{"Start":"04:50.090 ","End":"04:53.660","Text":"it sticks above the x-axis, so to speak."},{"Start":"04:53.660 ","End":"04:55.160","Text":"We\u0027ll get some area."},{"Start":"04:55.160 ","End":"04:58.505","Text":"In any event, it\u0027s a theorem in calculus."},{"Start":"04:58.505 ","End":"05:03.365","Text":"So the only way we\u0027re going to get 0 is if F is the 0 function."},{"Start":"05:03.365 ","End":"05:07.940","Text":"So this is 0 if and only if this is 0,"},{"Start":"05:07.940 ","End":"05:12.035","Text":"which is 0, if and only if f is 0."},{"Start":"05:12.035 ","End":"05:15.050","Text":"Yeah, of course, if f squared is the 0 function,"},{"Start":"05:15.050 
","End":"05:17.090","Text":"then so is f. That\u0027s for sure."},{"Start":"05:17.090 ","End":"05:20.700","Text":"Okay, So we are done."}],"ID":10142}],"Thumbnail":null,"ID":7308},{"Name":"Norm and Distance","TopicPlaylistFirstVideoID":0,"Duration":null,"Videos":[{"Watched":false,"Name":"Lesson 1 - Norm of a Vector","Duration":"8m 18s","ChapterTopicVideoID":10005,"CourseChapterTopicPlaylistID":7309,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:03.405","Text":"In this clip, we\u0027ll be introducing a new concept."},{"Start":"00:03.405 ","End":"00:05.400","Text":"The norm of a vector,"},{"Start":"00:05.400 ","End":"00:10.200","Text":"sometimes called the length of a vector in an inner product space."},{"Start":"00:10.200 ","End":"00:12.570","Text":"We\u0027ll be writing inner product space a lot."},{"Start":"00:12.570 ","End":"00:20.760","Text":"There\u0027s an abbreviation IPS in this context will mean inner product space."},{"Start":"00:20.840 ","End":"00:22.939","Text":"Let\u0027s give a definition."},{"Start":"00:22.939 ","End":"00:26.390","Text":"We start with a vector space,"},{"Start":"00:26.390 ","End":"00:29.730","Text":"which is an inner product space."},{"Start":"00:30.170 ","End":"00:36.860","Text":"If little v is a vector in V, we\u0027re going to define the norm of v"},{"Start":"00:36.860 ","End":"00:43.100","Text":"and the notation of it will be v inside double vertical bars."},{"Start":"00:43.100 ","End":"00:48.670","Text":"A little bit like absolute value but double bars."},{"Start":"00:48.670 ","End":"00:56.845","Text":"The way it\u0027s defined is the square root of the inner product of v with itself."},{"Start":"00:56.845 ","End":"01:00.570","Text":"Note that this is a number because"},{"Start":"01:00.570 ","End":"01:05.270","Text":"the inner product of v with itself 
is always non-negative,"},{"Start":"01:05.270 ","End":"01:07.820","Text":"which means that we can take the square root"},{"Start":"01:07.820 ","End":"01:13.080","Text":"and it will be a non-negative number itself."},{"Start":"01:14.330 ","End":"01:18.950","Text":"Sometimes the norm of a vector is called its length."},{"Start":"01:18.950 ","End":"01:24.875","Text":"We might even do that if we come to some geometric applications of the norm."},{"Start":"01:24.875 ","End":"01:27.200","Text":"Now it\u0027s time for some examples."},{"Start":"01:27.200 ","End":"01:29.555","Text":"I\u0027ll be bringing 3 in all."},{"Start":"01:29.555 ","End":"01:32.630","Text":"Our first example of an inner product space"},{"Start":"01:32.630 ","End":"01:38.865","Text":"will be R^3 with the standard inner product."},{"Start":"01:38.865 ","End":"01:43.725","Text":"We\u0027ll compute the norm. I just picked, for example,"},{"Start":"01:43.725 ","End":"01:47.390","Text":"the vector 1, 2, minus 2."},{"Start":"01:47.390 ","End":"01:52.765","Text":"I chose it because it\u0027s going to come out nice numbers in the answer."},{"Start":"01:52.765 ","End":"01:58.130","Text":"The norm by definition is the square root of the product of v with itself."},{"Start":"01:58.130 ","End":"02:02.650","Text":"In our case, v is 1, 2 minus 2."},{"Start":"02:02.650 ","End":"02:06.290","Text":"The standard inner product means it\u0027s this with this,"},{"Start":"02:06.290 ","End":"02:08.180","Text":"this with this, this with this,"},{"Start":"02:08.180 ","End":"02:11.180","Text":"we just multiply component-wise and add."},{"Start":"02:11.180 ","End":"02:13.495","Text":"We get this,"},{"Start":"02:13.495 ","End":"02:17.955","Text":"1 plus 4 plus 4 is 9."},{"Start":"02:17.955 ","End":"02:23.790","Text":"You\u0027ve got the square root of 9 and it has a nice answer, 3."},{"Start":"02:23.790 ","End":"02:25.900","Text":"The norm of this vector is 3,"},{"Start":"02:25.900 ","End":"02:29.065","Text":"or the length of 
this vector is 3."},{"Start":"02:29.065 ","End":"02:34.000","Text":"Now another example, a different space. In the first example,"},{"Start":"02:34.000 ","End":"02:36.880","Text":"we had R^3; the most common is R^n."},{"Start":"02:36.880 ","End":"02:44.340","Text":"Here we have the space of 2 by 3 matrices over the real numbers."},{"Start":"02:44.340 ","End":"02:51.040","Text":"In the previous section we showed that we can define an inner product as follows."},{"Start":"02:51.040 ","End":"02:56.350","Text":"The product of A with B is the trace of B transpose times A."},{"Start":"02:56.350 ","End":"03:02.995","Text":"I\u0027m not going to repeat what transpose and trace are."},{"Start":"03:02.995 ","End":"03:05.975","Text":"We\u0027ll do it through the example."},{"Start":"03:05.975 ","End":"03:10.575","Text":"I\u0027ll choose this to be my 2 by 3 matrix."},{"Start":"03:10.575 ","End":"03:12.960","Text":"It\u0027s a vector, and it\u0027s a matrix."},{"Start":"03:12.960 ","End":"03:14.700","Text":"Call it A."},{"Start":"03:14.700 ","End":"03:20.870","Text":"The norm of a by definition is the square root of the inner product of A with itself."},{"Start":"03:20.870 ","End":"03:23.809","Text":"Now we look at the definition here."},{"Start":"03:23.809 ","End":"03:26.400","Text":"I guess this is not the same A as this."},{"Start":"03:26.400 ","End":"03:27.950","Text":"This is just the general rule."},{"Start":"03:27.950 ","End":"03:30.065","Text":"Anyway, if I put B equals A,"},{"Start":"03:30.065 ","End":"03:32.150","Text":"I\u0027ll get A transpose times A,"},{"Start":"03:32.150 ","End":"03:35.440","Text":"then trace, then square root."},{"Start":"03:35.440 ","End":"03:38.655","Text":"First I do the transpose."},{"Start":"03:38.655 ","End":"03:42.395","Text":"Here we see A and here\u0027s A transpose,"},{"Start":"03:42.395 ","End":"03:46.770","Text":"which we get by switching rows and columns."},{"Start":"03:47.510 ","End":"03:51.240","Text":"Now we have to do the 
multiplication,"},{"Start":"03:51.240 ","End":"03:52.750","Text":"the traces of the product,"},{"Start":"03:52.750 ","End":"03:55.325","Text":"it should really have extra brackets here."},{"Start":"03:55.325 ","End":"03:58.135","Text":"Now if I multiply these 2,"},{"Start":"03:58.135 ","End":"04:00.640","Text":"I\u0027m going to get a 3 by 3 matrix,"},{"Start":"04:00.640 ","End":"04:04.045","Text":"but I don\u0027t need all the entries because I\u0027m going to take the trace,"},{"Start":"04:04.045 ","End":"04:06.260","Text":"so I\u0027ll only need the diagonal."},{"Start":"04:06.260 ","End":"04:08.350","Text":"I only care about these 3."},{"Start":"04:08.350 ","End":"04:09.940","Text":"These are don\u0027t care."},{"Start":"04:09.940 ","End":"04:11.200","Text":"I just put asterisks,"},{"Start":"04:11.200 ","End":"04:12.835","Text":"because I don\u0027t care what they are."},{"Start":"04:12.835 ","End":"04:16.950","Text":"That we\u0027ll have to do only 3 out of the 9 entries."},{"Start":"04:16.950 ","End":"04:20.200","Text":"For the first 1 we do this row with this column,"},{"Start":"04:20.200 ","End":"04:22.525","Text":"3 times 3 plus 2 times 2."},{"Start":"04:22.525 ","End":"04:27.850","Text":"That comes out to be 13."},{"Start":"04:27.850 ","End":"04:32.035","Text":"Next, the second row with the second column."},{"Start":"04:32.035 ","End":"04:34.750","Text":"It\u0027s 1 times 1 plus 1 times 1."},{"Start":"04:34.750 ","End":"04:36.595","Text":"That makes it 2,"},{"Start":"04:36.595 ","End":"04:39.620","Text":"and then the last 1,"},{"Start":"04:39.620 ","End":"04:41.080","Text":"the 3rd row, 3rd column."},{"Start":"04:41.080 ","End":"04:45.170","Text":"We take this 3rd row with this 3rd column,"},{"Start":"04:45.170 ","End":"04:50.615","Text":"minus 1 times minus 1 plus 0 times 0 is 1."},{"Start":"04:50.615 ","End":"04:58.495","Text":"Now we need to add 13 plus 2 plus 1 is 16."},{"Start":"04:58.495 ","End":"05:02.490","Text":"We need the square root of 16 and 
that\u0027s a nice number,"},{"Start":"05:02.490 ","End":"05:05.415","Text":"it comes out to be 4."},{"Start":"05:05.415 ","End":"05:10.505","Text":"That\u0027s the answer, that is the norm or length"},{"Start":"05:10.505 ","End":"05:16.240","Text":"of this matrix here with this inner product."},{"Start":"05:16.240 ","End":"05:18.800","Text":"Now our 3rd and last example,"},{"Start":"05:18.800 ","End":"05:21.545","Text":"this time we\u0027re going to consider the space."},{"Start":"05:21.545 ","End":"05:29.090","Text":"Remember what this is: continuous functions on the closed segment from 0-1."},{"Start":"05:29.090 ","End":"05:31.010","Text":"That\u0027s the vector space,"},{"Start":"05:31.010 ","End":"05:33.350","Text":"but an inner product space needs an inner product."},{"Start":"05:33.350 ","End":"05:35.090","Text":"We\u0027ve seen this before."},{"Start":"05:35.090 ","End":"05:42.500","Text":"We define the inner product of 2 functions as the integral from 0-1."},{"Start":"05:42.500 ","End":"05:47.495","Text":"In this case, of f of x times g of x, dx."},{"Start":"05:47.495 ","End":"05:50.780","Text":"Because we\u0027re taking continuous functions, we"},{"Start":"05:50.780 ","End":"05:54.320","Text":"will always have an integral, by the way."},{"Start":"05:54.320 ","End":"05:56.870","Text":"What we want to do is compute the norm"},{"Start":"05:56.870 ","End":"05:59.180","Text":"and just take, for example, this function."},{"Start":"05:59.180 ","End":"06:00.320","Text":"I\u0027ll take a polynomial,"},{"Start":"06:00.320 ","End":"06:01.520","Text":"they\u0027re easier to deal with."},{"Start":"06:01.520 ","End":"06:08.120","Text":"I\u0027ll take this polynomial function and let\u0027s see what its norm is."},{"Start":"06:08.120 ","End":"06:09.995","Text":"Definition of the norm,"},{"Start":"06:09.995 ","End":"06:13.340","Text":"the square root of the product of the thing with itself."},{"Start":"06:13.340 ","End":"06:16.445","Text":"The definition of inner product 
is here."},{"Start":"06:16.445 ","End":"06:20.630","Text":"I\u0027ll put p instead of f and g. I\u0027ve got the integral of p of x,"},{"Start":"06:20.630 ","End":"06:23.730","Text":"p of x dx from 0-1."},{"Start":"06:23.740 ","End":"06:26.900","Text":"Now next we could just go ahead"},{"Start":"06:26.900 ","End":"06:30.560","Text":"and multiply out this polynomial with this polynomial"},{"Start":"06:30.560 ","End":"06:32.390","Text":"and get a 4th degree polynomial."},{"Start":"06:32.390 ","End":"06:36.035","Text":"That will be hard work."},{"Start":"06:36.035 ","End":"06:37.730","Text":"If you just look at it for a moment,"},{"Start":"06:37.730 ","End":"06:41.450","Text":"we\u0027ll see that this polynomial the 4, 4, 1,"},{"Start":"06:41.450 ","End":"06:44.930","Text":"should remind you that this is a perfect square."},{"Start":"06:44.930 ","End":"06:49.075","Text":"It\u0027s 2x minus 1 squared."},{"Start":"06:49.075 ","End":"06:51.834","Text":"If we notice that,"},{"Start":"06:51.834 ","End":"06:59.060","Text":"then here we have the integral of 2x minus 1^4 because,"},{"Start":"06:59.060 ","End":"07:02.760","Text":"again squared 2 plus 2 is 4."},{"Start":"07:04.850 ","End":"07:11.160","Text":"It\u0027s the integral of a linear something to the 4th."},{"Start":"07:11.160 ","End":"07:15.485","Text":"I can take it as if it was something to the 4th."},{"Start":"07:15.485 ","End":"07:19.520","Text":"That\u0027s going to be that thing to the 5th/5."},{"Start":"07:19.520 ","End":"07:24.320","Text":"But there is also the inner derivative,"},{"Start":"07:24.320 ","End":"07:30.035","Text":"which is 2 so I also have to divide by the inner derivative."},{"Start":"07:30.035 ","End":"07:35.465","Text":"This only works with linear functions that you can divide by the internal derivative."},{"Start":"07:35.465 ","End":"07:42.770","Text":"Now we have to substitute 1, substitute 0 and subtract."},{"Start":"07:42.770 ","End":"07:45.095","Text":"But we\u0027re still under the 
square root, of course."},{"Start":"07:45.095 ","End":"07:49.760","Text":"Now, if I substitute 1, 2x minus 1 is 1."},{"Start":"07:49.760 ","End":"07:54.485","Text":"I\u0027ve got 1^5/2 times 5, that\u0027s 1/10."},{"Start":"07:54.485 ","End":"07:56.910","Text":"I put in x is 0,"},{"Start":"07:56.910 ","End":"07:58.830","Text":"I\u0027ve got minus 1^5,"},{"Start":"07:58.830 ","End":"08:00.885","Text":"which is minus 1/10."},{"Start":"08:00.885 ","End":"08:04.605","Text":"A 1/10 minus, minus a 1/10 is 2/10."},{"Start":"08:04.605 ","End":"08:07.830","Text":"Of course, 2/10 is a 1/5,"},{"Start":"08:07.830 ","End":"08:09.710","Text":"and we just leave it like that."},{"Start":"08:09.710 ","End":"08:11.090","Text":"The square root of the 1/5,"},{"Start":"08:11.090 ","End":"08:13.670","Text":"we don\u0027t need a numerical answer or anything."},{"Start":"08:13.670 ","End":"08:18.030","Text":"That was the 3rd example and we\u0027re done."}],"ID":10143},{"Watched":false,"Name":"Lesson 2 - Unit Vector - Normalization","Duration":"5m 6s","ChapterTopicVideoID":10006,"CourseChapterTopicPlaylistID":7309,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:06.405","Text":"In this clip, we\u0027ll learn about unit vectors and normalization of vectors,"},{"Start":"00:06.405 ","End":"00:12.825","Text":"and normalized vectors in inner product spaces."},{"Start":"00:12.825 ","End":"00:18.325","Text":"This is a continuation of the previous clip where we learnt what a norm is."},{"Start":"00:18.325 ","End":"00:25.430","Text":"A unit vector is a vector which has a norm or length of 1."},{"Start":"00:25.430 ","End":"00:28.745","Text":"In other words, if norm of v equals 1,"},{"Start":"00:28.745 ","End":"00:31.980","Text":"then v is a unit vector."},{"Start":"00:32.240 ","End":"00:34.520","Text":"There\u0027s a 
convention that"},{"Start":"00:34.520 ","End":"00:37.984","Text":"if you want to indicate that something is a unit vector,"},{"Start":"00:37.984 ","End":"00:41.104","Text":"you write it with a hat."},{"Start":"00:41.104 ","End":"00:43.825","Text":"I think it\u0027s called a caret."},{"Start":"00:43.825 ","End":"00:47.594","Text":"On the keyboard, it\u0027s above the 6 usually."},{"Start":"00:47.594 ","End":"00:52.035","Text":"Anyway, v caret or v hat."},{"Start":"00:52.035 ","End":"00:58.670","Text":"Now if we have any non-zero vector there\u0027s a way of converting it"},{"Start":"00:58.670 ","End":"01:03.995","Text":"or associating with it a unit vector as follows."},{"Start":"01:03.995 ","End":"01:10.295","Text":"You take the given vector and divide it by its norm."},{"Start":"01:10.295 ","End":"01:14.075","Text":"The norm will not be 0 because v is not 0,"},{"Start":"01:14.075 ","End":"01:16.805","Text":"and this will be a unit vector,"},{"Start":"01:16.805 ","End":"01:20.165","Text":"we\u0027ll call it v caret."},{"Start":"01:20.165 ","End":"01:22.430","Text":"This is very easy to prove,"},{"Start":"01:22.430 ","End":"01:23.990","Text":"but I won\u0027t do it,"},{"Start":"01:23.990 ","End":"01:25.900","Text":"you just accept that."},{"Start":"01:25.900 ","End":"01:32.045","Text":"There\u0027s a name for this process of associating a unit vector with a given vector."},{"Start":"01:32.045 ","End":"01:35.015","Text":"This is called normalization."},{"Start":"01:35.015 ","End":"01:40.435","Text":"Taking a vector and dividing it by its norm is normalization,"},{"Start":"01:40.435 ","End":"01:43.280","Text":"and the result of the normalization,"},{"Start":"01:43.280 ","End":"01:45.770","Text":"which is this v over its norm,"},{"Start":"01:45.770 ","End":"01:50.220","Text":"this vector is called a normalized vector."},{"Start":"01:51.530 ","End":"01:54.779","Text":"These are just names."},{"Start":"01:54.779 ","End":"01:58.725","Text":"Now let\u0027s take some 
examples."},{"Start":"01:58.725 ","End":"02:07.220","Text":"Here, our inner product space is R^3 with the standard inner product."},{"Start":"02:07.220 ","End":"02:10.670","Text":"Remember an inner product space is a vector space with an inner product,"},{"Start":"02:10.670 ","End":"02:13.750","Text":"and it\u0027ll be the standard 1 and let\u0027s choose as v,"},{"Start":"02:13.750 ","End":"02:17.000","Text":"the vector 1, 2, 2."},{"Start":"02:17.000 ","End":"02:19.160","Text":"I want to normalize it."},{"Start":"02:19.160 ","End":"02:24.205","Text":"So the formula is this divided by its norm."},{"Start":"02:24.205 ","End":"02:26.460","Text":"Actually the reason I chose 1, 2, 2"},{"Start":"02:26.460 ","End":"02:30.350","Text":"is because we had that in the previous clip"},{"Start":"02:30.350 ","End":"02:34.760","Text":"and we already computed its norm and it turned out to be 3."},{"Start":"02:34.760 ","End":"02:38.585","Text":"So I\u0027ll just straightaway write that here."},{"Start":"02:38.585 ","End":"02:42.230","Text":"I can rewrite this as follows."},{"Start":"02:42.230 ","End":"02:45.785","Text":"Just take each component and divide it by 3."},{"Start":"02:45.785 ","End":"02:51.730","Text":"That was the first example and now let\u0027s do another example."},{"Start":"02:51.730 ","End":"02:57.170","Text":"This time, this is our vector space 2 by 3 real matrices."},{"Start":"02:57.170 ","End":"03:00.200","Text":"The inner product is this,"},{"Start":"03:00.200 ","End":"03:02.089","Text":"we\u0027ve seen this before."},{"Start":"03:02.089 ","End":"03:10.320","Text":"The specific vector or matrix is this 1."},{"Start":"03:10.820 ","End":"03:13.740","Text":"I want to normalize it."},{"Start":"03:13.740 ","End":"03:20.515","Text":"By the formula, I take A and just divide it by its norm."},{"Start":"03:20.515 ","End":"03:24.560","Text":"Once again, I made my choice based on the previous clip"},{"Start":"03:24.560 ","End":"03:28.580","Text":"where we actually 
computed the norm of this and it came out to be 4."},{"Start":"03:28.580 ","End":"03:31.290","Text":"So I\u0027m going to use that result."},{"Start":"03:31.670 ","End":"03:34.160","Text":"Then if I do the division,"},{"Start":"03:34.160 ","End":"03:42.180","Text":"it means dividing each element in the matrix by this 4 and so this is what we get."},{"Start":"03:42.230 ","End":"03:44.960","Text":"That\u0027s 2 examples."},{"Start":"03:44.960 ","End":"03:47.525","Text":"Let\u0027s get onto the 3rd."},{"Start":"03:47.525 ","End":"03:53.600","Text":"This time the vector space will be continuous functions on the interval 0, 1."},{"Start":"03:53.600 ","End":"03:58.340","Text":"The inner product is the integral as here,"},{"Start":"03:58.340 ","End":"04:00.670","Text":"we\u0027ve seen this before."},{"Start":"04:00.670 ","End":"04:04.550","Text":"The particular example I\u0027m going to choose from here"},{"Start":"04:04.550 ","End":"04:07.940","Text":"will be this function which is actually a polynomial,"},{"Start":"04:07.940 ","End":"04:10.085","Text":"4x squared minus 4x plus 1."},{"Start":"04:10.085 ","End":"04:13.405","Text":"We want to normalize it."},{"Start":"04:13.405 ","End":"04:19.134","Text":"Normalize means divide it by its norm."},{"Start":"04:19.134 ","End":"04:22.700","Text":"We already computed its norm in the previous clip."},{"Start":"04:22.700 ","End":"04:24.080","Text":"That\u0027s why I chose this."},{"Start":"04:24.080 ","End":"04:27.840","Text":"It came out to be the square root of a 1/5."},{"Start":"04:28.240 ","End":"04:34.220","Text":"Just dividing this into each of the coefficients"},{"Start":"04:34.220 ","End":"04:39.080","Text":"so I can write my p this way."},{"Start":"04:39.080 ","End":"04:44.850","Text":"I should really say p of x with the hat on."},{"Start":"04:44.850 ","End":"04:46.790","Text":"That\u0027s the answer."},{"Start":"04:46.790 ","End":"04:49.565","Text":"But for those of you who like to simplify,"},{"Start":"04:49.565 
","End":"04:57.250","Text":"1 over the square root of 1/5 is equal to the square root of 5 if you think about it."},{"Start":"04:57.250 ","End":"05:00.200","Text":"So optionally you could tidy it up a bit"},{"Start":"05:00.200 ","End":"05:04.100","Text":"and this would be the answer or just leave it like this."},{"Start":"05:04.100 ","End":"05:07.110","Text":"We are done."}],"ID":10144},{"Watched":false,"Name":"Lesson 3 - Distance Between Vectors","Duration":"7m 10s","ChapterTopicVideoID":10004,"CourseChapterTopicPlaylistID":7309,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:03.840","Text":"In this clip, we\u0027ll learn the concept of a distance"},{"Start":"00:03.840 ","End":"00:07.335","Text":"between vectors in an inner product space."},{"Start":"00:07.335 ","End":"00:12.720","Text":"This builds on the concept of a norm which we just recently did."},{"Start":"00:12.720 ","End":"00:17.805","Text":"If we have an inner product space and we take 2 vectors,"},{"Start":"00:17.805 ","End":"00:24.675","Text":"the distance between u and v is going to be a non-negative number."},{"Start":"00:24.675 ","End":"00:29.940","Text":"I\u0027m going to write it as like a function d of u and v,"},{"Start":"00:29.940 ","End":"00:31.845","Text":"d for distance, of course."},{"Start":"00:31.845 ","End":"00:36.520","Text":"It\u0027s defined by the simple formula."},{"Start":"00:36.520 ","End":"00:42.890","Text":"The distance between u and v is the norm of u minus v."},{"Start":"00:42.890 ","End":"00:47.535","Text":"Let\u0027s get straight away to the examples."},{"Start":"00:47.535 ","End":"00:52.745","Text":"This example is in the vector space R^3 with the standard inner product."},{"Start":"00:52.745 ","End":"00:58.875","Text":"Let\u0027s take our 2 vectors to be u and v as follows."},{"Start":"00:58.875 
","End":"01:04.700","Text":"We want to find the distance between these 2 vectors, u and v."},{"Start":"01:04.700 ","End":"01:10.550","Text":"By definition, that would be the norm of u minus v."},{"Start":"01:10.550 ","End":"01:13.665","Text":"Here we have to do a subtraction,"},{"Start":"01:13.665 ","End":"01:20.715","Text":"4 minus 3 is 1, 7 minus 5 is 2, minus 3 minus minus 1 is minus 2."},{"Start":"01:20.715 ","End":"01:24.630","Text":"Now we need to compute the norm of this."},{"Start":"01:24.630 ","End":"01:27.810","Text":"This looks familiar."},{"Start":"01:27.810 ","End":"01:32.855","Text":"I think we did this particular calculation in a previous exercise,"},{"Start":"01:32.855 ","End":"01:34.980","Text":"and it came out 3, but anyway,"},{"Start":"01:34.980 ","End":"01:36.675","Text":"we\u0027ll do it again."},{"Start":"01:36.675 ","End":"01:43.585","Text":"We take the square root of the inner product of this vector with itself."},{"Start":"01:43.585 ","End":"01:45.645","Text":"Standard inner product,"},{"Start":"01:45.645 ","End":"01:50.120","Text":"so it\u0027s 1 times 1 plus 2 times 2 plus negative 2 times negative 2."},{"Start":"01:50.120 ","End":"01:52.130","Text":"This is what we get."},{"Start":"01:52.130 ","End":"01:56.875","Text":"Square root of 9 is 3, that\u0027s the answer."},{"Start":"01:56.875 ","End":"02:02.900","Text":"That\u0027s the distance between u and v. 
Now example 2,"},{"Start":"02:02.900 ","End":"02:11.164","Text":"where we have the space of 2 by 3 matrices and the inner product,"},{"Start":"02:11.164 ","End":"02:13.100","Text":"we\u0027ve seen this before."},{"Start":"02:13.100 ","End":"02:18.560","Text":"A times B, we take the trace of B transpose times A."},{"Start":"02:18.560 ","End":"02:20.450","Text":"Let\u0027s take our 2 vectors,"},{"Start":"02:20.450 ","End":"02:25.140","Text":"which are actually matrices, to be this and this."},{"Start":"02:25.140 ","End":"02:30.140","Text":"We want to compute the distance between them,"},{"Start":"02:30.140 ","End":"02:32.420","Text":"so the distance from A to B,"},{"Start":"02:32.420 ","End":"02:38.930","Text":"by definition is the norm of A minus B, and A minus B,"},{"Start":"02:38.930 ","End":"02:40.190","Text":"we just do a subtraction."},{"Start":"02:40.190 ","End":"02:45.740","Text":"10 minus 2 is 8, 9 minus 3 is 6, and so on."},{"Start":"02:45.830 ","End":"02:52.520","Text":"The norm is the square root of the inner product of this with itself."},{"Start":"02:52.520 ","End":"02:57.500","Text":"Then we go to the definition of the inner product,"},{"Start":"02:57.500 ","End":"03:02.790","Text":"which is the second 1 transposed times the first 1,"},{"Start":"03:02.790 ","End":"03:05.715","Text":"and the trace of that, of course."},{"Start":"03:05.715 ","End":"03:08.675","Text":"Let\u0027s see, to transpose this we just"},{"Start":"03:08.675 ","End":"03:11.765","Text":"reverse rows and columns and we get this."},{"Start":"03:11.765 ","End":"03:13.940","Text":"Now, multiply these 2,"},{"Start":"03:13.940 ","End":"03:19.310","Text":"we\u0027re going to get a 3 by 3 matrix."},{"Start":"03:19.310 ","End":"03:22.320","Text":"But we don\u0027t care about the whole matrix,"},{"Start":"03:22.320 ","End":"03:25.100","Text":"we only care about the main diagonal"},{"Start":"03:25.100 ","End":"03:27.990","Text":"because we\u0027re going to take a trace."},{"Start":"03:28.190 
","End":"03:32.060","Text":"I put asterisks, meaning don\u0027t care,"},{"Start":"03:32.060 ","End":"03:33.290","Text":"we don\u0027t have to compute these,"},{"Start":"03:33.290 ","End":"03:35.630","Text":"we just have to compute the diagonal."},{"Start":"03:35.630 ","End":"03:41.370","Text":"Here, we multiply the 8, 2 by 8, 2."},{"Start":"03:41.370 ","End":"03:45.380","Text":"We take 8 times 8 is 64 plus 4,"},{"Start":"03:45.380 ","End":"03:50.180","Text":"and that makes it 68."},{"Start":"03:50.180 ","End":"03:56.280","Text":"Then 6, 0 with 6, 0, that gives 36,"},{"Start":"03:56.280 ","End":"04:00.210","Text":"and then 4, minus 2 with 4, minus 2,"},{"Start":"04:00.210 ","End":"04:05.980","Text":"so it\u0027s 16 plus 4, which is 20."},{"Start":"04:06.050 ","End":"04:09.740","Text":"Now we have to take the trace of this,"},{"Start":"04:09.740 ","End":"04:14.450","Text":"so it\u0027s 68 plus 36 plus 20."},{"Start":"04:14.450 ","End":"04:16.760","Text":"If we compute that,"},{"Start":"04:16.760 ","End":"04:19.555","Text":"I think it comes out to 124,"},{"Start":"04:19.555 ","End":"04:22.760","Text":"so leave the answer like this."},{"Start":"04:22.760 ","End":"04:26.420","Text":"We don\u0027t need to do a numerical approximation,"},{"Start":"04:26.420 ","End":"04:30.155","Text":"square root of 124 is a bit over 11."},{"Start":"04:30.155 ","End":"04:33.565","Text":"That\u0027s this example."},{"Start":"04:33.565 ","End":"04:39.590","Text":"Third example, the space is this,"},{"Start":"04:39.590 ","End":"04:45.440","Text":"which is the continuous functions on the interval 0, 1."},{"Start":"04:45.440 ","End":"04:51.530","Text":"The inner product of 2 such functions is the integral"},{"Start":"04:51.530 ","End":"04:55.680","Text":"from 0 to 1 of f times g."},{"Start":"04:55.680 ","End":"04:59.940","Text":"As examples, we\u0027ll take 2 functions,"},{"Start":"04:59.940 ","End":"05:06.585","Text":"we\u0027ll take 2 polynomials, p and q, defined thus."},{"Start":"05:06.585 
","End":"05:08.030","Text":"They\u0027re defined everywhere"},{"Start":"05:08.030 ","End":"05:11.540","Text":"and they\u0027re continuous everywhere and in particular on 0, 1."},{"Start":"05:11.540 ","End":"05:14.185","Text":"We want the distance from p to q,"},{"Start":"05:14.185 ","End":"05:18.435","Text":"and we write that as d, p, q,"},{"Start":"05:18.435 ","End":"05:27.830","Text":"and that\u0027s defined to be the norm of p minus q, p minus q."},{"Start":"05:27.830 ","End":"05:32.180","Text":"Just x squared minus minus x squared is 2x squared, the x terms give 3x,"},{"Start":"05:32.180 ","End":"05:35.930","Text":"and the constants give 22."},{"Start":"05:35.930 ","End":"05:39.775","Text":"This is the norm of this,"},{"Start":"05:39.775 ","End":"05:46.030","Text":"and the norm is the square root of the inner product of the thing with itself,"},{"Start":"05:46.030 ","End":"05:48.930","Text":"and the square root, of course."},{"Start":"05:48.930 ","End":"05:51.785","Text":"The inner product of this with itself,"},{"Start":"05:51.785 ","End":"05:55.490","Text":"just using this formula: I multiply these 2,"},{"Start":"05:55.490 ","End":"05:58.670","Text":"well, it\u0027s the same 1 multiplied with itself"},{"Start":"05:58.670 ","End":"06:02.405","Text":"and then take the integral from 0 to 1 dx."},{"Start":"06:02.405 ","End":"06:05.420","Text":"Let\u0027s see how we\u0027ll do this integral."},{"Start":"06:05.420 ","End":"06:08.180","Text":"I didn\u0027t see any quick tricks I could use,"},{"Start":"06:08.180 ","End":"06:10.040","Text":"so I just multiplied them out,"},{"Start":"06:10.040 ","End":"06:12.395","Text":"but I spared you the calculations,"},{"Start":"06:12.395 ","End":"06:14.960","Text":"and this is the product."},{"Start":"06:14.960 ","End":"06:18.620","Text":"It\u0027s a fourth degree polynomial, which makes sense,"},{"Start":"06:18.620 ","End":"06:22.039","Text":"and we want to do the integral from 0 to 1 of this."},{"Start":"06:22.039 
","End":"06:28.210","Text":"First we do the indefinite integral: we raise the power by 1 and divide by the new power."},{"Start":"06:28.210 ","End":"06:31.175","Text":"In all of these, it\u0027s pretty straightforward."},{"Start":"06:31.175 ","End":"06:35.030","Text":"Now we have to substitute 1 and then substitute 0 and subtract."},{"Start":"06:35.030 ","End":"06:37.010","Text":"Well, of course when we substitute 0,"},{"Start":"06:37.010 ","End":"06:38.180","Text":"we don\u0027t get anything,"},{"Start":"06:38.180 ","End":"06:43.530","Text":"so we just have to substitute 1 in this and later take the square root."},{"Start":"06:43.580 ","End":"06:55.395","Text":"If x is 1, we get 4/5; 12/4 is 3; 97/3 doesn\u0027t divide; 132/2 is 66."},{"Start":"06:55.395 ","End":"06:57.510","Text":"We got this expression,"},{"Start":"06:57.510 ","End":"07:02.315","Text":"there\u0027s no point in continuing to the actual numerical answer,"},{"Start":"07:02.315 ","End":"07:04.595","Text":"here simplification is not important."},{"Start":"07:04.595 ","End":"07:07.895","Text":"This is acceptable as an answer."},{"Start":"07:07.895 ","End":"07:10.650","Text":"We are done."}],"ID":10145},{"Watched":false,"Name":"Exercise 1","Duration":"8m 12s","ChapterTopicVideoID":10008,"CourseChapterTopicPlaylistID":7309,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.530","Text":"In this exercise, we consider the inner product space R^3"},{"Start":"00:04.530 ","End":"00:08.320","Text":"with the standard inner product."},{"Start":"00:09.020 ","End":"00:11.400","Text":"I don\u0027t remember if I mentioned it or not,"},{"Start":"00:11.400 ","End":"00:15.540","Text":"but the standard inner product is also called the dot product."},{"Start":"00:15.540 ","End":"00:18.120","Text":"If you see u.v,"},{"Start":"00:18.120 
","End":"00:22.920","Text":"it\u0027s the same as the inner product of u and v."},{"Start":"00:22.920 ","End":"00:26.345","Text":"That\u0027s only for the standard inner product."},{"Start":"00:26.345 ","End":"00:29.720","Text":"We have 3 vectors, u, v, and w"},{"Start":"00:29.720 ","End":"00:35.070","Text":"and then we have 10 different computations to perform."},{"Start":"00:35.630 ","End":"00:38.810","Text":"First 1, inner product,"},{"Start":"00:38.810 ","End":"00:40.790","Text":"but it\u0027s the standard inner product,"},{"Start":"00:40.790 ","End":"00:42.500","Text":"so it\u0027s a dot product."},{"Start":"00:42.500 ","End":"00:45.620","Text":"For the dot product we just take the first 1 times the first 1,"},{"Start":"00:45.620 ","End":"00:46.910","Text":"second 1 times the second 1,"},{"Start":"00:46.910 ","End":"00:48.020","Text":"third 1 times the third 1,"},{"Start":"00:48.020 ","End":"00:51.160","Text":"and add them and we get 19."},{"Start":"00:51.160 ","End":"00:55.130","Text":"Part 2 is the same thing, just with u and w."},{"Start":"00:55.130 ","End":"00:57.470","Text":"Again, we have the dot product of 2 vectors."},{"Start":"00:57.470 ","End":"00:59.150","Text":"You multiply the first with the first,"},{"Start":"00:59.150 ","End":"01:00.230","Text":"second with the second,"},{"Start":"01:00.230 ","End":"01:01.130","Text":"third with the third,"},{"Start":"01:01.130 ","End":"01:05.280","Text":"and add and we get minus 5."},{"Start":"01:05.390 ","End":"01:10.250","Text":"Third 1 is again the same idea,"},{"Start":"01:10.250 ","End":"01:13.310","Text":"just v with w."},{"Start":"01:13.310 ","End":"01:21.155","Text":"It\u0027s scrolled off, but v is this and w is this."},{"Start":"01:21.155 ","End":"01:22.910","Text":"Same computation."},{"Start":"01:22.910 ","End":"01:28.715","Text":"Dot product, multiply component-wise and add."},{"Start":"01:28.715 ","End":"01:34.010","Text":"Here we have something that\u0027s not as straightforward,"},{"Start":"01:34.010 ","End":"01:37.580","Text":"u 
plus v inner product with w."},{"Start":"01:37.580 ","End":"01:40.475","Text":"There\u0027s actually more than 1 way of doing it."},{"Start":"01:40.475 ","End":"01:47.700","Text":"The straightforward way is to add u plus v and if we add u plus v,"},{"Start":"01:47.700 ","End":"01:49.760","Text":"they\u0027ve scrolled off, but I can see them here."},{"Start":"01:49.760 ","End":"01:52.040","Text":"For example, 1 plus 3 is 4,"},{"Start":"01:52.040 ","End":"01:53.960","Text":"minus 2 and minus 2 is minus 4,"},{"Start":"01:53.960 ","End":"01:59.430","Text":"2 and 6 is 8, and w we can borrow from here."},{"Start":"01:59.430 ","End":"02:05.510","Text":"Now we just have a straightforward dot product with the 3 multiplications"},{"Start":"02:05.510 ","End":"02:10.725","Text":"and then an addition of the 3 and we get minus 8."},{"Start":"02:10.725 ","End":"02:12.785","Text":"There\u0027s another way of doing it."},{"Start":"02:12.785 ","End":"02:22.735","Text":"Remember there\u0027s linearity of the inner product,"},{"Start":"02:22.735 ","End":"02:29.870","Text":"that u plus v product with w is the same as u with w plus v with w."},{"Start":"02:29.870 ","End":"02:32.180","Text":"Normally I wouldn\u0027t do it this way,"},{"Start":"02:32.180 ","End":"02:36.175","Text":"but we\u0027ve already computed u,"},{"Start":"02:36.175 ","End":"02:40.095","Text":"w here, and we\u0027ve computed v, w here."},{"Start":"02:40.095 ","End":"02:43.055","Text":"All we have to do is copy the minus 5 here,"},{"Start":"02:43.055 ","End":"02:44.765","Text":"the minus 3 here,"},{"Start":"02:44.765 ","End":"02:47.525","Text":"and then add them and I get minus 8,"},{"Start":"02:47.525 ","End":"02:54.325","Text":"which is good because we should get the same answer by 2 different methods."},{"Start":"02:54.325 ","End":"02:59.770","Text":"Let\u0027s get on to number 5."},{"Start":"02:59.770 ","End":"03:08.210","Text":"The norm of u by definition is the square root of the inner product of u with 
itself."},{"Start":"03:08.210 ","End":"03:10.190","Text":"The inner product, which is standard,"},{"Start":"03:10.190 ","End":"03:13.520","Text":"is the same as the dot product so it\u0027s this,"},{"Start":"03:13.520 ","End":"03:16.820","Text":"that\u0027s u dot with u again,"},{"Start":"03:16.820 ","End":"03:20.255","Text":"1 times 1 minus 2 times minus 2, 2 times 2."},{"Start":"03:20.255 ","End":"03:24.605","Text":"We add 1 plus 4 plus 4 is 9, square root is 3."},{"Start":"03:24.605 ","End":"03:28.985","Text":"Similarly with the norm of v the same idea."},{"Start":"03:28.985 ","End":"03:32.075","Text":"We take v dot product with v,"},{"Start":"03:32.075 ","End":"03:34.470","Text":"and at the end we take a square root"},{"Start":"03:34.470 ","End":"03:42.125","Text":"and this came out to be 3 times 3 is 9 plus 4 plus 36 came out to be 49,"},{"Start":"03:42.125 ","End":"03:45.550","Text":"which has a nice square root of 7."},{"Start":"03:45.550 ","End":"03:49.110","Text":"Number 7 was the norm of u plus v,"},{"Start":"03:49.110 ","End":"03:58.220","Text":"so it\u0027s u plus v with itself product and then square root u plus v,"},{"Start":"03:58.220 ","End":"04:00.055","Text":"if you do the computation,"},{"Start":"04:00.055 ","End":"04:03.380","Text":"is, we actually had it and we can copy it from here,"},{"Start":"04:03.380 ","End":"04:06.695","Text":"4 minus 4, 8 with itself,"},{"Start":"04:06.695 ","End":"04:12.020","Text":"so we get 4 squared is 16 plus 16 plus 8 squared is 64."},{"Start":"04:12.020 ","End":"04:15.470","Text":"That comes out to be 96."},{"Start":"04:15.470 ","End":"04:17.300","Text":"Square root of 96,"},{"Start":"04:17.300 ","End":"04:19.175","Text":"we\u0027ll leave it like that."},{"Start":"04:19.175 ","End":"04:22.295","Text":"Now there is another way of doing it."},{"Start":"04:22.295 ","End":"04:31.175","Text":"I\u0027d like to show you, we could from here multiply out using the linearity and symmetry."},{"Start":"04:31.175 
","End":"04:35.150","Text":"We can get that this is what\u0027s under the square root sign,"},{"Start":"04:35.150 ","End":"04:43.145","Text":"is u with u plus v with u plus u with v and v with v, like so."},{"Start":"04:43.145 ","End":"04:46.940","Text":"The inner product of u with u,"},{"Start":"04:46.940 ","End":"04:49.340","Text":"let\u0027s see, we can copy it from here."},{"Start":"04:49.340 ","End":"04:54.690","Text":"What\u0027s under the square root was 9 and we also have the last 1,"},{"Start":"04:54.690 ","End":"05:01.965","Text":"v with v plus what\u0027s under here that was 49 and now u with v,"},{"Start":"05:01.965 ","End":"05:03.680","Text":"I don\u0027t know if we\u0027ve done it anywhere,"},{"Start":"05:03.680 ","End":"05:05.850","Text":"but let\u0027s just see."},{"Start":"05:08.300 ","End":"05:10.745","Text":"Let\u0027s do it again."},{"Start":"05:10.745 ","End":"05:16.095","Text":"u is, say, here and v is here,"},{"Start":"05:16.095 ","End":"05:20.570","Text":"so 1 times 3, minus 2 times minus 2,"},{"Start":"05:20.570 ","End":"05:28.830","Text":"then 2 times 6; altogether 3 and 4 and 12 is 19."},{"Start":"05:28.830 ","End":"05:31.170","Text":"Similarly v with u,"},{"Start":"05:31.170 ","End":"05:32.280","Text":"same thing as u,"},{"Start":"05:32.280 ","End":"05:35.040","Text":"v because of symmetry, it\u0027s also 19."},{"Start":"05:35.040 ","End":"05:38.400","Text":"If we add these 4 up, 9, 19, 19,"},{"Start":"05:38.400 ","End":"05:41.835","Text":"49, we get 96, and there\u0027s a square root."},{"Start":"05:41.835 ","End":"05:48.540","Text":"Once again, 2 different methods should give us the same answer, of course."},{"Start":"05:48.820 ","End":"05:51.230","Text":"I just jumped to a new page."},{"Start":"05:51.230 ","End":"05:54.754","Text":"We still have 3 more to go, 8, 9, and 10."},{"Start":"05:54.754 ","End":"05:57.860","Text":"I copied u and v that we need."},{"Start":"05:57.860 ","End":"06:01.110","Text":"We won\u0027t be using w here I 
see."},{"Start":"06:01.790 ","End":"06:07.595","Text":"The distance from u to v is by definition,"},{"Start":"06:07.595 ","End":"06:10.580","Text":"the norm of u minus v."},{"Start":"06:10.580 ","End":"06:16.145","Text":"The norm is the square root of the inner product of this with itself."},{"Start":"06:16.145 ","End":"06:18.320","Text":"If you compute u minus v,"},{"Start":"06:18.320 ","End":"06:22.775","Text":"which is this minus this 1 minus 3 is minus 2."},{"Start":"06:22.775 ","End":"06:31.640","Text":"Similarly 0 minus 4 dot product with itself works out to 2 squared is 4,"},{"Start":"06:31.640 ","End":"06:34.130","Text":"0 squared is 0, 4 squared is 16,"},{"Start":"06:34.130 ","End":"06:35.525","Text":"4 and 16 is 20."},{"Start":"06:35.525 ","End":"06:39.050","Text":"Square root of 20 is the distance from u to v."},{"Start":"06:39.050 ","End":"06:44.270","Text":"An alternative method start out the same way."},{"Start":"06:44.270 ","End":"06:49.120","Text":"But now do expansion of this using linearity."},{"Start":"06:49.120 ","End":"06:53.655","Text":"We get u with u inner product minus v with u,"},{"Start":"06:53.655 ","End":"06:57.300","Text":"minus u with v, and plus v, v."},{"Start":"06:57.300 ","End":"07:01.595","Text":"If we compute each of these,"},{"Start":"07:01.595 ","End":"07:03.770","Text":"well, we already did them earlier."},{"Start":"07:03.770 ","End":"07:05.435","Text":"This 1 came out to be 9,"},{"Start":"07:05.435 ","End":"07:08.675","Text":"this came out 19, this came out to be 49."},{"Start":"07:08.675 ","End":"07:11.180","Text":"But there\u0027s a minus, minus here."},{"Start":"07:11.180 ","End":"07:13.025","Text":"If we do this computation,"},{"Start":"07:13.025 ","End":"07:14.780","Text":"9 and 49 is 58,"},{"Start":"07:14.780 ","End":"07:17.960","Text":"these 2 minus 38, so it\u0027s 20."},{"Start":"07:17.960 ","End":"07:24.535","Text":"So it\u0027s a good job that we got the same answer with 2 different methods."},{"Start":"07:24.535 
","End":"07:28.350","Text":"Next 1, number 9,"},{"Start":"07:28.350 ","End":"07:33.865","Text":"the hat or caret means the normalized u."},{"Start":"07:33.865 ","End":"07:39.110","Text":"We take u and divide by its norm and this will give us a unit vector."},{"Start":"07:39.110 ","End":"07:41.475","Text":"So u is 1, minus 2, 2."},{"Start":"07:41.475 ","End":"07:44.795","Text":"We already computed the norm of u to be 3."},{"Start":"07:44.795 ","End":"07:49.160","Text":"It\u0027s this over 3, which is 1/3 of this."},{"Start":"07:49.160 ","End":"07:52.070","Text":"We can just multiply each component by 1/3."},{"Start":"07:52.070 ","End":"07:54.505","Text":"This would be the answer."},{"Start":"07:54.505 ","End":"07:56.860","Text":"Similarly with v,"},{"Start":"07:56.860 ","End":"08:00.230","Text":"to normalize v, we divide by its norm."},{"Start":"08:00.230 ","End":"08:03.980","Text":"This is v, we computed earlier that its norm is 7."},{"Start":"08:03.980 ","End":"08:08.960","Text":"Dividing this by 7, or multiplying by 1/7, we get this"},{"Start":"08:08.960 ","End":"08:12.670","Text":"and that\u0027s the last 1, and we\u0027re done."}],"ID":10146},{"Watched":false,"Name":"Exercise 2","Duration":"12m 29s","ChapterTopicVideoID":10009,"CourseChapterTopicPlaylistID":7309,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:06.435","Text":"In this exercise, we\u0027re talking about the inner product space of matrices,"},{"Start":"00:06.435 ","End":"00:10.455","Text":"which are 2 by 3 with real entries,"},{"Start":"00:10.455 ","End":"00:14.235","Text":"and the inner product is defined here."},{"Start":"00:14.235 ","End":"00:18.240","Text":"A vector space with an inner product is an inner product space."},{"Start":"00:18.240 ","End":"00:21.960","Text":"We\u0027re given 3 
elements,"},{"Start":"00:21.960 ","End":"00:24.795","Text":"you can think of them as vectors or matrices,"},{"Start":"00:24.795 ","End":"00:28.680","Text":"and we have to make various computations with them,"},{"Start":"00:28.680 ","End":"00:30.375","Text":"9 of them in fact."},{"Start":"00:30.375 ","End":"00:31.590","Text":"Let\u0027s get started."},{"Start":"00:31.590 ","End":"00:33.290","Text":"In the first 1,"},{"Start":"00:33.290 ","End":"00:36.235","Text":"we just have to compute a straightforward inner product."},{"Start":"00:36.235 ","End":"00:38.535","Text":"We have to look at the definition,"},{"Start":"00:38.535 ","End":"00:40.070","Text":"and we look at it here."},{"Start":"00:40.070 ","End":"00:46.265","Text":"It\u0027s the trace of the product of the transpose of the second with the first."},{"Start":"00:46.265 ","End":"00:48.530","Text":"They\u0027ve just scrolled off the screen,"},{"Start":"00:48.530 ","End":"00:51.455","Text":"but the transpose of B is this,"},{"Start":"00:51.455 ","End":"00:53.690","Text":"we just interchange rows and columns,"},{"Start":"00:53.690 ","End":"00:55.925","Text":"and A with this,"},{"Start":"00:55.925 ","End":"00:59.090","Text":"and we have to multiply them then take the trace."},{"Start":"00:59.090 ","End":"01:01.735","Text":"Let\u0027s do the product."},{"Start":"01:01.735 ","End":"01:04.710","Text":"The product is a 3 by 3 matrix,"},{"Start":"01:04.710 ","End":"01:07.170","Text":"but we don\u0027t need the whole 3 by 3,"},{"Start":"01:07.170 ","End":"01:09.710","Text":"we only need the entries on the diagonal."},{"Start":"01:09.710 ","End":"01:11.960","Text":"We don\u0027t care about the rest because"},{"Start":"01:11.960 ","End":"01:14.785","Text":"the trace is the sum of the elements on the diagonal."},{"Start":"01:14.785 ","End":"01:15.850","Text":"Let\u0027s see."},{"Start":"01:15.850 ","End":"01:17.505","Text":"First row, first column."},{"Start":"01:17.505 ","End":"01:20.720","Text":"We take the first row of 
this with the first column here,"},{"Start":"01:20.720 ","End":"01:25.940","Text":"2 times 10 plus 5 times 7 gives us 55."},{"Start":"01:25.940 ","End":"01:29.760","Text":"Then 3, 6 with 9, 6,"},{"Start":"01:29.760 ","End":"01:34.005","Text":"27 plus 36 is 63,"},{"Start":"01:34.005 ","End":"01:45.795","Text":"and then the 4, 7 with the 8, 5, 32 plus 35 is 67."},{"Start":"01:45.795 ","End":"01:47.970","Text":"The rest of them, we say we don\u0027t care,"},{"Start":"01:47.970 ","End":"01:50.280","Text":"the trace, we just add these up,"},{"Start":"01:50.280 ","End":"01:56.280","Text":"and that comes out to be 185."},{"Start":"01:56.280 ","End":"01:58.680","Text":"That\u0027s Part 1,"},{"Start":"01:58.680 ","End":"02:02.840","Text":"and Part 2 is very similar except that instead of A, B,"},{"Start":"02:02.840 ","End":"02:05.690","Text":"we have A, C in the inner product."},{"Start":"02:05.690 ","End":"02:08.990","Text":"It\u0027s the trace of C transpose times A."},{"Start":"02:08.990 ","End":"02:13.200","Text":"Here\u0027s C transpose, which is just like C,"},{"Start":"02:13.200 ","End":"02:14.865","Text":"but switching rows and columns,"},{"Start":"02:14.865 ","End":"02:18.165","Text":"and this was A which I copied."},{"Start":"02:18.165 ","End":"02:20.270","Text":"Once again, we\u0027re going to multiply,"},{"Start":"02:20.270 ","End":"02:23.980","Text":"but we only care about the diagonal."},{"Start":"02:23.980 ","End":"02:29.970","Text":"3, 1 with 10, 7, 30 plus 7, 37,"},{"Start":"02:29.970 ","End":"02:35.715","Text":"minus 5, 0 with 9, 6 just gives us minus 45,"},{"Start":"02:35.715 ","End":"02:40.110","Text":"and this with this you can check is minus 4,"},{"Start":"02:40.110 ","End":"02:41.850","Text":"it\u0027s 16 minus 20."},{"Start":"02:41.850 ","End":"02:45.185","Text":"Now we need to do an addition of these 3,"},{"Start":"02:45.185 ","End":"02:49.020","Text":"and it comes out to be minus 12."},{"Start":"02:49.310 ","End":"02:51.450","Text":"Next Number 
3,"},{"Start":"02:51.450 ","End":"02:55.350","Text":"we had to compute A inner product with B plus C."},{"Start":"02:55.350 ","End":"03:01.055","Text":"Now if I hadn\u0027t already done Parts 1 and 2,"},{"Start":"03:01.055 ","End":"03:04.640","Text":"I would have added B plus C and then done an inner product."},{"Start":"03:04.640 ","End":"03:08.645","Text":"But because we know what A, B and A, C are,"},{"Start":"03:08.645 ","End":"03:14.705","Text":"we can use the linearity rule and write this as a sum."},{"Start":"03:14.705 ","End":"03:17.990","Text":"If you look back, this came out to be 185,"},{"Start":"03:17.990 ","End":"03:22.530","Text":"this came out to be minus 12, and that\u0027s 173."},{"Start":"03:22.530 ","End":"03:23.810","Text":"Just to be pedantic,"},{"Start":"03:23.810 ","End":"03:25.850","Text":"the linearity rule, really,"},{"Start":"03:25.850 ","End":"03:27.620","Text":"we learned with the first argument,"},{"Start":"03:27.620 ","End":"03:29.290","Text":"the 1 to the left of the comma,"},{"Start":"03:29.290 ","End":"03:30.620","Text":"but because of symmetry,"},{"Start":"03:30.620 ","End":"03:35.755","Text":"it also works with the sum of something to the right of the comma, the second argument."},{"Start":"03:35.755 ","End":"03:37.665","Text":"That\u0027s just being pedantic."},{"Start":"03:37.665 ","End":"03:40.760","Text":"In Number 4, we have to compute the inner product of B and C,"},{"Start":"03:40.760 ","End":"03:42.860","Text":"and this time I just copied them,"},{"Start":"03:42.860 ","End":"03:44.540","Text":"so we don\u0027t have to keep scrolling back."},{"Start":"03:44.540 ","End":"03:46.010","Text":"This is B, this is C."},{"Start":"03:46.010 ","End":"03:48.725","Text":"Definition of the inner product is"},{"Start":"03:48.725 ","End":"03:54.790","Text":"trace of the second transpose times the first, that\u0027s this."},{"Start":"03:56.150 ","End":"03:58.755","Text":"Here we have C transpose,"},{"Start":"03:58.755 
","End":"04:02.480","Text":"notice that it\u0027s gotten from C by interchanging rows and columns."},{"Start":"04:02.480 ","End":"04:04.400","Text":"Like first row 3 minus 5,"},{"Start":"04:04.400 ","End":"04:06.950","Text":"2 is first column 3 minus 5, 2,"},{"Start":"04:06.950 ","End":"04:12.170","Text":"and B just as is, and once again,"},{"Start":"04:12.170 ","End":"04:14.000","Text":"when we do the product, we\u0027re only going to care"},{"Start":"04:14.000 ","End":"04:16.820","Text":"about the diagonal because of the trace,"},{"Start":"04:16.820 ","End":"04:23.345","Text":"and this is what we get because 3, 1 with 2, 5 is 6, plus 5 is 11."},{"Start":"04:23.345 ","End":"04:26.485","Text":"I\u0027ll leave you to check the other 2."},{"Start":"04:26.485 ","End":"04:29.190","Text":"Now that we have the diagonal,"},{"Start":"04:29.190 ","End":"04:35.090","Text":"we can get the trace by adding the elements of the diagonal,"},{"Start":"04:35.090 ","End":"04:39.980","Text":"adding them up, and it comes out to minus 24."},{"Start":"04:39.980 ","End":"04:43.455","Text":"On to Number 5,"},{"Start":"04:43.455 ","End":"04:48.825","Text":"here we have to compute this expression."},{"Start":"04:48.825 ","End":"04:51.230","Text":"On the left, it\u0027s 4A plus 10B,"},{"Start":"04:51.230 ","End":"04:53.950","Text":"and the inner product of that with 11C,"},{"Start":"04:53.950 ","End":"04:59.210","Text":"and I\u0027m going to use the properties of linearity and homogeneity."},{"Start":"04:59.210 ","End":"05:06.245","Text":"First of all, I can split it up because of this plus 4A here and 10B here,"},{"Start":"05:06.245 ","End":"05:09.040","Text":"and the 11C goes with both of them."},{"Start":"05:09.040 ","End":"05:11.640","Text":"For this 1, I\u0027m going to do 2 steps in 1."},{"Start":"05:11.640 ","End":"05:15.770","Text":"I could just first take the 4 out and then the 11 out or vice versa."},{"Start":"05:15.770 ","End":"05:18.775","Text":"I\u0027ll just take them both out, 
4 times 11."},{"Start":"05:18.775 ","End":"05:21.520","Text":"Also here, take the constants to the front, the 10,"},{"Start":"05:21.520 ","End":"05:25.184","Text":"and the 11, so it\u0027s 10 times 11 in front."},{"Start":"05:25.184 ","End":"05:29.630","Text":"Now I\u0027m relying on the previous parts of this question."},{"Start":"05:29.630 ","End":"05:32.050","Text":"We\u0027ve already computed A, C,"},{"Start":"05:32.050 ","End":"05:37.845","Text":"it came out to minus 12 and B, C, still see it here,"},{"Start":"05:37.845 ","End":"05:41.400","Text":"minus 24 because 4 times 11 is 44, 10 times 11,"},{"Start":"05:41.400 ","End":"05:45.300","Text":"110, and the result of this computation,"},{"Start":"05:45.300 ","End":"05:50.290","Text":"use a calculator or whatever, is minus 3,168."},{"Start":"05:50.410 ","End":"05:52.820","Text":"Next is Number 6,"},{"Start":"05:52.820 ","End":"05:55.804","Text":"and I just copied A and B to have them handy."},{"Start":"05:55.804 ","End":"05:59.375","Text":"Here I want the norm of A,"},{"Start":"05:59.375 ","End":"06:06.155","Text":"and the norm is the square root of the inner product of A with itself,"},{"Start":"06:06.155 ","End":"06:09.500","Text":"and just in case you forgot the inner product,"},{"Start":"06:09.500 ","End":"06:10.610","Text":"I just wrote it again,"},{"Start":"06:10.610 ","End":"06:12.745","Text":"the definition for that."},{"Start":"06:12.745 ","End":"06:19.980","Text":"What we get for the inner product is the trace of this A times this A."},{"Start":"06:20.570 ","End":"06:24.755","Text":"There\u0027s a shortcut trick I\u0027m going to use here rather than writing"},{"Start":"06:24.755 ","End":"06:30.260","Text":"A transpose alongside and figuring out the diagonal."},{"Start":"06:30.260 ","End":"06:34.025","Text":"What we can do, and this is how it works, is we can say,"},{"Start":"06:34.025 ","End":"06:38.740","Text":"10 squared plus 7 squared is 149, that\u0027s here."},{"Start":"06:38.740 
","End":"06:44.805","Text":"9 squared and 6 squared is 81 and 36, 117."},{"Start":"06:44.805 ","End":"06:50.985","Text":"Then 8 squared and 5 squared is 89,"},{"Start":"06:50.985 ","End":"06:52.920","Text":"and then add these 3 up,"},{"Start":"06:52.920 ","End":"06:57.950","Text":"it comes out to 355 under the square root sign."},{"Start":"06:57.950 ","End":"07:00.800","Text":"But if you\u0027re not happy with this shortcut,"},{"Start":"07:00.800 ","End":"07:04.315","Text":"let me show you how I got it the long way."},{"Start":"07:04.315 ","End":"07:06.960","Text":"Here we are."},{"Start":"07:06.960 ","End":"07:10.410","Text":"I wrote the A transpose out."},{"Start":"07:10.410 ","End":"07:13.040","Text":"This was A, so this is A transpose."},{"Start":"07:13.040 ","End":"07:14.360","Text":"The first row is the first column,"},{"Start":"07:14.360 ","End":"07:16.310","Text":"second row is second column,"},{"Start":"07:16.310 ","End":"07:20.400","Text":"and we get the 3 by 3,"},{"Start":"07:20.400 ","End":"07:21.960","Text":"but we only want the diagonal."},{"Start":"07:21.960 ","End":"07:25.315","Text":"The top-left is the first row with the first column"},{"Start":"07:25.315 ","End":"07:28.505","Text":"which is 10 times 10 plus 7 times 7,"},{"Start":"07:28.505 ","End":"07:34.640","Text":"and like I said, we could have just done straight away 10 squared plus 7 squared is 149."},{"Start":"07:34.640 ","End":"07:39.310","Text":"This entry here comes from 9, 6 times 9, 6,"},{"Start":"07:39.310 ","End":"07:42.840","Text":"and thus we say that\u0027s 117, and 8,"},{"Start":"07:42.840 ","End":"07:47.400","Text":"5 with 8, 5 again, 64 and 25 is 89."},{"Start":"07:47.400 ","End":"07:50.250","Text":"That\u0027s the reason this trick works,"},{"Start":"07:50.250 ","End":"07:52.200","Text":"so you can use it."},{"Start":"07:52.200 ","End":"07:54.690","Text":"Yeah. 
Once again, if you just had this,"},{"Start":"07:54.690 ","End":"07:57.200","Text":"we do 10 squared plus 7 squared."},{"Start":"07:57.200 ","End":"07:58.800","Text":"Actually, you can just keep going,"},{"Start":"07:58.800 ","End":"08:01.910","Text":"plus 9 squared plus 6 squared plus 8 squared plus 5 squared,"},{"Start":"08:01.910 ","End":"08:04.925","Text":"and you would have just got 355."},{"Start":"08:04.925 ","End":"08:07.690","Text":"That\u0027s for those who like shortcuts."},{"Start":"08:07.690 ","End":"08:10.790","Text":"That was 6 and after 6 comes 7."},{"Start":"08:10.790 ","End":"08:13.085","Text":"Let\u0027s see what we have here."},{"Start":"08:13.085 ","End":"08:15.605","Text":"This time, we want the norm of B."},{"Start":"08:15.605 ","End":"08:17.420","Text":"Very similar to the previous exercise,"},{"Start":"08:17.420 ","End":"08:19.765","Text":"just with B instead of A."},{"Start":"08:19.765 ","End":"08:24.710","Text":"It\u0027s the square root of the trace of B transpose times B,"},{"Start":"08:24.710 ","End":"08:27.695","Text":"and if we use that shortcut I showed you,"},{"Start":"08:27.695 ","End":"08:29.240","Text":"this is what we\u0027d get, but you know what?"},{"Start":"08:29.240 ","End":"08:31.465","Text":"Let\u0027s do it the long way."},{"Start":"08:31.465 ","End":"08:33.630","Text":"Here\u0027s B transpose,"},{"Start":"08:33.630 ","End":"08:36.900","Text":"here\u0027s B, we just want the diagonal."},{"Start":"08:36.900 ","End":"08:40.770","Text":"2, 5 with 2, 5 is 4 and 25 is 29."},{"Start":"08:40.770 ","End":"08:42.735","Text":"I\u0027ll leave you to check the rest."},{"Start":"08:42.735 ","End":"08:47.855","Text":"Then we have to add up 29 and 45 and 65,"},{"Start":"08:47.855 ","End":"08:49.160","Text":"and that\u0027s 139,"},{"Start":"08:49.160 ","End":"08:57.530","Text":"so our answer is the square root of 139."},{"Start":"08:57.530 ","End":"09:01.820","Text":"Now Part 8, you want the distance between A and 
B."},{"Start":"09:01.820 ","End":"09:04.940","Text":"They are matrices, but we can also consider them as vectors."},{"Start":"09:04.940 ","End":"09:10.125","Text":"By definition, it\u0027s the norm of A minus B."},{"Start":"09:10.125 ","End":"09:12.750","Text":"Actually, it would be B minus A,"},{"Start":"09:12.750 ","End":"09:16.410","Text":"but the order doesn\u0027t matter with the norm."},{"Start":"09:16.410 ","End":"09:21.120","Text":"For convenience, I copied the A and B here."},{"Start":"09:21.120 ","End":"09:26.180","Text":"Definition of norm, the square root of the inner product with itself."},{"Start":"09:26.180 ","End":"09:28.895","Text":"But of course, we need A minus B."},{"Start":"09:28.895 ","End":"09:34.550","Text":"Looking here, 10 minus 2 is 8, 9 minus 3 is 6."},{"Start":"09:34.550 ","End":"09:36.575","Text":"I\u0027ll leave you to check the rest."},{"Start":"09:36.575 ","End":"09:39.215","Text":"This is the matrix we wanted,"},{"Start":"09:39.215 ","End":"09:42.350","Text":"transposed and then multiplied with itself,"},{"Start":"09:42.350 ","End":"09:45.800","Text":"and then take the trace and then take the square root."},{"Start":"09:45.800 ","End":"09:51.390","Text":"Here, I did the transpose part."},{"Start":"09:52.270 ","End":"09:55.629","Text":"Next, the product, and we just need the diagonal."},{"Start":"09:55.629 ","End":"09:58.250","Text":"This we don\u0027t care about, don\u0027t care."},{"Start":"09:59.900 ","End":"10:07.875","Text":"Yeah. 
Here 68 would be 8, 2 with 8, 2, 64 plus 4, 68."},{"Start":"10:07.875 ","End":"10:10.060","Text":"Check the other 2."},{"Start":"10:10.450 ","End":"10:16.640","Text":"Adding these up, 68 and 36 and 20 is 124,"},{"Start":"10:16.640 ","End":"10:20.095","Text":"the answer is the square root of 124."},{"Start":"10:20.095 ","End":"10:22.570","Text":"I\u0027m staying with this exercise for a moment"},{"Start":"10:22.570 ","End":"10:25.060","Text":"because I want to show you another way of doing it."},{"Start":"10:25.060 ","End":"10:28.040","Text":"We start out the same,"},{"Start":"10:28.500 ","End":"10:33.430","Text":"the distance being the norm of the difference,"},{"Start":"10:33.430 ","End":"10:35.595","Text":"and then we have the inner product."},{"Start":"10:35.595 ","End":"10:38.335","Text":"But instead of computing A minus B,"},{"Start":"10:38.335 ","End":"10:41.455","Text":"we could use the linearity,"},{"Start":"10:41.455 ","End":"10:44.130","Text":"this is pretty much like with algebra."},{"Start":"10:44.130 ","End":"10:46.900","Text":"If you had A minus B times A minus B,"},{"Start":"10:46.900 ","End":"10:48.605","Text":"and we have to keep the order."},{"Start":"10:48.605 ","End":"10:51.345","Text":"Here we have an A, B and here we have a B, A."},{"Start":"10:51.345 ","End":"10:54.770","Text":"This with this, minus this with this, and so on,"},{"Start":"10:54.770 ","End":"10:57.900","Text":"we get 4 pieces."},{"Start":"10:58.090 ","End":"11:02.030","Text":"I wouldn\u0027t normally do it by this method unless"},{"Start":"11:02.030 ","End":"11:03.950","Text":"I already had all these values,"},{"Start":"11:03.950 ","End":"11:05.855","Text":"which actually we do."},{"Start":"11:05.855 ","End":"11:09.170","Text":"When we computed the norm of A,"},{"Start":"11:09.170 ","End":"11:12.180","Text":"and we got it to be square root of 355,"},{"Start":"11:12.180 ","End":"11:15.140","Text":"before we took the square root, if you look back,"},{"Start":"11:15.140 
","End":"11:18.950","Text":"we did inner product of A with A was 355."},{"Start":"11:18.950 ","End":"11:22.055","Text":"Similarly, in the computation of the norm of B,"},{"Start":"11:22.055 ","End":"11:24.515","Text":"we got this 139."},{"Start":"11:24.515 ","End":"11:25.730","Text":"Again, looking back,"},{"Start":"11:25.730 ","End":"11:26.920","Text":"this we\u0027ve already done,"},{"Start":"11:26.920 ","End":"11:30.935","Text":"it was 185 and this is the same thing,"},{"Start":"11:30.935 ","End":"11:33.290","Text":"different order by symmetry."},{"Start":"11:33.290 ","End":"11:36.800","Text":"We just have to do this arithmetical computation."},{"Start":"11:36.800 ","End":"11:41.520","Text":"It comes out to be 124 square root, of course."},{"Start":"11:41.520 ","End":"11:45.350","Text":"It\u0027s good to know that by the 2 different methods,"},{"Start":"11:45.350 ","End":"11:47.910","Text":"we get the same answer."},{"Start":"11:47.950 ","End":"11:53.880","Text":"We come to the last 1, Part 9."},{"Start":"11:54.050 ","End":"11:57.640","Text":"A with a hat on it, a caret,"},{"Start":"11:57.640 ","End":"12:06.045","Text":"means the unit vector we get by normalizing the vector A or the matrix A,"},{"Start":"12:06.045 ","End":"12:10.615","Text":"and the way we normalize is to divide by the norm."},{"Start":"12:10.615 ","End":"12:15.785","Text":"We already computed the norm of A as the square root of 355."},{"Start":"12:15.785 ","End":"12:19.745","Text":"So the answer is 1 over square root of 355 times this."},{"Start":"12:19.745 ","End":"12:23.450","Text":"If you wanted to, you could multiply this by each of the 6 entries."},{"Start":"12:23.450 ","End":"12:24.650","Text":"It will just look messy,"},{"Start":"12:24.650 ","End":"12:27.515","Text":"I think it\u0027s best to leave this outside."},{"Start":"12:27.515 ","End":"12:30.810","Text":"That\u0027s the last part and we\u0027re done."}],"ID":10147},{"Watched":false,"Name":"Exercise 3","Duration":"12m 
14s","ChapterTopicVideoID":10007,"CourseChapterTopicPlaylistID":7309,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.440 ","End":"00:03.675","Text":"I won\u0027t just read this exercise, I\u0027ll explain it."},{"Start":"00:03.675 ","End":"00:06.495","Text":"We have the IPS, inner product space,"},{"Start":"00:06.495 ","End":"00:09.300","Text":"and that means a vector space with an inner product."},{"Start":"00:09.300 ","End":"00:10.935","Text":"This is the vector space,"},{"Start":"00:10.935 ","End":"00:14.550","Text":"continuous functions on the interval 0-1."},{"Start":"00:14.550 ","End":"00:18.045","Text":"The inner product is defined like this,"},{"Start":"00:18.045 ","End":"00:22.980","Text":"the integral from 0-1 of the product of the functions."},{"Start":"00:22.980 ","End":"00:25.845","Text":"This is an inner product space."},{"Start":"00:25.845 ","End":"00:29.085","Text":"Now we\u0027re given 3 functions."},{"Start":"00:29.085 ","End":"00:30.435","Text":"They\u0027re all polynomials."},{"Start":"00:30.435 ","End":"00:36.510","Text":"But anyway, their functions and they are continuous everywhere, p, q, and r,"},{"Start":"00:36.510 ","End":"00:39.950","Text":"we have to compute 6 different things"},{"Start":"00:39.950 ","End":"00:43.175","Text":"and we\u0027ll take them 1 at a time."},{"Start":"00:43.175 ","End":"00:49.165","Text":"The first 1 inner product with p and q."},{"Start":"00:49.165 ","End":"00:51.975","Text":"Just look at the definition."},{"Start":"00:51.975 ","End":"00:53.970","Text":"Here\u0027s generally f and g,"},{"Start":"00:53.970 ","End":"00:56.655","Text":"I just replace them with p and q,"},{"Start":"00:56.655 ","End":"00:58.620","Text":"and I\u0027ve got this."},{"Start":"00:58.620 ","End":"01:02.020","Text":"Now, p and q are written here."},{"Start":"01:02.240 
","End":"01:06.754","Text":"Substituting, this is the integral I have to perform."},{"Start":"01:06.754 ","End":"01:13.880","Text":"Just do a product of polynomials x times 3x is 3x squared."},{"Start":"01:13.880 ","End":"01:17.300","Text":"From here we get 9x from here another x is 10x,"},{"Start":"01:17.300 ","End":"01:19.310","Text":"3 times 1 is 3."},{"Start":"01:19.310 ","End":"01:22.085","Text":"Next, the indefinite integral,"},{"Start":"01:22.085 ","End":"01:23.780","Text":"and we don\u0027t need a constant."},{"Start":"01:23.780 ","End":"01:26.810","Text":"3x squared gives us x cubed and so on."},{"Start":"01:26.810 ","End":"01:29.990","Text":"This 1 and 0 means we have to substitute 1,"},{"Start":"01:29.990 ","End":"01:32.300","Text":"substitute 0 and subtract."},{"Start":"01:32.300 ","End":"01:35.255","Text":"Now obviously when I plug in 0, I just get 0,"},{"Start":"01:35.255 ","End":"01:37.040","Text":"I only need to put the 1s in,"},{"Start":"01:37.040 ","End":"01:41.680","Text":"which means I get 1 plus 5 plus 3, which is 9."},{"Start":"01:41.680 ","End":"01:43.730","Text":"Onto the next 1."},{"Start":"01:43.730 ","End":"01:46.430","Text":"Exactly the same principle."},{"Start":"01:46.430 ","End":"01:48.410","Text":"Just instead of q,"},{"Start":"01:48.410 ","End":"01:54.115","Text":"I have r just a different function in the second place."},{"Start":"01:54.115 ","End":"01:58.280","Text":"Here\u0027s p of x, and this 1 here is r of x."},{"Start":"01:58.280 ","End":"02:02.480","Text":"Now we do the multiplication and I won\u0027t go into the details,"},{"Start":"02:02.480 ","End":"02:04.735","Text":"I\u0027ll leave you to check that."},{"Start":"02:04.735 ","End":"02:09.900","Text":"The integration x cubed gives us x^4/ 4,"},{"Start":"02:09.900 ","End":"02:14.415","Text":"x squared is x cubed over 3, and so on."},{"Start":"02:14.415 ","End":"02:17.085","Text":"Simple integration, I\u0027m sure."},{"Start":"02:17.085 ","End":"02:20.210","Text":"Again, we 
want to plug in 0 and 1"},{"Start":"02:20.210 ","End":"02:22.085","Text":"and subtract the lower from the upper."},{"Start":"02:22.085 ","End":"02:27.090","Text":"But if I plug in 0, I get 0s everywhere so I just plug in 1."},{"Start":"02:27.350 ","End":"02:29.550","Text":"This is what we get,"},{"Start":"02:29.550 ","End":"02:30.560","Text":"and if we compute it,"},{"Start":"02:30.560 ","End":"02:33.190","Text":"well, depends if you like fractions or decimals."},{"Start":"02:33.190 ","End":"02:35.960","Text":"The decimals actually continue indefinitely,"},{"Start":"02:35.960 ","End":"02:37.340","Text":"or if you want it as a fraction,"},{"Start":"02:37.340 ","End":"02:43.650","Text":"it\u0027s minus 9 and 7/12, either 1."},{"Start":"02:44.200 ","End":"02:47.360","Text":"Onto the next 1."},{"Start":"02:47.360 ","End":"02:52.520","Text":"This 1 asks us to compute the inner product of p with q plus r."},{"Start":"02:52.520 ","End":"02:56.810","Text":"Just to remind us, I copied p, q, and r,"},{"Start":"02:56.810 ","End":"03:00.020","Text":"Now, I\u0027m not going to do the obvious thing."},{"Start":"03:00.020 ","End":"03:02.540","Text":"The obvious thing is to add q plus r"},{"Start":"03:02.540 ","End":"03:04.505","Text":"and take the inner product of p with that."},{"Start":"03:04.505 ","End":"03:08.810","Text":"But because of the results of exercise 1 and 2,"},{"Start":"03:08.810 ","End":"03:11.340","Text":"I can reuse them."},{"Start":"03:11.560 ","End":"03:20.435","Text":"I\u0027m going to use the laws of inner product linearity to split this up into 2,"},{"Start":"03:20.435 ","End":"03:23.360","Text":"p with q plus p with r."},{"Start":"03:23.360 ","End":"03:26.515","Text":"Because this I did in the first exercise,"},{"Start":"03:26.515 ","End":"03:29.280","Text":"and this in the second exercise."},{"Start":"03:29.280 ","End":"03:32.264","Text":"Using the decimal version,"},{"Start":"03:32.264 ","End":"03:37.605","Text":"we have 9 plus this minus 
9.5,"},{"Start":"03:37.605 ","End":"03:40.144","Text":"and this is our answer."},{"Start":"03:40.144 ","End":"03:46.560","Text":"But if you prefer fractions."},{"Start":"03:47.870 ","End":"03:51.350","Text":"But I\u0027m going to do it again without this trick"},{"Start":"03:51.350 ","End":"03:55.307","Text":"or say we didn\u0027t have the results to this and this."},{"Start":"03:55.307 ","End":"03:57.545","Text":"Then we would do."},{"Start":"03:57.545 ","End":"04:01.040","Text":"The obvious way would be to add q plus r."},{"Start":"04:01.040 ","End":"04:03.630","Text":"If you add these 2 functions up,"},{"Start":"04:03.630 ","End":"04:07.840","Text":"we get x squared and 3x minus 4x is minus x,"},{"Start":"04:07.840 ","End":"04:12.620","Text":"and 1 minus 1 is 0, and there\u0027s p."},{"Start":"04:12.620 ","End":"04:14.825","Text":"This is what we have."},{"Start":"04:14.825 ","End":"04:20.120","Text":"This inner product gives us this integral."},{"Start":"04:20.120 ","End":"04:23.675","Text":"I multiplied the 2 polynomials,"},{"Start":"04:23.675 ","End":"04:26.485","Text":"not showing the details."},{"Start":"04:26.485 ","End":"04:30.615","Text":"This is the result of the integration;"},{"Start":"04:30.615 ","End":"04:35.025","Text":"x cubed is x^4/4, and so on."},{"Start":"04:35.025 ","End":"04:37.120","Text":"Now we substitute 0 and 1,"},{"Start":"04:37.120 ","End":"04:42.040","Text":"we just have to substitute the 1 because the 0 gives 0."},{"Start":"04:42.470 ","End":"04:45.350","Text":"If we do the computation,"},{"Start":"04:45.350 ","End":"04:49.055","Text":"we get minus 7/12 or in decimal."},{"Start":"04:49.055 ","End":"04:52.625","Text":"This is what we had before."},{"Start":"04:52.625 ","End":"04:56.185","Text":"It\u0027s good to know that we got the same answer."},{"Start":"04:56.185 ","End":"05:02.945","Text":"Onto the next exercise where we have to compute the norm of p."},{"Start":"05:02.945 ","End":"05:10.229","Text":"The norm is a square root of 
the inner product with itself."},{"Start":"05:10.900 ","End":"05:15.710","Text":"We write the inner product as an integral,"},{"Start":"05:15.710 ","End":"05:18.200","Text":"p of x is x plus 3,"},{"Start":"05:18.200 ","End":"05:22.190","Text":"and times itself just means I write it squared."},{"Start":"05:22.190 ","End":"05:27.355","Text":"The integral of x plus 3 squared is similar to the integral of x squared."},{"Start":"05:27.355 ","End":"05:30.385","Text":"If it was x squared, I would write x cubed over 3."},{"Start":"05:30.385 ","End":"05:33.670","Text":"The x plus 3 doesn\u0027t really change it much"},{"Start":"05:33.670 ","End":"05:36.290","Text":"because the inner derivative is 1."},{"Start":"05:36.290 ","End":"05:39.060","Text":"If it wasn\u0027t a coefficient 1 here,"},{"Start":"05:39.060 ","End":"05:40.650","Text":"you have to divide by it."},{"Start":"05:40.650 ","End":"05:45.370","Text":"This is the result of the indefinite integral."},{"Start":"05:45.370 ","End":"05:49.605","Text":"Now I have to plug in 1 and plug in 0 and subtract,"},{"Start":"05:49.605 ","End":"05:52.210","Text":"we plug in 1, 1 plus 3 is 4,"},{"Start":"05:52.210 ","End":"05:54.085","Text":"so it\u0027s 4 cubed over 3."},{"Start":"05:54.085 ","End":"05:59.150","Text":"Plug in 0, we get 3 cubed over 3."},{"Start":"05:59.250 ","End":"06:05.770","Text":"This is 64/3 minus 27/3,"},{"Start":"06:05.770 ","End":"06:10.645","Text":"that\u0027s 37/3 with the square root, of course."},{"Start":"06:10.645 ","End":"06:13.520","Text":"Or we could write it as,"},{"Start":"06:13.520 ","End":"06:17.059","Text":"if you prefer mixed numbers,"},{"Start":"06:17.059 ","End":"06:21.090","Text":"that would be what? 
12 and 1/3."},{"Start":"06:21.200 ","End":"06:25.865","Text":"Next, we have to compute a distance between p and q."},{"Start":"06:25.865 ","End":"06:31.070","Text":"Originally we defined it as the second minus the first q minus p."},{"Start":"06:31.070 ","End":"06:34.850","Text":"But doesn\u0027t matter the order,"},{"Start":"06:34.850 ","End":"06:40.810","Text":"the norm doesn\u0027t make a difference, p minus q."},{"Start":"06:40.810 ","End":"06:43.020","Text":"There was quick subtraction."},{"Start":"06:43.020 ","End":"06:45.045","Text":"X minus 3x is minus 2x,"},{"Start":"06:45.045 ","End":"06:46.725","Text":"3 minus 1 is 2."},{"Start":"06:46.725 ","End":"06:48.825","Text":"We have the norm of this,"},{"Start":"06:48.825 ","End":"06:53.290","Text":"and the norm is the square root of the inner product with itself."},{"Start":"06:53.290 ","End":"06:58.780","Text":"I guess I could have written an extra stage as the square root of"},{"Start":"06:58.780 ","End":"07:06.740","Text":"the inner product of minus 2x plus 2 with minus 2x plus 2."},{"Start":"07:07.310 ","End":"07:12.255","Text":"Then the inner product is the integral,"},{"Start":"07:12.255 ","End":"07:17.110","Text":"please stop skipping steps when you get good at this."},{"Start":"07:17.760 ","End":"07:21.130","Text":"Also, I could have written it straight away as squared"},{"Start":"07:21.130 ","End":"07:24.340","Text":"rather than multiplying it with itself."},{"Start":"07:24.340 ","End":"07:29.125","Text":"Now, if this was just x squared,"},{"Start":"07:29.125 ","End":"07:33.550","Text":"the indefinite integral would be x cubed over 3."},{"Start":"07:33.550 ","End":"07:37.535","Text":"I start off with this thing cubed over 3."},{"Start":"07:37.535 ","End":"07:44.080","Text":"But then I have to remember there\u0027s an inner internal derivative,"},{"Start":"07:44.080 ","End":"07:46.540","Text":"which is this minus 2 here,"},{"Start":"07:46.540 ","End":"07:50.545","Text":"and I have to divide by that 
minus 2."},{"Start":"07:50.545 ","End":"07:54.815","Text":"Now I have to substitute 1 and 0."},{"Start":"07:54.815 ","End":"07:58.720","Text":"Substituting 1 is the easier part that just gives us 0"},{"Start":"07:58.720 ","End":"08:02.545","Text":"because minus 2x plus 2 is 0 when x is 1."},{"Start":"08:02.545 ","End":"08:07.340","Text":"We just need the 0, which gives us 1 over minus 2."},{"Start":"08:07.340 ","End":"08:10.740","Text":"This minus 2x disappears."},{"Start":"08:10.740 ","End":"08:16.380","Text":"It\u0027s just 2 cubed over 3, which is 8/3."},{"Start":"08:16.380 ","End":"08:19.900","Text":"All together the minus cancels with the minus,"},{"Start":"08:19.900 ","End":"08:23.195","Text":"2 into 8 goes 4 times,"},{"Start":"08:23.195 ","End":"08:26.290","Text":"so square root of 4/3."},{"Start":"08:26.290 ","End":"08:28.220","Text":"I\u0027d like to show you another way"},{"Start":"08:28.220 ","End":"08:34.390","Text":"we could have done this and this point we got to earlier."},{"Start":"08:34.390 ","End":"08:41.315","Text":"As before, we have to do the inner product of p minus q with itself."},{"Start":"08:41.315 ","End":"08:44.780","Text":"But this time we use the linearity,"},{"Start":"08:44.780 ","End":"08:49.945","Text":"the properties of the inner product to expand this p with p,"},{"Start":"08:49.945 ","End":"08:53.445","Text":"minus q with p, and minus p with q,"},{"Start":"08:53.445 ","End":"08:55.680","Text":"and plus q, q."},{"Start":"08:55.680 ","End":"09:02.015","Text":"The reason I can do this way or why it\u0027s good to do it this way is already,"},{"Start":"09:02.015 ","End":"09:05.710","Text":"have done all these computations earlier."},{"Start":"09:05.710 ","End":"09:11.855","Text":"Inner product of p with p we\u0027ve computed during our calculation of the norm of p,"},{"Start":"09:11.855 ","End":"09:13.570","Text":"and here during the norm of q,"},{"Start":"09:13.570 ","End":"09:17.405","Text":"and somewhere back, we also computed 
this."},{"Start":"09:17.405 ","End":"09:19.100","Text":"Now the first 1,"},{"Start":"09:19.100 ","End":"09:21.590","Text":"I remember it came out 37/3,"},{"Start":"09:21.590 ","End":"09:24.745","Text":"and I said it could also be written as 12.33,"},{"Start":"09:24.745 ","End":"09:27.375","Text":"this 1 came out to be 9."},{"Start":"09:27.375 ","End":"09:28.740","Text":"So if this is 9,"},{"Start":"09:28.740 ","End":"09:30.665","Text":"so is this by symmetry,"},{"Start":"09:30.665 ","End":"09:33.690","Text":"and this was 7."},{"Start":"09:33.690 ","End":"09:36.680","Text":"We just have to do the subtraction"},{"Start":"09:36.680 ","End":"09:42.360","Text":"and this comes out as 4/3 under the square root sign."},{"Start":"09:42.580 ","End":"09:47.900","Text":"We got the same answer as before, which is good."},{"Start":"09:47.980 ","End":"09:51.724","Text":"This is the 6th and last."},{"Start":"09:51.724 ","End":"09:56.310","Text":"We want to compute r with a hat on it or caret."},{"Start":"09:56.310 ","End":"10:01.625","Text":"This means the unit vector that we get by normalizing"},{"Start":"10:01.625 ","End":"10:07.700","Text":"r. 
What we want to do is take r and divide it by its norm."},{"Start":"10:07.700 ","End":"10:10.885","Text":"That\u0027s what we mean by normalizing it."},{"Start":"10:10.885 ","End":"10:16.010","Text":"Now here\u0027s r and we haven\u0027t computed the norm of r,"},{"Start":"10:16.010 ","End":"10:18.770","Text":"so we have to do that now."},{"Start":"10:21.500 ","End":"10:24.140","Text":"Without the square root,"},{"Start":"10:24.140 ","End":"10:28.669","Text":"it would just be the integral of r times itself."},{"Start":"10:28.669 ","End":"10:31.720","Text":"I could have written it as r squared right away."},{"Start":"10:31.720 ","End":"10:34.665","Text":"R, I copy from here,"},{"Start":"10:34.665 ","End":"10:37.520","Text":"and you have to do a bit of algebra"},{"Start":"10:37.520 ","End":"10:43.830","Text":"to multiply a trinomial expression by another trinomial expression."},{"Start":"10:43.940 ","End":"10:47.280","Text":"There\u0027s actually 9 products here,"},{"Start":"10:47.280 ","End":"10:52.240","Text":"x squared with all of these 3 then minus 4x with all of these 3."},{"Start":"10:52.240 ","End":"10:56.245","Text":"You could pause and check the calculations."},{"Start":"10:56.245 ","End":"11:01.560","Text":"After collecting like terms that\u0027s what we get."},{"Start":"11:01.560 ","End":"11:06.565","Text":"We have this integral to compute."},{"Start":"11:06.565 ","End":"11:09.570","Text":"Integral of x^4, x^5/5,"},{"Start":"11:09.570 ","End":"11:14.320","Text":"x cubed is x^4/4, and so on."},{"Start":"11:14.320 ","End":"11:17.450","Text":"Then we just have to substitute 1 in this"},{"Start":"11:17.450 ","End":"11:20.000","Text":"because 0 doesn\u0027t give us anything."},{"Start":"11:20.000 ","End":"11:27.740","Text":"It\u0027s 1/5, minus 8/4 which is 2, plus 14/3, plus 8/2 which is 4."},{"Start":"11:27.740 ","End":"11:29.075","Text":"From here, plus 1,"},{"Start":"11:29.075 ","End":"11:32.250","Text":"and it comes out to be 7.86."},{"Start":"11:33.080 ","End":"11:38.690","Text":"But 
what we computed here was the inner product of r with itself."},{"Start":"11:38.690 ","End":"11:41.150","Text":"The norm of r we still have to take the square root."},{"Start":"11:41.150 ","End":"11:43.655","Text":"It\u0027s the square root of this."},{"Start":"11:43.655 ","End":"11:48.030","Text":"The final step for this normalized r,"},{"Start":"11:48.030 ","End":"11:50.960","Text":"the unit vector is to take the vector r."},{"Start":"11:50.960 ","End":"11:53.765","Text":"Vector is a polynomial function in this case,"},{"Start":"11:53.765 ","End":"11:55.220","Text":"and divide it by r."},{"Start":"11:55.220 ","End":"12:00.730","Text":"Here\u0027s a function r and here\u0027s the norm."},{"Start":"12:00.730 ","End":"12:04.150","Text":"Just leave it like this."},{"Start":"12:04.430 ","End":"12:08.480","Text":"If you try to divide each coefficient by this,"},{"Start":"12:08.480 ","End":"12:09.470","Text":"it would just look messy,"},{"Start":"12:09.470 ","End":"12:11.300","Text":"so we\u0027ll leave, this as the answer."},{"Start":"12:11.300 ","End":"12:14.820","Text":"We are done."}],"ID":10148},{"Watched":false,"Name":"Exercise 4","Duration":"1m 39s","ChapterTopicVideoID":9636,"CourseChapterTopicPlaylistID":7309,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.680","Text":"In this exercise, we have to prove this identity."},{"Start":"00:04.680 ","End":"00:07.950","Text":"We assume that we\u0027re in an inner product space,"},{"Start":"00:07.950 ","End":"00:12.120","Text":"and that this is the norm and that u and v are any 2 vectors."},{"Start":"00:12.120 ","End":"00:14.505","Text":"We have to show that this holds."},{"Start":"00:14.505 ","End":"00:18.180","Text":"Now we\u0027ll start with the left-hand side and reach the right-hand side."},{"Start":"00:18.180 
","End":"00:20.220","Text":"Now there\u0027s something that can help us here,"},{"Start":"00:20.220 ","End":"00:24.465","Text":"that\u0027s true in general, for every vector,"},{"Start":"00:24.465 ","End":"00:28.820","Text":"let\u0027s say x, and I want its norm squared"},{"Start":"00:28.820 ","End":"00:33.985","Text":"that is equal to the inner product of x with itself."},{"Start":"00:33.985 ","End":"00:37.760","Text":"That\u0027s because we define the norm to be the square root of this."},{"Start":"00:37.760 ","End":"00:40.445","Text":"So if we squared it, we can remove the square root."},{"Start":"00:40.445 ","End":"00:43.500","Text":"This is used an awful lot."},{"Start":"00:43.730 ","End":"00:48.500","Text":"In particular, I can apply it where x is u plus v."},{"Start":"00:48.500 ","End":"00:52.615","Text":"The norm squared is this with itself."},{"Start":"00:52.615 ","End":"00:55.890","Text":"Now we expand this using linearity"},{"Start":"00:55.890 ","End":"00:58.425","Text":"and we\u0027ve done this thing before so,"},{"Start":"00:58.425 ","End":"01:01.080","Text":"I won\u0027t go into the details."},{"Start":"01:01.080 ","End":"01:04.015","Text":"But look, the first and the last terms,"},{"Start":"01:04.015 ","End":"01:07.685","Text":"I can use this formula again only from right to left."},{"Start":"01:07.685 ","End":"01:10.790","Text":"These 2 I can collect together because of symmetry,"},{"Start":"01:10.790 ","End":"01:13.240","Text":"u, v, and v, u are the same thing."},{"Start":"01:13.240 ","End":"01:18.800","Text":"Yeah, I put in an extra step just to replace v, u with u, v."},{"Start":"01:18.800 ","End":"01:22.220","Text":"Okay, now what I said there earlier,"},{"Start":"01:22.220 ","End":"01:26.825","Text":"u with u from this formula is norm of u squared."},{"Start":"01:26.825 ","End":"01:29.780","Text":"The last 1 is norm of v squared."},{"Start":"01:29.780 ","End":"01:33.235","Text":"This plus itself it\u0027s just twice this."},{"Start":"01:33.235 
","End":"01:36.200","Text":"That is exactly what is on the right-hand side"},{"Start":"01:36.200 ","End":"01:39.630","Text":"and so we\u0027ve proved it, and we are done."}],"ID":10149},{"Watched":false,"Name":"Exercise 5","Duration":"1m 10s","ChapterTopicVideoID":9637,"CourseChapterTopicPlaylistID":7309,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.980","Text":"In this exercise, presumably we\u0027re in an inner product space"},{"Start":"00:04.980 ","End":"00:07.755","Text":"where we have a norm and an inner product,"},{"Start":"00:07.755 ","End":"00:10.860","Text":"we have to show that this holds for every u and v."},{"Start":"00:10.860 ","End":"00:14.070","Text":"Let\u0027s start with the left-hand side."},{"Start":"00:14.070 ","End":"00:17.520","Text":"I want to remind you that there\u0027s a useful formula that"},{"Start":"00:17.520 ","End":"00:24.210","Text":"the norm of something squared is just the inner product of it with itself."},{"Start":"00:24.210 ","End":"00:26.500","Text":"We\u0027ve seen this before."},{"Start":"00:26.660 ","End":"00:31.200","Text":"Here we have the inner product of u minus v with itself."},{"Start":"00:31.200 ","End":"00:34.665","Text":"Now we\u0027re going to use linearity to expand."},{"Start":"00:34.665 ","End":"00:38.030","Text":"I think we\u0027ve seen this more than once before,"},{"Start":"00:38.030 ","End":"00:41.420","Text":"so I\u0027ll just leave this result."},{"Start":"00:41.420 ","End":"00:44.570","Text":"Of course, u, v, and v, u,"},{"Start":"00:44.570 ","End":"00:46.690","Text":"are the same by symmetry."},{"Start":"00:46.690 ","End":"00:49.310","Text":"Now I\u0027m going to do 3 things, u, with u"},{"Start":"00:49.310 ","End":"00:52.145","Text":"using this formula will be norm of u squared,"},{"Start":"00:52.145 
","End":"00:53.885","Text":"similarly with v,"},{"Start":"00:53.885 ","End":"00:55.970","Text":"and in the middle I\u0027ll just combine them,"},{"Start":"00:55.970 ","End":"00:57.920","Text":"because there\u0027s 2 of these."},{"Start":"00:57.920 ","End":"01:00.305","Text":"We end up getting this,"},{"Start":"01:00.305 ","End":"01:01.985","Text":"which if you see,"},{"Start":"01:01.985 ","End":"01:06.680","Text":"is the same as the right-hand side of what we had to prove."},{"Start":"01:06.680 ","End":"01:11.100","Text":"We started from the left we reached the right and so we are done."}],"ID":10150},{"Watched":false,"Name":"Exercise 6","Duration":"1m 6s","ChapterTopicVideoID":9638,"CourseChapterTopicPlaylistID":7309,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:03.720","Text":"In this exercise, we have to prove this identity"},{"Start":"00:03.720 ","End":"00:08.385","Text":"where u and v are any 2 vectors in an inner product space,"},{"Start":"00:08.385 ","End":"00:12.195","Text":"so it makes sense to take inner products and norms."},{"Start":"00:12.195 ","End":"00:17.260","Text":"Let\u0027s start with the left-hand side and see if we can reach the right-hand side."},{"Start":"00:17.270 ","End":"00:21.060","Text":"On the left, it\u0027s u minus v, u plus v,"},{"Start":"00:21.060 ","End":"00:21.900","Text":"I just copied it."},{"Start":"00:21.900 ","End":"00:27.930","Text":"Now we\u0027ll expand this using linearity will get u with u minus v with u"},{"Start":"00:27.930 ","End":"00:30.750","Text":"plus u with v minus v with v."},{"Start":"00:30.750 ","End":"00:37.425","Text":"By symmetry, in a product of u with v is the same as the inner product of v with u."},{"Start":"00:37.425 ","End":"00:41.385","Text":"So I can get this. 
I just switched the order on these 2."},{"Start":"00:41.385 ","End":"00:50.600","Text":"Now this cancels with this and what we\u0027re left with is u with u,"},{"Start":"00:50.600 ","End":"00:53.750","Text":"which is the same as the norm of u squared."},{"Start":"00:53.750 ","End":"00:56.105","Text":"We discussed this before."},{"Start":"00:56.105 ","End":"01:00.365","Text":"Similarly, v with itself is the norm of v squared,"},{"Start":"01:00.365 ","End":"01:03.560","Text":"and that\u0027s exactly what we needed for the right-hand side."},{"Start":"01:03.560 ","End":"01:05.820","Text":"So we are done."}],"ID":10151},{"Watched":false,"Name":"Exercise 7","Duration":"3m 41s","ChapterTopicVideoID":9639,"CourseChapterTopicPlaylistID":7309,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:01.620","Text":"In this exercise,"},{"Start":"00:01.620 ","End":"00:04.920","Text":"we have to prove the following identity"},{"Start":"00:04.920 ","End":"00:10.470","Text":"where u and v are vectors in an inner product space"},{"Start":"00:10.470 ","End":"00:14.085","Text":"and why the norm makes sense."},{"Start":"00:14.085 ","End":"00:17.550","Text":"We prove this and then after that,"},{"Start":"00:17.550 ","End":"00:19.680","Text":"we\u0027ll give a geometric interpretation."},{"Start":"00:19.680 ","End":"00:25.454","Text":"I want to remind you of a useful formula that in an inner product space,"},{"Start":"00:25.454 ","End":"00:31.665","Text":"the norm of some vector squared is the inner product of it with itself."},{"Start":"00:31.665 ","End":"00:33.840","Text":"We\u0027ll start with the left-hand side,"},{"Start":"00:33.840 ","End":"00:35.970","Text":"which I copied here."},{"Start":"00:35.970 ","End":"00:41.800","Text":"I\u0027m going to use the results of the 2 previous 
exercises."},{"Start":"00:41.800 ","End":"00:44.490","Text":"Maybe it wasn\u0027t exactly the last 2 exercises,"},{"Start":"00:44.490 ","End":"00:49.480","Text":"but 2 of the previous exercises in this set were this and this."},{"Start":"00:49.480 ","End":"00:51.810","Text":"If we add them the 2 u, v"},{"Start":"00:51.810 ","End":"00:53.840","Text":"will cancel out with minus 2 u, v"},{"Start":"00:53.840 ","End":"00:58.705","Text":"and I can group together twice this and twice this."},{"Start":"00:58.705 ","End":"01:02.030","Text":"Here we are which is the right-hand side."},{"Start":"01:02.030 ","End":"01:06.530","Text":"That was straightforward because we had 2 previous exercises."},{"Start":"01:06.530 ","End":"01:12.515","Text":"If not we would have expanded each 1 of these according to this formula."},{"Start":"01:12.515 ","End":"01:19.890","Text":"Now, let\u0027s give a geometric interpretation to what this is."},{"Start":"01:19.930 ","End":"01:26.540","Text":"I brought a sketch which I found on the Internet."},{"Start":"01:26.540 ","End":"01:28.520","Text":"I\u0027ll give you the sketch in a moment."},{"Start":"01:28.520 ","End":"01:30.080","Text":"I just copied what we proved"},{"Start":"01:30.080 ","End":"01:34.900","Text":"and what it\u0027s called in geometry is the parallelogram law."},{"Start":"01:34.900 ","End":"01:40.925","Text":"The reason this is called the parallelogram law, here\u0027s the sketch."},{"Start":"01:40.925 ","End":"01:46.610","Text":"We\u0027ll see why and what it actually states in words."},{"Start":"01:46.610 ","End":"01:48.605","Text":"First of all, let\u0027s look at this picture."},{"Start":"01:48.605 ","End":"01:52.700","Text":"These 2 vectors here will be our x and y."},{"Start":"01:52.700 ","End":"01:57.105","Text":"I\u0027m going to take vectors in the plane,"},{"Start":"01:57.105 ","End":"02:01.990","Text":"maybe in Euclidean space."},{"Start":"02:02.270 ","End":"02:05.345","Text":"They have to be in the same 
plane."},{"Start":"02:05.345 ","End":"02:09.650","Text":"X plus y is this 1 here; these are red,"},{"Start":"02:09.650 ","End":"02:11.765","Text":"that would be the purple 1."},{"Start":"02:11.765 ","End":"02:17.010","Text":"The way we sum vectors is by completing the parallelogram."},{"Start":"02:17.330 ","End":"02:22.115","Text":"X minus y is this vector here."},{"Start":"02:22.115 ","End":"02:28.070","Text":"The 1 that you subtract is at the tip and the other 1 is at the tail."},{"Start":"02:28.070 ","End":"02:29.810","Text":"The x is here, y is here."},{"Start":"02:29.810 ","End":"02:32.675","Text":"This is the difference x minus y."},{"Start":"02:32.675 ","End":"02:37.215","Text":"Now, the norm is the length."},{"Start":"02:37.215 ","End":"02:42.005","Text":"What we have on the left-hand side is the length of the purple 1 squared"},{"Start":"02:42.005 ","End":"02:47.150","Text":"plus the length of the pink magenta 1 squared."},{"Start":"02:47.150 ","End":"02:49.340","Text":"On the right-hand side,"},{"Start":"02:49.340 ","End":"02:52.040","Text":"we have twice x squared,"},{"Start":"02:52.040 ","End":"02:56.360","Text":"which I can also look at as this squared plus this squared."},{"Start":"02:56.360 ","End":"03:00.890","Text":"When I say this, I mean its length plus this squared, plus this squared."},{"Start":"03:00.890 ","End":"03:02.900","Text":"In other words, the right-hand side I could say"},{"Start":"03:02.900 ","End":"03:08.695","Text":"is the sum of the squares of all the 4 sides of the parallelogram."},{"Start":"03:08.695 ","End":"03:11.000","Text":"If I break them up into groups,"},{"Start":"03:11.000 ","End":"03:12.485","Text":"I know this is equal to this,"},{"Start":"03:12.485 ","End":"03:14.410","Text":"but that\u0027s what we have."},{"Start":"03:14.410 ","End":"03:17.645","Text":"Here we have the sum of the squares of the 2 diagonals."},{"Start":"03:17.645 ","End":"03:20.210","Text":"Let me just write that in words."},{"Start":"03:20.210 
","End":"03:22.320","Text":"Here it is."},{"Start":"03:22.320 ","End":"03:26.570","Text":"The sum of the squares of the lengths of the 4 sides of a parallelogram"},{"Start":"03:26.570 ","End":"03:31.825","Text":"equals the sum of the squares of the lengths of the 2 diagonals."},{"Start":"03:31.825 ","End":"03:38.670","Text":"We\u0027ve proved it at least for 2D geometry."},{"Start":"03:38.670 ","End":"03:41.590","Text":"We are done."}],"ID":10152},{"Watched":false,"Name":"Exercise 8","Duration":"1m 15s","ChapterTopicVideoID":9640,"CourseChapterTopicPlaylistID":7309,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:03.900","Text":"In this exercise, we have to prove this identity"},{"Start":"00:03.900 ","End":"00:06.780","Text":"where we are in an inner product space"},{"Start":"00:06.780 ","End":"00:11.130","Text":"and we have to show that for all u and v, this equality holds."},{"Start":"00:11.130 ","End":"00:13.875","Text":"This is inner product and this is norm."},{"Start":"00:13.875 ","End":"00:15.870","Text":"Let\u0027s start with the left-hand side"},{"Start":"00:15.870 ","End":"00:18.705","Text":"and see if we can reach the right-hand side."},{"Start":"00:18.705 ","End":"00:23.040","Text":"Now, I\u0027m going to use some of the results from previous exercise."},{"Start":"00:23.040 ","End":"00:27.405","Text":"We had the previous exercise which was to expand this,"},{"Start":"00:27.405 ","End":"00:33.205","Text":"and we also had an exercise to expand u minus v norm squared."},{"Start":"00:33.205 ","End":"00:38.665","Text":"If you check, what we got for this was this expression here."},{"Start":"00:38.665 ","End":"00:43.460","Text":"What we got for this was this expression here."},{"Start":"00:43.460 ","End":"00:46.280","Text":"Now look, a lot of stuff cancels."},{"Start":"00:46.280 
","End":"00:50.285","Text":"Like norm of u squared minus norm of u squared,"},{"Start":"00:50.285 ","End":"00:54.140","Text":"norm of v squared minus norm of v squared."},{"Start":"00:54.140 ","End":"00:58.790","Text":"These 2 combine, as there\u0027s a minus minus 2,"},{"Start":"00:58.790 ","End":"01:01.090","Text":"so I get 4 of these."},{"Start":"01:01.090 ","End":"01:06.600","Text":"All we\u0027re left with is the 1/4 and then 4 of inner product of u with v."},{"Start":"01:06.600 ","End":"01:11.360","Text":"The 1/4 and the 4 cancel and we\u0027re left with inner product of u with v,"},{"Start":"01:11.360 ","End":"01:12.980","Text":"which is what we\u0027re supposed to get,"},{"Start":"01:12.980 ","End":"01:15.570","Text":"and so we are done."}],"ID":10153}],"Thumbnail":null,"ID":7309},{"Name":"Cauchy–Schwarz Inequality","TopicPlaylistFirstVideoID":0,"Duration":null,"Videos":[{"Watched":false,"Name":"Lesson 1 - Cauchy–Schwarz Inequality","Duration":"8m 23s","ChapterTopicVideoID":10010,"CourseChapterTopicPlaylistID":7310,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:02.220","Text":"In this clip, we\u0027re going to learn about"},{"Start":"00:02.220 ","End":"00:04.980","Text":"a very important inequality in mathematics"},{"Start":"00:04.980 ","End":"00:08.065","Text":"called the Cauchy-Schwarz inequality,"},{"Start":"00:08.065 ","End":"00:10.350","Text":"named after 2 different mathematicians."},{"Start":"00:10.350 ","End":"00:12.720","Text":"I\u0027m sorry, I don\u0027t know much about the history."},{"Start":"00:12.720 ","End":"00:15.990","Text":"But this is the name of the inequality."},{"Start":"00:15.990 ","End":"00:20.400","Text":"As usual, we are in an inner product space,"},{"Start":"00:20.400 ","End":"00:22.680","Text":"always over the reals."},{"Start":"00:22.680 
","End":"00:24.060","Text":"In this course, at least, though"},{"Start":"00:24.060 ","End":"00:27.435","Text":"there are inner product spaces over the complex numbers."},{"Start":"00:27.435 ","End":"00:33.945","Text":"We\u0027ll assume that u and v are 2 vectors in an inner product space."},{"Start":"00:33.945 ","End":"00:37.960","Text":"It\u0027s always true that this inequality holds."},{"Start":"00:37.960 ","End":"00:39.924","Text":"Now let\u0027s look at it closely."},{"Start":"00:39.924 ","End":"00:43.130","Text":"This is the inner product of u with v."},{"Start":"00:43.130 ","End":"00:46.655","Text":"The inner product of 2 vectors gives us a scalar,"},{"Start":"00:46.655 ","End":"00:50.795","Text":"a number, and as a number we can take its absolute value."},{"Start":"00:50.795 ","End":"00:53.495","Text":"So it\u0027ll be a non-negative number here."},{"Start":"00:53.495 ","End":"00:55.595","Text":"On the right-hand side,"},{"Start":"00:55.595 ","End":"00:58.970","Text":"the norm of a vector is also a scalar,"},{"Start":"00:58.970 ","End":"01:03.300","Text":"a number, happens to be also non-negative,"},{"Start":"01:03.300 ","End":"01:04.500","Text":"but then here,"},{"Start":"01:04.500 ","End":"01:06.995","Text":"in any event we have a number times a number."},{"Start":"01:06.995 ","End":"01:10.010","Text":"So this is just an inequality that 1 number"},{"Start":"01:10.010 ","End":"01:13.940","Text":"is less than or equal to the product of these 2 numbers."},{"Start":"01:14.570 ","End":"01:21.720","Text":"I think we\u0027re going to prove it, I mean to say, let\u0027s prove it."},{"Start":"01:21.720 ","End":"01:24.220","Text":"Here goes."},{"Start":"01:24.220 ","End":"01:32.995","Text":"Now, we\u0027ll start off with assuming that u here is not equal to 0 because if u equals 0,"},{"Start":"01:32.995 ","End":"01:37.180","Text":"then the left-hand side is 0 because 0 inner product with anything is"},{"Start":"01:37.180 ","End":"01:41.140","Text":"0 and the norm of the 0 vector 
is 0."},{"Start":"01:41.140 ","End":"01:45.410","Text":"We\u0027ll just get 0 less than or equal to 0, which is true."},{"Start":"01:45.650 ","End":"01:49.095","Text":"This case is dealt with separately."},{"Start":"01:49.095 ","End":"01:54.265","Text":"For the rest of the proof, I\u0027m assuming that u is not the 0 vector."},{"Start":"01:54.265 ","End":"01:58.570","Text":"Now let\u0027s say that x is a scalar,"},{"Start":"01:58.570 ","End":"02:02.350","Text":"that is a real number, like we said."},{"Start":"02:02.350 ","End":"02:08.440","Text":"Then I claim that this thing squared bigger or equal to 0,"},{"Start":"02:08.440 ","End":"02:13.465","Text":"basically because anything squared is bigger or equal to 0,"},{"Start":"02:13.465 ","End":"02:14.890","Text":"so this is a number,"},{"Start":"02:14.890 ","End":"02:18.145","Text":"the norm squared is bigger or equal to 0."},{"Start":"02:18.145 ","End":"02:22.090","Text":"Now remember that the norm of anything squared"},{"Start":"02:22.090 ","End":"02:24.775","Text":"is the inner product of it with itself."},{"Start":"02:24.775 ","End":"02:27.385","Text":"In case you forgot, I\u0027ll remind you."},{"Start":"02:27.385 ","End":"02:31.765","Text":"Because the norm of any vector, say v,"},{"Start":"02:31.765 ","End":"02:34.435","Text":"not necessarily that v,"},{"Start":"02:34.435 ","End":"02:41.004","Text":"is defined to be the square root of the inner product of v with itself."},{"Start":"02:41.004 ","End":"02:44.410","Text":"If you just raise both sides to the power of 2,"},{"Start":"02:44.410 ","End":"02:48.020","Text":"I could put a 2 here and get rid of the square root."},{"Start":"02:48.470 ","End":"02:53.570","Text":"Then we\u0027d get this equality for any vector v."},{"Start":"02:53.570 ","End":"02:57.110","Text":"Back here,"},{"Start":"02:57.110 ","End":"03:00.785","Text":"the norm squared is the inner product of it with itself."},{"Start":"03:00.785 ","End":"03:03.410","Text":"Because of bigger or equal to 
0,"},{"Start":"03:03.410 ","End":"03:05.495","Text":"this is bigger or equal to 0."},{"Start":"03:05.495 ","End":"03:07.760","Text":"So far so good."},{"Start":"03:07.760 ","End":"03:12.410","Text":"At this point, we expand this using linearity and homogeneity."},{"Start":"03:12.410 ","End":"03:14.720","Text":"We\u0027ve done this thing before,"},{"Start":"03:14.720 ","End":"03:17.239","Text":"so I won\u0027t dwell on the details."},{"Start":"03:17.239 ","End":"03:24.420","Text":"Basically, we have xu with xu, xu with v, v with xu, v with v,"},{"Start":"03:24.420 ","End":"03:26.540","Text":"4 inner products, and we add them."},{"Start":"03:26.540 ","End":"03:30.320","Text":"We also take the constants, the scalars outside."},{"Start":"03:30.320 ","End":"03:34.480","Text":"This gives us this expression."},{"Start":"03:34.480 ","End":"03:40.410","Text":"The middle 2 are the same because of the symmetry property."},{"Start":"03:40.420 ","End":"03:46.805","Text":"These 2 combine to give twice u, v with x."},{"Start":"03:46.805 ","End":"03:50.780","Text":"Also remember that the inner product of something"},{"Start":"03:50.780 ","End":"03:52.910","Text":"with itself is the norm squared."},{"Start":"03:52.910 ","End":"03:56.195","Text":"I did that and put it in front of the x squared."},{"Start":"03:56.195 ","End":"04:02.240","Text":"Here also the inner product of v with v is the norm of v squared."},{"Start":"04:02.240 ","End":"04:05.455","Text":"This is the place where we\u0027re at."},{"Start":"04:05.455 ","End":"04:13.590","Text":"What I have on the left is a quadratic expression or quadratic function of x."},{"Start":"04:15.080 ","End":"04:19.800","Text":"We can call this Part A, this Part B, and this Part C."},{"Start":"04:19.800 ","End":"04:23.525","Text":"Then we have Ax squared plus Bx plus C,"},{"Start":"04:23.525 ","End":"04:28.700","Text":"and it\u0027s bigger or equal to 0 for all x."},{"Start":"04:28.700 ","End":"04:33.215","Text":"Now I hope you remember about 
quadratic functions."},{"Start":"04:33.215 ","End":"04:40.249","Text":"What I want to do is draw a rough sketch of this quadratic function,"},{"Start":"04:40.249 ","End":"04:51.580","Text":"which is that y equals Ax squared plus Bx plus C, the left-hand side."},{"Start":"04:51.580 ","End":"04:58.505","Text":"Now notice that A is bigger than 0 because,"},{"Start":"04:58.505 ","End":"05:00.840","Text":"well, it\u0027s something squared."},{"Start":"05:00.840 ","End":"05:05.760","Text":"Also u is not 0, so A is not 0,"},{"Start":"05:05.760 ","End":"05:07.745","Text":"so it has to be positive,"},{"Start":"05:07.745 ","End":"05:13.475","Text":"which means that this is an upward facing parabola."},{"Start":"05:13.475 ","End":"05:15.725","Text":"Now there are 3 possibilities."},{"Start":"05:15.725 ","End":"05:21.830","Text":"Either the parabola is totally above the x axis,"},{"Start":"05:21.830 ","End":"05:25.595","Text":"so I\u0027ll label them x and y."},{"Start":"05:25.595 ","End":"05:31.975","Text":"The other possibility is that it just grazes the axis."},{"Start":"05:31.975 ","End":"05:36.060","Text":"Just grazes at 1 point."},{"Start":"05:36.060 ","End":"05:42.920","Text":"This corresponds to the case where there\u0027s 1 solution to the equation equals 0."},{"Start":"05:42.920 ","End":"05:45.424","Text":"This corresponds to no solutions."},{"Start":"05:45.424 ","End":"05:47.990","Text":"There is a third case which can\u0027t be,"},{"Start":"05:47.990 ","End":"05:54.770","Text":"which is where the parabola actually crosses the x axis."},{"Start":"05:54.770 ","End":"05:59.420","Text":"But that can\u0027t be because it\u0027s got to be always bigger or equal to 0,"},{"Start":"05:59.420 ","End":"06:01.430","Text":"so I\u0027ll erase this possibility."},{"Start":"06:01.430 ","End":"06:04.490","Text":"I want to remind you that there\u0027s something called"},{"Start":"06:04.490 ","End":"06:08.545","Text":"the discriminant of the quadratic expression,"},{"Start":"06:08.545 
","End":"06:13.800","Text":"Delta usually, and it\u0027s B squared minus 4AC."},{"Start":"06:13.800 ","End":"06:18.545","Text":"We can tell which kind of parabola we have according to Delta."},{"Start":"06:18.545 ","End":"06:22.054","Text":"If it\u0027s totally above the x axis,"},{"Start":"06:22.054 ","End":"06:26.825","Text":"that corresponds to Delta less than 0."},{"Start":"06:26.825 ","End":"06:30.062","Text":"If it just touches at 1 place,"},{"Start":"06:30.062 ","End":"06:33.650","Text":"then Delta is equal to 0 and the third case"},{"Start":"06:33.650 ","End":"06:36.890","Text":"which I threw out was Delta bigger than 0."},{"Start":"06:36.890 ","End":"06:38.510","Text":"We only have 2 possibilities,"},{"Start":"06:38.510 ","End":"06:45.230","Text":"less than 0 or equal to 0 and so combining these cases,"},{"Start":"06:45.230 ","End":"06:50.405","Text":"we conclude that the discriminant has to be less than or equal to 0."},{"Start":"06:50.405 ","End":"06:55.040","Text":"Now let\u0027s see, we know what B, A, and C are in terms of u and v."},{"Start":"06:55.040 ","End":"07:00.210","Text":"B squared is 2 squared,"},{"Start":"07:00.210 ","End":"07:04.965","Text":"which is 4 times the inner product squared minus is minus 4 is 4."},{"Start":"07:04.965 ","End":"07:09.320","Text":"Then A is the norm of u squared and C is"},{"Start":"07:09.320 ","End":"07:14.610","Text":"the norm of v squared so this is the expression we get."},{"Start":"07:15.110 ","End":"07:18.560","Text":"Of course we can divide both sides by 4,"},{"Start":"07:18.560 ","End":"07:25.360","Text":"so that gets rid of that and we can bring this to the other side of the inequality."},{"Start":"07:25.360 ","End":"07:27.335","Text":"This is what we get."},{"Start":"07:27.335 ","End":"07:31.730","Text":"Now I just want to remind you of something that in algebra,"},{"Start":"07:31.730 ","End":"07:35.225","Text":"the square root of x squared,"},{"Start":"07:35.225 ","End":"07:37.520","Text":"not the same 
x, is any x,"},{"Start":"07:37.520 ","End":"07:39.865","Text":"you might say it\u0027s equal to x."},{"Start":"07:39.865 ","End":"07:43.550","Text":"Not so, remember it\u0027s equal to the absolute value of x"},{"Start":"07:43.550 ","End":"07:47.210","Text":"because the square root is always non-negative."},{"Start":"07:47.210 ","End":"07:48.815","Text":"With this in mind,"},{"Start":"07:48.815 ","End":"07:51.290","Text":"I take the square root of both sides here"},{"Start":"07:51.290 ","End":"07:56.630","Text":"and we get on the left the absolute value of the inner product of u and v."},{"Start":"07:56.630 ","End":"07:59.015","Text":"That\u0027s this absolute value."},{"Start":"07:59.015 ","End":"08:02.820","Text":"On the right, we put the norm of u because"},{"Start":"08:02.820 ","End":"08:06.710","Text":"the norm of u is non-negative and the norm of v is non-negative."},{"Start":"08:06.710 ","End":"08:08.450","Text":"The norm is always non-negative."},{"Start":"08:08.450 ","End":"08:12.630","Text":"So there\u0027s no need for any extra absolute values here."},{"Start":"08:13.310 ","End":"08:17.990","Text":"What we have here now is the Cauchy-Schwarz inequality."},{"Start":"08:17.990 ","End":"08:19.835","Text":"If you go back to the beginning, you\u0027ll see."},{"Start":"08:19.835 ","End":"08:23.910","Text":"So we are done with the proof and that\u0027s the end of the clip."}],"ID":10123},{"Watched":false,"Name":"Lesson 2 - The Triangle Inequality","Duration":"4m 32s","ChapterTopicVideoID":10042,"CourseChapterTopicPlaylistID":7310,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.050 ","End":"00:04.395","Text":"In this clip, we\u0027ll learn about another inequality."},{"Start":"00:04.395 ","End":"00:07.845","Text":"This follows from the Cauchy-Schwarz inequality,"},{"Start":"00:07.845 
","End":"00:10.410","Text":"and it goes as follows."},{"Start":"00:10.410 ","End":"00:13.130","Text":"That I\u0027ll paraphrase what it says here,"},{"Start":"00:13.130 ","End":"00:14.490","Text":"If we take any 2 vectors,"},{"Start":"00:14.490 ","End":"00:17.354","Text":"u and v in an inner product space,"},{"Start":"00:17.354 ","End":"00:20.640","Text":"then we have the following, that the norm"},{"Start":"00:20.640 ","End":"00:25.140","Text":"or the length of u plus v is less than or equal to"},{"Start":"00:25.140 ","End":"00:29.652","Text":"the sum of the lengths of u and of v."},{"Start":"00:29.652 ","End":"00:35.320","Text":"I\u0027d like to tell you why it\u0027s called the triangle inequality."},{"Start":"00:35.320 ","End":"00:40.670","Text":"The name comes from Euclidean geometry or plane geometry,"},{"Start":"00:40.670 ","End":"00:43.820","Text":"and there, there is also a triangle inequality,"},{"Start":"00:43.820 ","End":"00:45.605","Text":"and what it says,"},{"Start":"00:45.605 ","End":"00:47.090","Text":"and I\u0027m paraphrasing is,"},{"Start":"00:47.090 ","End":"00:48.485","Text":"\u0027\u0027If you have a triangle,"},{"Start":"00:48.485 ","End":"00:52.535","Text":"if I take the length of 1 of the sides,"},{"Start":"00:52.535 ","End":"00:54.380","Text":"it\u0027s going to be less,"},{"Start":"00:54.380 ","End":"00:59.105","Text":"possibly equal to the sum of the lengths of the other 2 sides.\u0027\u0027"},{"Start":"00:59.105 ","End":"01:00.920","Text":"Normally it\u0027s less than,"},{"Start":"01:00.920 ","End":"01:02.810","Text":"but there is a degenerate case"},{"Start":"01:02.810 ","End":"01:05.975","Text":"where this point is actually on this side,"},{"Start":"01:05.975 ","End":"01:07.410","Text":"and then you get equality."},{"Start":"01:07.410 ","End":"01:10.395","Text":"Let\u0027s not get bogged down with these technicalities."},{"Start":"01:10.395 ","End":"01:13.190","Text":"This is true in geometry."},{"Start":"01:13.190 
","End":"01:18.170","Text":"How does it relate to our inner product space in vectors?"},{"Start":"01:18.170 ","End":"01:20.100","Text":"Well, if you think of this,"},{"Start":"01:20.100 ","End":"01:24.005","Text":"and I\u0027ve colored it not just for decoration,"},{"Start":"01:24.005 ","End":"01:29.435","Text":"that if this is my vector u from here to here,"},{"Start":"01:29.435 ","End":"01:32.345","Text":"and this is my vector v,"},{"Start":"01:32.345 ","End":"01:40.340","Text":"then the sum of vectors is gotten by putting the tail of 1 to the head of the other."},{"Start":"01:40.340 ","End":"01:44.360","Text":"This vector here would be u plus v."},{"Start":"01:44.360 ","End":"01:46.160","Text":"The colors correspond."},{"Start":"01:46.160 ","End":"01:47.450","Text":"The red 1 is this,"},{"Start":"01:47.450 ","End":"01:50.460","Text":"the green 1 is this, blue 1 is this,"},{"Start":"01:50.460 ","End":"01:55.210","Text":"and that\u0027s analogous to the geometry."},{"Start":"01:55.730 ","End":"02:04.260","Text":"Of course, the lengths come in because the norm of a vector is its length."},{"Start":"02:05.210 ","End":"02:08.990","Text":"Yeah, more accurately when we put the lengths or norms,"},{"Start":"02:08.990 ","End":"02:14.015","Text":"then this is less than or equal to this plus this."},{"Start":"02:14.015 ","End":"02:17.420","Text":"In geometry, it\u0027s easy to prove basically"},{"Start":"02:17.420 ","End":"02:20.510","Text":"because the straight line is the shortest distance between 2 points,"},{"Start":"02:20.510 ","End":"02:22.190","Text":"and if I make a detour,"},{"Start":"02:22.190 ","End":"02:24.905","Text":"then it\u0027s going to only get longer."},{"Start":"02:24.905 ","End":"02:29.855","Text":"But I want to show you the proof in general in an inner product space."},{"Start":"02:29.855 ","End":"02:32.750","Text":"We\u0027re going to start with this equality."},{"Start":"02:32.750 ","End":"02:37.100","Text":"Any vector, if I take its norm 
squared"},{"Start":"02:37.100 ","End":"02:39.800","Text":"is equal to the inner product of the vector with itself."},{"Start":"02:39.800 ","End":"02:42.155","Text":"We\u0027ve seen this several times."},{"Start":"02:42.155 ","End":"02:45.829","Text":"Then we expand using the linearity property."},{"Start":"02:45.829 ","End":"02:47.555","Text":"We\u0027ve done this sort of thing before."},{"Start":"02:47.555 ","End":"02:52.325","Text":"Each 1 of these, with each 1 of these u with u and then u with v,"},{"Start":"02:52.325 ","End":"02:56.199","Text":"v with u, and v with v, like so."},{"Start":"02:56.199 ","End":"03:01.960","Text":"Now, the inner product of u with itself is the norm of u squared."},{"Start":"03:01.960 ","End":"03:07.370","Text":"Similarly, inner product of v with v is norm of v squared,"},{"Start":"03:07.370 ","End":"03:10.250","Text":"u, v, and v, u are the same by the symmetry,"},{"Start":"03:10.250 ","End":"03:13.580","Text":"so it\u0027s just twice this."},{"Start":"03:13.580 ","End":"03:16.355","Text":"Now here\u0027s the important part."},{"Start":"03:16.355 ","End":"03:22.310","Text":"We\u0027re going to use the Cauchy-Schwarz inequality to get from this to this."},{"Start":"03:22.310 ","End":"03:23.850","Text":"Let us call it C-S,"},{"Start":"03:23.850 ","End":"03:25.805","Text":"the Cauchy-Schwarz, if you look it up,"},{"Start":"03:25.805 ","End":"03:30.260","Text":"just says that the inner product of u with v is less than or equal to"},{"Start":"03:30.260 ","End":"03:32.990","Text":"the product of the norm of u times the norm of v."},{"Start":"03:32.990 ","End":"03:35.495","Text":"That\u0027s the 2 in front,"},{"Start":"03:35.495 ","End":"03:36.920","Text":"doesn\u0027t make any difference."},{"Start":"03:36.920 ","End":"03:39.125","Text":"Up till now, we\u0027ve got equal, equal, equal,"},{"Start":"03:39.125 ","End":"03:42.660","Text":"now we have a less than or equal to."},{"Start":"03:42.740 ","End":"03:45.230","Text":"Then we can write 
it like this."},{"Start":"03:45.230 ","End":"03:46.775","Text":"This is just basic algebra."},{"Start":"03:46.775 ","End":"03:52.985","Text":"Perhaps if you saw it like a squared plus b squared plus 2ab,"},{"Start":"03:52.985 ","End":"03:59.910","Text":"then you would immediately recognize that this is a plus b squared."},{"Start":"04:00.460 ","End":"04:05.300","Text":"That is actually what we wanted to prove."},{"Start":"04:05.300 ","End":"04:08.060","Text":"Well, not quite, we\u0027re almost there."},{"Start":"04:08.060 ","End":"04:12.710","Text":"What we have is that this equal, equal, equal, equal."},{"Start":"04:12.710 ","End":"04:16.835","Text":"We have 1 less than or equal to this."},{"Start":"04:16.835 ","End":"04:21.370","Text":"If I cancel these 2 with these 2,"},{"Start":"04:21.370 ","End":"04:27.950","Text":"then we get that norm of u plus v is less than or equal to norm of u plus norm of v,"},{"Start":"04:27.950 ","End":"04:29.720","Text":"which is what is written here,"},{"Start":"04:29.720 ","End":"04:32.880","Text":"and so we are done."}],"ID":10124},{"Watched":false,"Name":"Lesson 3 - Angle Between Vectors","Duration":"3m 58s","ChapterTopicVideoID":10041,"CourseChapterTopicPlaylistID":7310,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:06.540","Text":"In this clip, we\u0027ll define the concept of an angle between vectors."},{"Start":"00:06.540 ","End":"00:11.445","Text":"We\u0027ll do this for any 2 non-zero vectors in an inner product space."},{"Start":"00:11.445 ","End":"00:15.584","Text":"If 1 of the vectors is 0 the angle is not defined."},{"Start":"00:15.584 ","End":"00:17.520","Text":"I\u0027ve given the definition,"},{"Start":"00:17.520 ","End":"00:20.265","Text":"I\u0027ll explain it and see why it makes sense."},{"Start":"00:20.265 
","End":"00:26.655","Text":"The angle between u and v is defined to be the angle and just give it a name,"},{"Start":"00:26.655 ","End":"00:32.820","Text":"Theta between 0 and Pi radians that is"},{"Start":"00:32.820 ","End":"00:40.595","Text":"such that the cosine of this angle Theta is equal to this expression."},{"Start":"00:40.595 ","End":"00:46.070","Text":"The inner product of u with v over norm of u times norm of v."},{"Start":"00:46.070 ","End":"00:49.175","Text":"Now, what could go wrong?"},{"Start":"00:49.175 ","End":"00:52.600","Text":"What we have here is an equation in Theta."},{"Start":"00:52.600 ","End":"00:56.570","Text":"We want it to have exactly 1 solution."},{"Start":"00:56.570 ","End":"00:58.325","Text":"If it doesn\u0027t have a solution,"},{"Start":"00:58.325 ","End":"00:59.900","Text":"then the angle is not defined,"},{"Start":"00:59.900 ","End":"01:01.760","Text":"and if it has more than 1 solution,"},{"Start":"01:01.760 ","End":"01:04.520","Text":"then it\u0027s ambiguous, so still not defined."},{"Start":"01:04.520 ","End":"01:06.995","Text":"That\u0027s what we have to show."},{"Start":"01:06.995 ","End":"01:11.630","Text":"To rephrase: we have to show that Theta exists and is unique."},{"Start":"01:11.630 ","End":"01:17.205","Text":"Let\u0027s take this whole expression just for short, we\u0027ll call it x."},{"Start":"01:17.205 ","End":"01:20.620","Text":"Take the absolute value of both sides."},{"Start":"01:20.620 ","End":"01:25.655","Text":"The denominator here is positive so we just take the absolute value of the numerator."},{"Start":"01:25.655 ","End":"01:30.590","Text":"Now we\u0027re going to call in the Cauchy-Schwarz inequality."},{"Start":"01:32.610 ","End":"01:36.640","Text":"The inner product of u with v,"},{"Start":"01:36.640 ","End":"01:41.080","Text":"because the numerator is less than or equal to norm of u, norm of v,"},{"Start":"01:41.080 ","End":"01:43.180","Text":"which is the denominator."},{"Start":"01:43.180 
","End":"01:46.194","Text":"If this inequality is true,"},{"Start":"01:46.194 ","End":"01:49.795","Text":"and note that the right-hand side is not zero"},{"Start":"01:49.795 ","End":"01:54.729","Text":"because our vectors are non-0 so their norms are non-zero."},{"Start":"01:54.729 ","End":"01:58.295","Text":"I could divide by the right-hand side."},{"Start":"01:58.295 ","End":"02:03.125","Text":"This quotient is less than or equal to 1."},{"Start":"02:03.125 ","End":"02:07.220","Text":"Because if this is less than or equal to this and this is positive, we divide by it,"},{"Start":"02:07.220 ","End":"02:08.900","Text":"we get less than or equal to 1,"},{"Start":"02:08.900 ","End":"02:12.930","Text":"and that is also the absolute value of x."},{"Start":"02:12.930 ","End":"02:14.390","Text":"We\u0027re getting there."},{"Start":"02:14.390 ","End":"02:19.910","Text":"The absolute value of a number is less than or equal to 1,"},{"Start":"02:19.910 ","End":"02:24.750","Text":"when the number is between minus 1 and 1."},{"Start":"02:25.010 ","End":"02:29.460","Text":"I\u0027m going to call in some trigonometry."},{"Start":"02:29.460 ","End":"02:39.805","Text":"From trigonometry we know that if we have a number x between minus 1 and 1,"},{"Start":"02:39.805 ","End":"02:47.450","Text":"then there is a unique Theta from 0 to Pi such that cosine Theta is x."},{"Start":"02:47.450 ","End":"02:52.940","Text":"Basically what happens is that as Theta travels from 0 to Pi,"},{"Start":"02:52.940 ","End":"02:59.795","Text":"the cosine starts off at 1 and ends up at minus 1 and is constantly decreasing."},{"Start":"02:59.795 ","End":"03:04.715","Text":"If I go from 1 to minus 1 and traverse it just once,"},{"Start":"03:04.715 ","End":"03:07.070","Text":"then it\u0027s going to be a solution."},{"Start":"03:07.070 ","End":"03:13.630","Text":"There is going to be some Theta where it\u0027s exactly equal to any number between minus 1 and 1."},{"Start":"03:13.630 ","End":"03:19.260","Text":"Our x, of 
course, is just to remind you, is this expression."},{"Start":"03:20.030 ","End":"03:22.620","Text":"If you\u0027re not clear about this,"},{"Start":"03:22.620 ","End":"03:24.320","Text":"it\u0027s just going back to trigonometry"},{"Start":"03:24.320 ","End":"03:28.980","Text":"or drawing the graph of cosine from 0 to Pi and you\u0027ll see."},{"Start":"03:29.210 ","End":"03:33.170","Text":"This is what we wanted to show that there is a unique Theta"},{"Start":"03:33.170 ","End":"03:35.060","Text":"which satisfies this equation"},{"Start":"03:35.060 ","End":"03:39.230","Text":"and that Theta is the angle between u and v."},{"Start":"03:39.230 ","End":"03:44.810","Text":"Just a note, sometimes you want to work in degrees rather than radians,"},{"Start":"03:44.810 ","End":"03:46.670","Text":"and in that case,"},{"Start":"03:46.670 ","End":"03:50.135","Text":"instead of looking for the angle between 0 and Pi,"},{"Start":"03:50.135 ","End":"03:54.780","Text":"you look for it between 0 and 180 degrees."},{"Start":"03:55.390 ","End":"03:58.980","Text":"That ends this clip."}],"ID":10125},{"Watched":false,"Name":"Exercise 1","Duration":"4m 31s","ChapterTopicVideoID":9701,"CourseChapterTopicPlaylistID":7310,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:02.430","Text":"In this exercise,"},{"Start":"00:02.430 ","End":"00:06.090","Text":"well, I should\u0027ve said that we are in an inner product space,"},{"Start":"00:06.090 ","End":"00:07.680","Text":"but that\u0027s the chapter we\u0027re in."},{"Start":"00:07.680 ","End":"00:09.510","Text":"In an inner product space,"},{"Start":"00:09.510 ","End":"00:14.385","Text":"if u and v are linearly dependent vectors,"},{"Start":"00:14.385 ","End":"00:18.270","Text":"then we have this equality."},{"Start":"00:18.270 ","End":"00:20.265","Text":"Now, if 
you look at it,"},{"Start":"00:20.265 ","End":"00:25.470","Text":"it looks very similar to the Cauchy-Schwarz inequality"},{"Start":"00:25.470 ","End":"00:27.960","Text":"where it was less than or equal to."},{"Start":"00:27.960 ","End":"00:32.040","Text":"We have a condition now that we can replace the less than or equal to"},{"Start":"00:32.040 ","End":"00:38.940","Text":"by equality if the 2 vectors are linearly dependent."},{"Start":"00:38.940 ","End":"00:44.165","Text":"Note that if v happens to be the 0 vector,"},{"Start":"00:44.165 ","End":"00:46.650","Text":"then both sides are 0,"},{"Start":"00:46.650 ","End":"00:49.035","Text":"so the equality is true."},{"Start":"00:49.035 ","End":"00:52.540","Text":"I just have to take care of the cases where v is not 0."},{"Start":"00:52.540 ","End":"00:55.240","Text":"You\u0027ll see why I want this in a moment."},{"Start":"00:55.240 ","End":"00:58.550","Text":"Now, given that u and v are linearly dependent,"},{"Start":"00:58.550 ","End":"01:01.520","Text":"I\u0027d like to say that u equals kv."},{"Start":"01:01.520 ","End":"01:03.815","Text":"This is not always true."},{"Start":"01:03.815 ","End":"01:06.800","Text":"It\u0027s true provided v is not equal to 0,"},{"Start":"01:06.800 ","End":"01:08.660","Text":"and we\u0027ve taken care of that separately."},{"Start":"01:08.660 ","End":"01:12.050","Text":"Because v is not 0, the linear dependence means that"},{"Start":"01:12.050 ","End":"01:15.800","Text":"u is some multiple some scalar times v."},{"Start":"01:15.800 ","End":"01:22.230","Text":"Going to start from the left and work my way gradually towards the right-hand side."},{"Start":"01:22.230 ","End":"01:25.525","Text":"The first step, because u is equal to kv,"},{"Start":"01:25.525 ","End":"01:30.935","Text":"I can replace it here and write kv instead of u."},{"Start":"01:30.935 ","End":"01:34.590","Text":"Now I\u0027ve done 2 steps in 1."},{"Start":"01:34.590 ","End":"01:37.660","Text":"Because of linearity or 
homogeneity,"},{"Start":"01:37.660 ","End":"01:42.460","Text":"I can take the k out of the inner product and put it here."},{"Start":"01:42.460 ","End":"01:47.005","Text":"But by the properties of the absolute value of a product,"},{"Start":"01:47.005 ","End":"01:49.600","Text":"I can bring it all the way out to the front"},{"Start":"01:49.600 ","End":"01:57.090","Text":"so I get absolute value of k times absolute value of inner product v, v."},{"Start":"01:57.090 ","End":"02:04.690","Text":"Now, I claim I can just throw out the absolute value on this inner product v, v."},{"Start":"02:04.690 ","End":"02:08.090","Text":"I expose the next line so you can see"},{"Start":"02:08.090 ","End":"02:12.350","Text":"why you might have remembered that the inner product of a vector"},{"Start":"02:12.350 ","End":"02:14.450","Text":"with itself is always non-negative."},{"Start":"02:14.450 ","End":"02:19.190","Text":"In fact, it\u0027s equal to the norm of this vector squared."},{"Start":"02:19.190 ","End":"02:21.080","Text":"X squared bigger or equal to 0,"},{"Start":"02:21.080 ","End":"02:24.035","Text":"and that\u0027s why I could drop the absolute value."},{"Start":"02:24.035 ","End":"02:26.300","Text":"But I also wanted to get to this point."},{"Start":"02:26.300 ","End":"02:28.160","Text":"That\u0027s good."},{"Start":"02:28.160 ","End":"02:34.545","Text":"Now, I\u0027m going to break up the square into this times itself."},{"Start":"02:34.545 ","End":"02:37.760","Text":"A squared is a times a in general,"},{"Start":"02:37.760 ","End":"02:39.700","Text":"so it\u0027s a trivial."},{"Start":"02:39.700 ","End":"02:43.730","Text":"There\u0027s a reason why I\u0027ve colored some of it."},{"Start":"02:44.390 ","End":"02:50.410","Text":"Now, the next step is not immediately clear,"},{"Start":"02:50.410 ","End":"02:58.510","Text":"but I claim that absolute value of k times the norm of v is the norm of kv."},{"Start":"02:58.510 ","End":"03:01.570","Text":"I\u0027ll show this at 
the end."},{"Start":"03:01.570 ","End":"03:03.605","Text":"That\u0027s what this asterisk is,"},{"Start":"03:03.605 ","End":"03:05.590","Text":"not to interrupt the flow."},{"Start":"03:05.590 ","End":"03:11.110","Text":"I owe you the explanation of why this is equal to this."},{"Start":"03:11.110 ","End":"03:16.465","Text":"At this point, we remember that u equals kv, or kv equals u,"},{"Start":"03:16.465 ","End":"03:19.895","Text":"and so we get to this."},{"Start":"03:19.895 ","End":"03:24.040","Text":"If you check this is the right-hand side of what we had to prove,"},{"Start":"03:24.040 ","End":"03:25.420","Text":"and so we\u0027re there."},{"Start":"03:25.420 ","End":"03:32.540","Text":"We\u0027ve done it, except that I still owe you an explanation of why this is true."},{"Start":"03:32.540 ","End":"03:35.130","Text":"I\u0027ve written it all at once."},{"Start":"03:35.130 ","End":"03:40.340","Text":"It actually suits me to prove that this is equal to this."},{"Start":"03:40.340 ","End":"03:42.545","Text":"I\u0027ll start here and end here."},{"Start":"03:42.545 ","End":"03:43.820","Text":"Could do it both ways,"},{"Start":"03:43.820 ","End":"03:45.290","Text":"but it\u0027s easier this way."},{"Start":"03:45.290 ","End":"03:47.540","Text":"The norm of kv."},{"Start":"03:47.540 ","End":"03:50.555","Text":"Remember the definition of a norm,"},{"Start":"03:50.555 ","End":"03:54.335","Text":"the square root of the inner product of this with itself."},{"Start":"03:54.335 ","End":"03:59.300","Text":"Now, we can take k out by the homogeneity,"},{"Start":"03:59.300 ","End":"04:01.100","Text":"or we can take it out of here,"},{"Start":"04:01.100 ","End":"04:05.150","Text":"and we can take it out of here and we get k times k is k squared."},{"Start":"04:05.150 ","End":"04:07.760","Text":"It\u0027s 2 steps in 1, I think you can follow."},{"Start":"04:07.760 ","End":"04:12.110","Text":"Now, the square root of a product is the product of the square 
root."},{"Start":"04:12.110 ","End":"04:15.230","Text":"Square root of k squared times the square root of v, v."},{"Start":"04:15.230 ","End":"04:19.730","Text":"The square root of k squared is the absolute value of k."},{"Start":"04:19.730 ","End":"04:23.930","Text":"This is exactly the definition of the norm of v."},{"Start":"04:23.930 ","End":"04:25.640","Text":"There we are."},{"Start":"04:25.640 ","End":"04:27.545","Text":"We started from here and ended here."},{"Start":"04:27.545 ","End":"04:29.300","Text":"That\u0027s my debt to you."},{"Start":"04:29.300 ","End":"04:32.280","Text":"Now we are done."}],"ID":10126},{"Watched":false,"Name":"Exercise 2","Duration":"2m ","ChapterTopicVideoID":9702,"CourseChapterTopicPlaylistID":7310,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.845","Text":"In this exercise, we\u0027re given 2 sets of numbers,"},{"Start":"00:04.845 ","End":"00:07.820","Text":"x_1 through x_n and y_1 through y_n."},{"Start":"00:07.820 ","End":"00:12.645","Text":"They\u0027re real numbers and we have to prove the following inequality."},{"Start":"00:12.645 ","End":"00:17.205","Text":"There\u0027s no mention here of inner product spaces or vectors or anything."},{"Start":"00:17.205 ","End":"00:19.785","Text":"It looks like we have to introduce them."},{"Start":"00:19.785 ","End":"00:25.920","Text":"I think it\u0027s fairly intuitive to look at the space R^n and look at 2 vectors,"},{"Start":"00:25.920 ","End":"00:27.660","Text":"1 of them with the x\u0027s and 1 of them with the"},{"Start":"00:27.660 ","End":"00:31.935","Text":"y\u0027s and then this looks like the standard inner product."},{"Start":"00:31.935 ","End":"00:35.250","Text":"Let\u0027s do that. 
These numbers,"},{"Start":"00:35.250 ","End":"00:37.795","Text":"we\u0027ll put them into a vector and call it x."},{"Start":"00:37.795 ","End":"00:40.835","Text":"These will be the components of a vector y."},{"Start":"00:40.835 ","End":"00:44.060","Text":"Now these are vectors in R^n and we\u0027ll take"},{"Start":"00:44.060 ","End":"00:47.450","Text":"the standard inner product and so we can already"},{"Start":"00:47.450 ","End":"00:55.025","Text":"see that what\u0027s written here is the inner product of x and y only there is a squared."},{"Start":"00:55.025 ","End":"01:00.260","Text":"The inequality hints at Cauchy-Schwarz and that\u0027s what we"},{"Start":"01:00.260 ","End":"01:05.135","Text":"are going to use and what it says is what is here."},{"Start":"01:05.135 ","End":"01:10.610","Text":"But it\u0027s going to be better if we square this because we can see"},{"Start":"01:10.610 ","End":"01:19.990","Text":"here that this is this squared and so this is what we get after the squaring."},{"Start":"01:20.210 ","End":"01:24.800","Text":"Basically everything falls into place now with the standard inner product."},{"Start":"01:24.800 ","End":"01:28.490","Text":"As I said, x inner product with y gives"},{"Start":"01:28.490 ","End":"01:33.425","Text":"us this expression and the squared is just the squared."},{"Start":"01:33.425 ","End":"01:40.740","Text":"The norm of x squared is just x_1 with x_1,"},{"Start":"01:40.740 ","End":"01:42.015","Text":"x_2 with x_2,"},{"Start":"01:42.015 ","End":"01:47.295","Text":"it\u0027s just the sum of the squares and the norm of"},{"Start":"01:47.295 ","End":"01:53.460","Text":"y squared is y_1 squared plus y_2 squared plus y_n squared."},{"Start":"01:53.460 ","End":"01:59.260","Text":"There\u0027s really nothing more to it. 
That\u0027s it."}],"ID":10127},{"Watched":false,"Name":"Exercise 3","Duration":"1m 51s","ChapterTopicVideoID":9703,"CourseChapterTopicPlaylistID":7310,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.020 ","End":"00:04.230","Text":"In this exercise, we\u0027re given 2 functions,"},{"Start":"00:04.230 ","End":"00:09.585","Text":"f and g, continuous on the closed interval from a to b."},{"Start":"00:09.585 ","End":"00:13.890","Text":"We have to prove the following inequality."},{"Start":"00:14.390 ","End":"00:16.545","Text":"When you look at it,"},{"Start":"00:16.545 ","End":"00:22.590","Text":"you should immediately think of the inner product space of continuous functions on a, b."},{"Start":"00:22.590 ","End":"00:26.070","Text":"This looks like the definition of the inner product that we had."},{"Start":"00:26.070 ","End":"00:27.990","Text":"Let\u0027s do that."},{"Start":"00:27.990 ","End":"00:32.730","Text":"Our inner product space is continuous function on a, b."},{"Start":"00:32.730 ","End":"00:36.645","Text":"This is the familiar, the usual,"},{"Start":"00:36.645 ","End":"00:40.070","Text":"I call it the integral and the product where you just multiply"},{"Start":"00:40.070 ","End":"00:45.270","Text":"the 2 functions and take the integral from a to b."},{"Start":"00:47.060 ","End":"00:50.420","Text":"We\u0027re going to use the Cauchy-Schwarz inequality."},{"Start":"00:50.420 ","End":"00:52.145","Text":"You get a feel for these things."},{"Start":"00:52.145 ","End":"00:55.840","Text":"If not, we\u0027re just in the chapter on Cauchy-Schwarz."},{"Start":"00:55.840 ","End":"00:59.535","Text":"In general, this is what it is."},{"Start":"00:59.535 ","End":"01:06.490","Text":"If we apply this where x and y or f and g and our particular inner product space,"},{"Start":"01:06.490 
","End":"01:12.394","Text":"what we get is the absolute value of the inner product would be this."},{"Start":"01:12.394 ","End":"01:20.260","Text":"The norm of x is the square root of the inner product of f with itself or f squared."},{"Start":"01:20.260 ","End":"01:24.310","Text":"Similarly, instead of y we have g,"},{"Start":"01:24.310 ","End":"01:28.820","Text":"the product of g with itself integral then square root."},{"Start":"01:28.820 ","End":"01:36.185","Text":"Now, all we have to do to get from here to here is to square both sides."},{"Start":"01:36.185 ","End":"01:37.960","Text":"I haven\u0027t even written it, just square it."},{"Start":"01:37.960 ","End":"01:39.635","Text":"Instead of the absolute value,"},{"Start":"01:39.635 ","End":"01:41.405","Text":"we get the squared."},{"Start":"01:41.405 ","End":"01:43.910","Text":"Here, the square root drops off."},{"Start":"01:43.910 ","End":"01:47.910","Text":"We get this, and here the square root drops off, and we get this."},{"Start":"01:47.910 ","End":"01:49.865","Text":"We square this, we get this,"},{"Start":"01:49.865 ","End":"01:52.170","Text":"and we are done."}],"ID":10128},{"Watched":false,"Name":"Exercise 4","Duration":"1m 55s","ChapterTopicVideoID":9704,"CourseChapterTopicPlaylistID":7310,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:05.470","Text":"Here\u0027s an exercise where we have to compute the angle between vectors."},{"Start":"00:05.510 ","End":"00:09.495","Text":"We are in the inner product space R^3,"},{"Start":"00:09.495 ","End":"00:12.360","Text":"with the standard inner product."},{"Start":"00:12.360 ","End":"00:17.430","Text":"Here is the formula for the cosine of the angle between them."},{"Start":"00:17.430 ","End":"00:20.640","Text":"Once we have the cosine we\u0027ll find the 
angle."},{"Start":"00:20.640 ","End":"00:24.000","Text":"Now the standard inner product in RN"},{"Start":"00:24.000 ","End":"00:28.305","Text":"or in our case, R^3, is just the dot-product."},{"Start":"00:28.305 ","End":"00:31.035","Text":"This is dot from the dot-product."},{"Start":"00:31.035 ","End":"00:34.515","Text":"This is just regular multiplication of numbers."},{"Start":"00:34.515 ","End":"00:38.580","Text":"Remember the dot-product means you multiply component wise"},{"Start":"00:38.580 ","End":"00:45.450","Text":"and add the 1 with the minus 2, 2 with 1, 2 with 2"},{"Start":"00:45.450 ","End":"00:49.290","Text":"and we add them all together and the result is 4."},{"Start":"00:49.290 ","End":"00:54.510","Text":"The normal view, is the inner product or dot-product of u with itself,"},{"Start":"00:54.510 ","End":"00:56.510","Text":"but then we have to take a square root."},{"Start":"00:56.510 ","End":"00:58.580","Text":"It\u0027s 1 times 1, 2 times 2,"},{"Start":"00:58.580 ","End":"01:03.215","Text":"2 times 2, 1 and 4 and 4 is 9, square root of 9 is 3."},{"Start":"01:03.215 ","End":"01:09.350","Text":"Practically the same thing or the same idea with v. 
We also get that the norm of v is 3."},{"Start":"01:09.350 ","End":"01:14.420","Text":"Now we can substitute and we get that the cosine of Theta"},{"Start":"01:14.420 ","End":"01:18.710","Text":"we\u0027ve computed the numerator is 4 and this is 3 times 3."},{"Start":"01:18.710 ","End":"01:21.965","Text":"If you want you can write it as 4/9."},{"Start":"01:21.965 ","End":"01:26.010","Text":"Now we do this on the calculator."},{"Start":"01:26.410 ","End":"01:30.800","Text":"Do it with inverse cosine or shift cosine,"},{"Start":"01:30.800 ","End":"01:34.520","Text":"we just have to make sure that our answer is between 0 and Pi"},{"Start":"01:34.520 ","End":"01:38.420","Text":"if we\u0027re in radians or 0 and 180 if we\u0027re in degrees."},{"Start":"01:38.420 ","End":"01:44.360","Text":"I set my calculator for degrees and took it to 2 decimal places,"},{"Start":"01:44.360 ","End":"01:52.130","Text":"and I got that 63.61 degrees is the inverse cosine or arccosine of 4/9."},{"Start":"01:52.130 ","End":"01:55.920","Text":"That\u0027s it. 
We\u0027re done."}],"ID":10129},{"Watched":false,"Name":"Exercise 5","Duration":"2m 27s","ChapterTopicVideoID":9705,"CourseChapterTopicPlaylistID":7310,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:03.630","Text":"In this exercise, we have to compute the angle"},{"Start":"00:03.630 ","End":"00:07.770","Text":"between these 2 vectors, 3,4, and 1,2."},{"Start":"00:07.770 ","End":"00:10.530","Text":"Now we\u0027re in the inner product space R^2,"},{"Start":"00:10.530 ","End":"00:12.975","Text":"but not with the standard inner product."},{"Start":"00:12.975 ","End":"00:15.450","Text":"The inner product is defined as follows."},{"Start":"00:15.450 ","End":"00:18.030","Text":"The inner product of x_1, x_2"},{"Start":"00:18.030 ","End":"00:21.624","Text":"with y_1, y_2 is given by this formula."},{"Start":"00:21.624 ","End":"00:25.865","Text":"You just have to take my word for it that this is an inner product."},{"Start":"00:25.865 ","End":"00:27.170","Text":"There is a test, remember,"},{"Start":"00:27.170 ","End":"00:29.795","Text":"with positive definite matrices,"},{"Start":"00:29.795 ","End":"00:32.330","Text":"we\u0027ll just assume that someone checked it."},{"Start":"00:32.330 ","End":"00:36.145","Text":"I\u0027ve checked, it is an inner product."},{"Start":"00:36.145 ","End":"00:39.575","Text":"Here\u0027s the standard formula for the angle,"},{"Start":"00:39.575 ","End":"00:41.510","Text":"or rather the cosine of the angle."},{"Start":"00:41.510 ","End":"00:44.615","Text":"First we find the cosine of the angle Theta,"},{"Start":"00:44.615 ","End":"00:46.525","Text":"and then we\u0027ll find Theta."},{"Start":"00:46.525 ","End":"00:49.605","Text":"Substitute u and v from here."},{"Start":"00:49.605 ","End":"00:51.260","Text":"This is the expression we have."},{"Start":"00:51.260 
","End":"00:52.520","Text":"We have 3 computations,"},{"Start":"00:52.520 ","End":"00:55.870","Text":"this inner product, this norm and this norm."},{"Start":"00:55.870 ","End":"00:58.335","Text":"First, the inner product,"},{"Start":"00:58.335 ","End":"01:03.990","Text":"I label them x_1, x_2, y_1, y_2, and just use this formula."},{"Start":"01:03.990 ","End":"01:06.350","Text":"I won\u0027t go into it anymore details."},{"Start":"01:06.350 ","End":"01:08.020","Text":"You can pause and check."},{"Start":"01:08.020 ","End":"01:11.140","Text":"This is what we get, is 17."},{"Start":"01:11.140 ","End":"01:13.110","Text":"That\u0027s the numerator."},{"Start":"01:13.110 ","End":"01:16.025","Text":"Next we do something very similar,"},{"Start":"01:16.025 ","End":"01:18.995","Text":"but with 3,4 and 3,4,"},{"Start":"01:18.995 ","End":"01:21.770","Text":"and at the end we also need to take the square root."},{"Start":"01:21.770 ","End":"01:22.760","Text":"That\u0027s what the norm is."},{"Start":"01:22.760 ","End":"01:26.615","Text":"It\u0027s the square root of the inner product of 3,4 with itself."},{"Start":"01:26.615 ","End":"01:30.380","Text":"This computation also based on the same formula."},{"Start":"01:30.380 ","End":"01:34.430","Text":"I\u0027ll leave you to pause and follow the details if you want,"},{"Start":"01:34.430 ","End":"01:37.130","Text":"comes out 33, so square root of 33."},{"Start":"01:37.130 ","End":"01:39.980","Text":"Then we have to do 1,2 with itself"},{"Start":"01:39.980 ","End":"01:43.620","Text":"and using the same formula above for the inner product."},{"Start":"01:43.620 ","End":"01:46.325","Text":"Don\u0027t forget to take the square root at the end."},{"Start":"01:46.325 ","End":"01:49.970","Text":"We get square root of 9 is 3."},{"Start":"01:49.970 ","End":"01:56.715","Text":"Now we have what we need to plug into the formula for cosine Theta."},{"Start":"01:56.715 ","End":"02:02.495","Text":"We get the 17 from here and then the square 
root of 33 from here."},{"Start":"02:02.495 ","End":"02:05.220","Text":"This 3 goes here."},{"Start":"02:06.550 ","End":"02:09.230","Text":"We get this for the cosine"},{"Start":"02:09.230 ","End":"02:15.570","Text":"and then I set my calculator for degrees"},{"Start":"02:15.570 ","End":"02:19.955","Text":"and got the answer 9.44 degrees."},{"Start":"02:19.955 ","End":"02:22.520","Text":"As long as it\u0027s between 0 and 180,"},{"Start":"02:22.520 ","End":"02:27.600","Text":"we know we\u0027re okay, and that\u0027s it."}],"ID":10130},{"Watched":false,"Name":"Exercise 6","Duration":"3m 21s","ChapterTopicVideoID":9699,"CourseChapterTopicPlaylistID":7310,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:02.370","Text":"We have another one of those exercises,"},{"Start":"00:02.370 ","End":"00:08.600","Text":"where we have to compute the angle between 2 functions that to be polynomials."},{"Start":"00:08.600 ","End":"00:15.360","Text":"The inner product space is this continuous functions on the interval from 0-1,"},{"Start":"00:15.360 ","End":"00:17.099","Text":"and the inner product,"},{"Start":"00:17.099 ","End":"00:19.605","Text":"what we usually do,"},{"Start":"00:19.605 ","End":"00:25.590","Text":"the integral over the interval of the product of the functions."},{"Start":"00:25.590 ","End":"00:29.265","Text":"First, we find the cosine with the usual formula,"},{"Start":"00:29.265 ","End":"00:35.090","Text":"the inner product of the 2 divided by the norm of the 1 times the norm of the other."},{"Start":"00:35.090 ","End":"00:38.045","Text":"We have 3 integrals to compute."},{"Start":"00:38.045 ","End":"00:43.670","Text":"First of all the top p, q integral from 0-1,"},{"Start":"00:43.670 ","End":"00:48.560","Text":"p times q, which is 2x minus 1 times x squared."},{"Start":"00:48.560 
","End":"00:50.840","Text":"Open brackets."},{"Start":"00:50.840 ","End":"00:53.560","Text":"We get this."},{"Start":"00:53.560 ","End":"00:56.480","Text":"Presumably, you know how to do integration,"},{"Start":"00:56.480 ","End":"00:57.920","Text":"you raise the power of 1."},{"Start":"00:57.920 ","End":"01:01.295","Text":"We get 4, 2 over 4 is like 1/2."},{"Start":"01:01.295 ","End":"01:05.990","Text":"Here we get a 1/3x cubed plug-in 1, plug-in 0,"},{"Start":"01:05.990 ","End":"01:08.170","Text":"and subtract 0 doesn\u0027t give us anything,"},{"Start":"01:08.170 ","End":"01:14.390","Text":"so we got 1/2 minus 1/3, which is a 1/6 In."},{"Start":"01:14.390 ","End":"01:17.570","Text":"In the case of a norm, it\u0027s also an integral,"},{"Start":"01:17.570 ","End":"01:20.540","Text":"but we just have to take a square root at the end."},{"Start":"01:20.540 ","End":"01:22.160","Text":"P with itself,"},{"Start":"01:22.160 ","End":"01:26.445","Text":"so the integral of 2x minus 1 times 2x minus 1,"},{"Start":"01:26.445 ","End":"01:29.440","Text":"in other words, 2x minus 1 squared."},{"Start":"01:30.740 ","End":"01:33.440","Text":"It\u0027s just like it was x squared,"},{"Start":"01:33.440 ","End":"01:35.180","Text":"it would be x cubed over 3."},{"Start":"01:35.180 ","End":"01:37.790","Text":"But because it\u0027s 2x minus 1 instead of x,"},{"Start":"01:37.790 ","End":"01:41.315","Text":"we divide by the inner derivative, which is the 2."},{"Start":"01:41.315 ","End":"01:50.370","Text":"Plugging in 1 we get twice 1 minus 1 is 1 cubed over 3 times 2, it\u0027s a 1/6."},{"Start":"01:50.370 ","End":"01:52.740","Text":"Minus 1 comes up,"},{"Start":"01:52.740 ","End":"01:54.600","Text":"minus a 1/6 that we subtract it,"},{"Start":"01:54.600 ","End":"01:56.145","Text":"so it\u0027s also plus a 1/6,"},{"Start":"01:56.145 ","End":"01:58.875","Text":"so we get the square root of 1/3."},{"Start":"01:58.875 ","End":"02:01.730","Text":"In the case of q, we also have the square 
root of"},{"Start":"02:01.730 ","End":"02:05.570","Text":"an inner product this time x squared with itself so x squared squared,"},{"Start":"02:05.570 ","End":"02:10.270","Text":"which is x^4 and the integral is x^5 over 5."},{"Start":"02:10.270 ","End":"02:11.980","Text":"Plugin 1, you get a 1/5,"},{"Start":"02:11.980 ","End":"02:13.925","Text":"plug-in 0, you get nothing."},{"Start":"02:13.925 ","End":"02:16.715","Text":"We end up with square root of 1/5."},{"Start":"02:16.715 ","End":"02:19.810","Text":"Now we can plug-in to here."},{"Start":"02:19.810 ","End":"02:29.160","Text":"We get that the cosine is this 1/6 over root 1/3 root 1/5."},{"Start":"02:31.490 ","End":"02:34.550","Text":"This actually simplifies to this, though."},{"Start":"02:34.550 ","End":"02:35.750","Text":"You wouldn\u0027t have to simplify it."},{"Start":"02:35.750 ","End":"02:36.800","Text":"Let me just show you how I did that."},{"Start":"02:36.800 ","End":"02:40.870","Text":"1 over the square root of 1/3 is the square root of 3."},{"Start":"02:40.870 ","End":"02:44.995","Text":"1 over the square root of 1/5 is the square root of 5."},{"Start":"02:44.995 ","End":"02:48.860","Text":"1/6 is like putting a 6 in the denominator so that 6,"},{"Start":"02:48.860 ","End":"02:51.005","Text":"I can write square root of 36."},{"Start":"02:51.005 ","End":"02:56.535","Text":"Now I\u0027ve got the square root of 3 times 5 over 36."},{"Start":"02:56.535 ","End":"02:59.975","Text":"But 3 into 36 goes 12 times."},{"Start":"02:59.975 ","End":"03:02.105","Text":"That gives us this."},{"Start":"03:02.105 ","End":"03:05.300","Text":"Sometimes we\u0027re asked just the cosine of the angle"},{"Start":"03:05.300 ","End":"03:06.365","Text":"and that would be this."},{"Start":"03:06.365 ","End":"03:08.930","Text":"If not, you\u0027d continue on the calculator"},{"Start":"03:08.930 ","End":"03:12.145","Text":"and then do the inverse cosine of this."},{"Start":"03:12.145 ","End":"03:13.790","Text":"I did it in 
degrees."},{"Start":"03:13.790 ","End":"03:16.790","Text":"The question didn\u0027t say radians or degrees,"},{"Start":"03:16.790 ","End":"03:19.490","Text":"so I chose degrees and this is the answer,"},{"Start":"03:19.490 ","End":"03:22.050","Text":"and we are done."}],"ID":10131},{"Watched":false,"Name":"Exercise 7","Duration":"3m 43s","ChapterTopicVideoID":9700,"CourseChapterTopicPlaylistID":7310,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.050 ","End":"00:03.645","Text":"In this exercise, you have to compute the angle,"},{"Start":"00:03.645 ","End":"00:06.270","Text":"Theta, between A and B."},{"Start":"00:06.270 ","End":"00:09.150","Text":"They\u0027re matrices, but there are also vectors."},{"Start":"00:09.150 ","End":"00:14.320","Text":"If we take the inner product space of 2 by 2 real matrices,"},{"Start":"00:14.320 ","End":"00:18.150","Text":"and usual product there is that"},{"Start":"00:18.150 ","End":"00:25.050","Text":"we take the trace of the transpose of the second one times the first one."},{"Start":"00:25.050 ","End":"00:29.640","Text":"This is the formula for the cosine of the angle,"},{"Start":"00:29.640 ","End":"00:35.390","Text":"the inner product over the norm of the first norm of the second."},{"Start":"00:35.390 ","End":"00:37.280","Text":"Let\u0027s go for the numerator,"},{"Start":"00:37.280 ","End":"00:39.485","Text":"the inner product of A and B."},{"Start":"00:39.485 ","End":"00:42.740","Text":"It\u0027s the trace of B transpose,"},{"Start":"00:42.740 ","End":"00:47.515","Text":"which is like B but rows instead of columns."},{"Start":"00:47.515 ","End":"00:51.495","Text":"That\u0027s what we get an A as is."},{"Start":"00:51.495 ","End":"00:54.180","Text":"We only care about the diagonal,"},{"Start":"00:54.180 ","End":"00:58.640","Text":"so we just have to do this times 
this gives us this."},{"Start":"00:58.640 ","End":"01:01.345","Text":"It\u0027s 0 times 2 plus 2 times 3."},{"Start":"01:01.345 ","End":"01:04.770","Text":"We get this from this times this,"},{"Start":"01:04.770 ","End":"01:11.900","Text":"which is minus 1 times 1 plus 3 times minus 1, altogether minus 4."},{"Start":"01:11.900 ","End":"01:14.690","Text":"For the trace, we add the diagonal,"},{"Start":"01:14.690 ","End":"01:17.120","Text":"6 and minus 4, and we get 2."},{"Start":"01:17.120 ","End":"01:21.655","Text":"Next, let\u0027s compute the norm of A which is the square root."},{"Start":"01:21.655 ","End":"01:26.320","Text":"Inner product of A with A is the trace of A transpose times A."},{"Start":"01:26.320 ","End":"01:27.650","Text":"This is like this,"},{"Start":"01:27.650 ","End":"01:32.285","Text":"but transpose like the Row 2, 1 is the Column 2, 1 and so on."},{"Start":"01:32.285 ","End":"01:38.925","Text":"As before, we just need the diagonal, the rest don\u0027t matter."},{"Start":"01:38.925 ","End":"01:42.870","Text":"We get this top-left first row, first column"},{"Start":"01:42.870 ","End":"01:46.025","Text":"from the first row here with the first column here,"},{"Start":"01:46.025 ","End":"01:49.040","Text":"2 times 2 plus 3 times 3 is 13."},{"Start":"01:49.040 ","End":"01:54.305","Text":"Then this we get from the second row times the second column,"},{"Start":"01:54.305 ","End":"01:56.870","Text":"we get 1 times 1 is 1,"},{"Start":"01:56.870 ","End":"01:58.550","Text":"minus 1 times minus 1 is also 1,"},{"Start":"01:58.550 ","End":"02:02.480","Text":"1 and 1 is 2, 13 and 2 is 15."},{"Start":"02:02.480 ","End":"02:04.580","Text":"But then we have the square root,"},{"Start":"02:04.580 ","End":"02:07.340","Text":"so we\u0027ve got square root of 15."},{"Start":"02:07.340 ","End":"02:09.530","Text":"That\u0027s 2 of the 3 quantities."},{"Start":"02:09.530 ","End":"02:11.975","Text":"Next, we need norm of B."},{"Start":"02:11.975 ","End":"02:14.870","Text":"Exactly the same 
idea just with B."},{"Start":"02:14.870 ","End":"02:16.700","Text":"This is B transpose."},{"Start":"02:16.700 ","End":"02:22.640","Text":"B transpose means that I have 0 minus 1 as the row,"},{"Start":"02:22.640 ","End":"02:25.550","Text":"we have 0 minus 1 and the column 2, 3, 2, 3."},{"Start":"02:25.550 ","End":"02:27.725","Text":"Again, we just do the diagonals."},{"Start":"02:27.725 ","End":"02:31.355","Text":"This top-left is 0, 2 with 0, 2 is 4."},{"Start":"02:31.355 ","End":"02:34.820","Text":"This 1 comes from minus 1,3 with minus 1, 3, 1"},{"Start":"02:34.820 ","End":"02:38.525","Text":"and 9 is 10, then 4 and 10 is 14, so square root of 14."},{"Start":"02:38.525 ","End":"02:42.410","Text":"Now, we have all the ingredients to put in here."},{"Start":"02:42.410 ","End":"02:48.380","Text":"The cosine becomes 2 over this times this."},{"Start":"02:48.380 ","End":"02:51.640","Text":"Use the calculator,"},{"Start":"02:51.640 ","End":"02:55.170","Text":"this actually simplifies to this but it\u0027s not really important."},{"Start":"02:55.170 ","End":"02:58.580","Text":"We could have written it as the square root of 4"},{"Start":"02:58.580 ","End":"03:03.650","Text":"over the square root of 15 times 14, which is 210,"},{"Start":"03:03.650 ","End":"03:09.080","Text":"and then the square root of 4 over 210"},{"Start":"03:09.080 ","End":"03:14.525","Text":"and divide top and bottom by 2 so it\u0027s the square root of 2 over 105,"},{"Start":"03:14.525 ","End":"03:17.995","Text":"not so important if you\u0027re using a calculator."},{"Start":"03:17.995 ","End":"03:20.565","Text":"If we just wanted the cosine,"},{"Start":"03:20.565 ","End":"03:24.695","Text":"we would do this over this square root and end up here."},{"Start":"03:24.695 ","End":"03:29.810","Text":"If we also have to find the angle itself, say in degrees,"},{"Start":"03:29.810 ","End":"03:32.810","Text":"then we need to find the arccosine of this on the calculator,"},{"Start":"03:32.810 ","End":"03:37.560","Text":"it\u0027s either shift cosine or 
inverse cosine or something like that,"},{"Start":"03:37.560 ","End":"03:41.340","Text":"and the answer comes out 89.97 degrees."},{"Start":"03:41.340 ","End":"03:43.600","Text":"We\u0027re done."}],"ID":10132}],"Thumbnail":null,"ID":7310},{"Name":"Orthogonality","TopicPlaylistFirstVideoID":0,"Duration":null,"Videos":[{"Watched":false,"Name":"Theory and Examples","Duration":"7m 2s","ChapterTopicVideoID":13519,"CourseChapterTopicPlaylistID":7311,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:07.305","Text":"In this clip, we\u0027ll learn a new concept orthogonality in an inner product space."},{"Start":"00:07.305 ","End":"00:10.185","Text":"If you have 2 vectors, u and v,"},{"Start":"00:10.185 ","End":"00:12.780","Text":"and we\u0027re in an inner product space,"},{"Start":"00:12.780 ","End":"00:18.240","Text":"then they\u0027re called orthogonal or orthogonal to each other"},{"Start":"00:18.240 ","End":"00:24.205","Text":"or mutually orthogonal if their inner product is 0,"},{"Start":"00:24.205 ","End":"00:26.775","Text":"and there\u0027s a way of writing that."},{"Start":"00:26.775 ","End":"00:31.900","Text":"We say u is orthogonal to v. 
This is the symbol."},{"Start":"00:32.630 ","End":"00:35.775","Text":"There\u0027s another word for orthogonal,"},{"Start":"00:35.775 ","End":"00:37.850","Text":"you can say perpendicular."},{"Start":"00:37.850 ","End":"00:43.190","Text":"I could say u and v are perpendicular, mutually perpendicular,"},{"Start":"00:43.190 ","End":"00:49.170","Text":"perpendicular to each other, so that\u0027s equivalent."},{"Start":"00:49.170 ","End":"00:54.285","Text":"There\u0027s a reason why it\u0027s called perpendicular, we\u0027ll soon see."},{"Start":"00:54.285 ","End":"00:57.300","Text":"Next I\u0027ll show you a few examples."},{"Start":"00:57.300 ","End":"00:59.030","Text":"In the first example,"},{"Start":"00:59.030 ","End":"01:02.255","Text":"we\u0027ll take the 2 vectors, 1, 2,"},{"Start":"01:02.255 ","End":"01:07.325","Text":"and 4 minus 2 in the inner product space R^2."},{"Start":"01:07.325 ","End":"01:09.680","Text":"If I don\u0027t say otherwise,"},{"Start":"01:09.680 ","End":"01:12.785","Text":"it means this standard inner product."},{"Start":"01:12.785 ","End":"01:15.890","Text":"To show that u and v are orthogonal,"},{"Start":"01:15.890 ","End":"01:21.245","Text":"we have to look at the inner product of u and v and see if we can get 0."},{"Start":"01:21.245 ","End":"01:24.710","Text":"Well, the inner product is the same as the dot product"},{"Start":"01:24.710 ","End":"01:28.250","Text":"because we\u0027re using the standard inner product,"},{"Start":"01:28.250 ","End":"01:31.620","Text":"and the dot product of u and v,"},{"Start":"01:31.620 ","End":"01:32.715","Text":"well u is this,"},{"Start":"01:32.715 ","End":"01:37.890","Text":"v is that, and the way we do that is multiply component wise and add."},{"Start":"01:37.890 ","End":"01:39.920","Text":"1 times 4 is 4,"},{"Start":"01:39.920 ","End":"01:42.365","Text":"2 times minus 2 is minus 4,"},{"Start":"01:42.365 ","End":"01:44.830","Text":"4 minus 4 is 0."},{"Start":"01:44.830 ","End":"01:48.245","Text":"If 
the inner product of u and v is 0,"},{"Start":"01:48.245 ","End":"01:51.455","Text":"then they are orthogonal or perpendicular."},{"Start":"01:51.455 ","End":"01:56.105","Text":"Now if you were to actually sketch the vectors 1, 2,"},{"Start":"01:56.105 ","End":"01:58.175","Text":"and 4 minus 2,"},{"Start":"01:58.175 ","End":"02:02.375","Text":"you would find that they are actually at 90 degrees."},{"Start":"02:02.375 ","End":"02:05.930","Text":"They are perpendicular to each other in the plane"},{"Start":"02:05.930 ","End":"02:09.965","Text":"and that\u0027s where the word perpendicular comes from."},{"Start":"02:09.965 ","End":"02:11.885","Text":"I mean, that\u0027s why we also use that."},{"Start":"02:11.885 ","End":"02:16.360","Text":"This works in R^2 and in R^3."},{"Start":"02:16.360 ","End":"02:17.950","Text":"But other than that,"},{"Start":"02:17.950 ","End":"02:19.640","Text":"we can say perpendicular,"},{"Start":"02:19.640 ","End":"02:26.780","Text":"but it doesn\u0027t have geometric meaning other than in the 2D space and 3D space."},{"Start":"02:26.970 ","End":"02:34.845","Text":"Now let\u0027s take an example in a 4 dimensional euclidean space."},{"Start":"02:34.845 ","End":"02:37.455","Text":"These are the 2 vectors."},{"Start":"02:37.455 ","End":"02:40.295","Text":"I\u0027m going to show that they\u0027re orthogonal."},{"Start":"02:40.295 ","End":"02:43.500","Text":"It\u0027s going to be the standard inner product."},{"Start":"02:43.500 ","End":"02:51.320","Text":"As before, we look at the inner product of u and v and see if we can get it to be 0."},{"Start":"02:51.320 ","End":"02:56.445","Text":"It\u0027s equal to u dot v that\u0027s the standard inner product."},{"Start":"02:56.445 ","End":"02:58.640","Text":"If you take the 2 vectors,"},{"Start":"02:58.640 ","End":"03:02.975","Text":"what it means is that you have to multiply component wise,"},{"Start":"03:02.975 ","End":"03:08.350","Text":"1 with 4, 2 with 1, and then add what you 
get."},{"Start":"03:08.350 ","End":"03:13.560","Text":"We get what? 4 and 2 and 1 and minus 7,"},{"Start":"03:13.560 ","End":"03:15.885","Text":"and all together we get 0,"},{"Start":"03:15.885 ","End":"03:18.320","Text":"and if the inner product is 0,"},{"Start":"03:18.320 ","End":"03:20.465","Text":"then they are orthogonal."},{"Start":"03:20.465 ","End":"03:23.160","Text":"We could highlight that."},{"Start":"03:24.320 ","End":"03:28.840","Text":"Let\u0027s go for something different now."},{"Start":"03:28.840 ","End":"03:35.615","Text":"This time we\u0027ll take the inner product space of 2 by 2 real matrices,"},{"Start":"03:35.615 ","End":"03:41.640","Text":"and these are our 2 matrices which can be viewed as vectors also."},{"Start":"03:41.900 ","End":"03:50.470","Text":"The claim is that they are orthogonal and orthogonal means that the inner product is 0."},{"Start":"03:50.660 ","End":"03:58.490","Text":"If it doesn\u0027t say otherwise there is a usual inner product in the space of matrices."},{"Start":"03:58.490 ","End":"04:03.080","Text":"We take the trace of the second 1 transpose times the"},{"Start":"04:03.080 ","End":"04:09.525","Text":"first and in our case it\u0027s B transpose times A."},{"Start":"04:09.525 ","End":"04:12.135","Text":"B transpose is this,"},{"Start":"04:12.135 ","End":"04:15.800","Text":"it\u0027s what we get when we swap rows and columns from B,"},{"Start":"04:15.800 ","End":"04:18.590","Text":"A, as is here."},{"Start":"04:18.590 ","End":"04:21.200","Text":"We want the trace of the product."},{"Start":"04:21.200 ","End":"04:23.900","Text":"We don\u0027t need the whole product,"},{"Start":"04:23.900 ","End":"04:25.880","Text":"we just need the diagonal."},{"Start":"04:25.880 ","End":"04:28.430","Text":"This minus 2, first row,"},{"Start":"04:28.430 ","End":"04:31.895","Text":"first column comes from this and this,"},{"Start":"04:31.895 ","End":"04:36.650","Text":"minus 8 plus 6 is minus 2."},{"Start":"04:36.650 
","End":"04:39.440","Text":"This 2 second row,"},{"Start":"04:39.440 ","End":"04:43.790","Text":"second column comes from taking this row times this column,"},{"Start":"04:43.790 ","End":"04:47.105","Text":"minus 1 times 2 plus 1 times 4 is 2,"},{"Start":"04:47.105 ","End":"04:51.000","Text":"and then minus 2 and 2 is 0."},{"Start":"04:51.190 ","End":"04:53.660","Text":"The inner product is 0,"},{"Start":"04:53.660 ","End":"04:56.845","Text":"which means that they are orthogonal."},{"Start":"04:56.845 ","End":"05:00.210","Text":"One more example, a different kind of space."},{"Start":"05:00.210 ","End":"05:05.300","Text":"We\u0027ll take the space of polynomials or functions."},{"Start":"05:05.300 ","End":"05:12.520","Text":"We\u0027ll take the space continuous functions on the interval from minus 1 to 1."},{"Start":"05:12.520 ","End":"05:15.800","Text":"Notice that this is not our usual."},{"Start":"05:15.800 ","End":"05:17.780","Text":"Usually we take 0 to 1,"},{"Start":"05:17.780 ","End":"05:23.140","Text":"but it could be any 2, end points."},{"Start":"05:23.390 ","End":"05:27.020","Text":"These are the 2 functions,"},{"Start":"05:27.020 ","End":"05:29.740","Text":"x squared plus 1 and x cubed plus 4x"},{"Start":"05:29.740 ","End":"05:33.860","Text":"and the claim is that they are orthogonal in this inner product space."},{"Start":"05:33.860 ","End":"05:36.815","Text":"There is the usual inner product,"},{"Start":"05:36.815 ","End":"05:38.745","Text":"the integral inner product."},{"Start":"05:38.745 ","End":"05:42.830","Text":"If I have 2 functions that are continuous on minus 1, 1,"},{"Start":"05:42.830 ","End":"05:47.240","Text":"we take the integral from 1 endpoint to the other of their product."},{"Start":"05:47.240 ","End":"05:50.045","Text":"Let\u0027s check what happens with p and q."},{"Start":"05:50.045 ","End":"05:53.465","Text":"The aim is to show that the inner product is 0."},{"Start":"05:53.465 ","End":"05:56.510","Text":"Now the inner product 
according to this formula is the integral from"},{"Start":"05:56.510 ","End":"06:01.390","Text":"minus 1 to 1 of p of x times q of x, dx."},{"Start":"06:01.390 ","End":"06:05.450","Text":"Next we substitute what p of x and q of x are,"},{"Start":"06:05.450 ","End":"06:06.695","Text":"x squared plus 1,"},{"Start":"06:06.695 ","End":"06:08.570","Text":"x cubed plus 4x."},{"Start":"06:08.570 ","End":"06:13.760","Text":"Then just multiply the polynomials and this is the result."},{"Start":"06:13.760 ","End":"06:18.275","Text":"The integral of this comes out to be this,"},{"Start":"06:18.275 ","End":"06:23.810","Text":"and then we have to plug in 1 and minus 1 and subtract."},{"Start":"06:23.810 ","End":"06:27.770","Text":"I happened to notice that this is an even function."},{"Start":"06:27.770 ","End":"06:30.320","Text":"All the powers are even."},{"Start":"06:30.320 ","End":"06:32.720","Text":"If I plug in 1 or minus 1,"},{"Start":"06:32.720 ","End":"06:39.270","Text":"I\u0027ll get the same thing and the answer comes out to be 0."},{"Start":"06:39.270 ","End":"06:42.830","Text":"If the inner product of p and q is 0,"},{"Start":"06:42.830 ","End":"06:48.850","Text":"then we can say that they are indeed orthogonal or perpendicular."},{"Start":"06:48.850 ","End":"06:51.330","Text":"That\u0027s the last example."},{"Start":"06:51.330 ","End":"06:55.020","Text":"We did some Euclidean space,"},{"Start":"06:55.020 ","End":"06:58.680","Text":"matrices, functions,"},{"Start":"06:58.680 ","End":"07:02.950","Text":"and that\u0027s it for the introduction."}],"ID":14159},{"Watched":false,"Name":"Exercise 1","Duration":"1m 18s","ChapterTopicVideoID":9706,"CourseChapterTopicPlaylistID":7311,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:03.780","Text":"In this exercise, we\u0027re given 2 
vectors,"},{"Start":"00:03.780 ","End":"00:07.230","Text":"u and v in R^3,"},{"Start":"00:07.230 ","End":"00:11.710","Text":"and we have to prove that they are orthogonal."},{"Start":"00:11.720 ","End":"00:15.780","Text":"Remember orthogonal, which is written like this,"},{"Start":"00:15.780 ","End":"00:18.600","Text":"means that the inner product is 0."},{"Start":"00:18.600 ","End":"00:21.735","Text":"Didn\u0027t say here what the inner product is,"},{"Start":"00:21.735 ","End":"00:26.890","Text":"unless otherwise specified in R^n or in R^3,"},{"Start":"00:26.890 ","End":"00:32.865","Text":"in our case, it\u0027s the standard inner product or the dot product."},{"Start":"00:32.865 ","End":"00:36.030","Text":"We start with the inner product of u and v,"},{"Start":"00:36.030 ","End":"00:38.235","Text":"like I said, it\u0027s the dot product."},{"Start":"00:38.235 ","End":"00:40.260","Text":"Here\u0027s u, here\u0027s v."},{"Start":"00:40.260 ","End":"00:43.800","Text":"The dot product means you multiply component-wise"},{"Start":"00:43.800 ","End":"00:52.220","Text":"and add 1 times 4 plus 2 times 7 plus 3 times negative 6, and we get 0."},{"Start":"00:52.220 ","End":"00:54.650","Text":"If the inner product is 0,"},{"Start":"00:54.650 ","End":"00:57.370","Text":"then they are orthogonal."},{"Start":"00:57.370 ","End":"00:59.630","Text":"By the way, remember,"},{"Start":"00:59.630 ","End":"01:02.360","Text":"orthogonal is also called perpendicular."},{"Start":"01:02.360 ","End":"01:04.685","Text":"In the case of R^2 and R^3,"},{"Start":"01:04.685 ","End":"01:07.760","Text":"perpendicular really means perpendicular."},{"Start":"01:07.760 ","End":"01:11.540","Text":"If you sketch these 2 vectors,"},{"Start":"01:11.540 ","End":"01:14.930","Text":"they would come out at 90 degrees to each other."},{"Start":"01:14.930 ","End":"01:18.660","Text":"They are perpendicular in the geometrical sense."}],"ID":10180},{"Watched":false,"Name":"Exercise 2","Duration":"1m 
11s","ChapterTopicVideoID":9707,"CourseChapterTopicPlaylistID":7311,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.860","Text":"In this exercise, we have to find the value of the parameter k"},{"Start":"00:04.860 ","End":"00:12.735","Text":"for which these 2 vectors u and v are orthogonal in R^3."},{"Start":"00:12.735 ","End":"00:16.590","Text":"K is here, I colored it so you could see it."},{"Start":"00:16.590 ","End":"00:18.945","Text":"This is the parameter."},{"Start":"00:18.945 ","End":"00:23.100","Text":"It doesn\u0027t say explicitly what the inner product is"},{"Start":"00:23.100 ","End":"00:28.365","Text":"so we assume it\u0027s the standard inner product also known as the dot product."},{"Start":"00:28.365 ","End":"00:33.540","Text":"To say that u and v are orthogonal or perpendicular,"},{"Start":"00:33.540 ","End":"00:38.505","Text":"is the same as saying that their inner product is 0."},{"Start":"00:38.505 ","End":"00:41.790","Text":"The standard inner product is the dot product"},{"Start":"00:41.790 ","End":"00:43.970","Text":"which means that we take the 2 vectors"},{"Start":"00:43.970 ","End":"00:45.680","Text":"and multiply component-wise"},{"Start":"00:45.680 ","End":"00:53.565","Text":"and add 1 times 4, k times 7, and 3 times minus 6."},{"Start":"00:53.565 ","End":"00:58.680","Text":"This gives us an equation in k because we want this to equal 0."},{"Start":"00:58.680 ","End":"01:01.890","Text":"Bring the numbers to the right-hand side,"},{"Start":"01:01.890 ","End":"01:04.200","Text":"we get 7k equals 14"},{"Start":"01:04.200 ","End":"01:12.400","Text":"and so the answer is k equals 2 and we\u0027re done."}],"ID":10181},{"Watched":false,"Name":"Exercise 3","Duration":"4m 
14s","ChapterTopicVideoID":9708,"CourseChapterTopicPlaylistID":7311,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:06.135","Text":"In this exercise, we have to find a unit vector perpendicular"},{"Start":"00:06.135 ","End":"00:13.330","Text":"or orthogonal to the vectors u and v as follows in R^3."},{"Start":"00:13.880 ","End":"00:19.330","Text":"We take the standard inner product unless it says otherwise, which it doesn\u0027t."},{"Start":"00:19.330 ","End":"00:23.260","Text":"Standard inner product or dot product."},{"Start":"00:23.260 ","End":"00:26.850","Text":"To say that 2 vectors are perpendicular,"},{"Start":"00:26.850 ","End":"00:29.220","Text":"I should have used different letters, not this u and v."},{"Start":"00:29.220 ","End":"00:33.690","Text":"General u and v, they are perpendicular if their inner product is 0,"},{"Start":"00:33.690 ","End":"00:36.495","Text":"and in this case it means the dot product is 0."},{"Start":"00:36.495 ","End":"00:40.320","Text":"Let\u0027s first forget about the unit vector."},{"Start":"00:40.320 ","End":"00:45.135","Text":"Just look for any vector which is perpendicular to these 2."},{"Start":"00:45.135 ","End":"00:49.875","Text":"At the end we\u0027ll deal with making it unit."},{"Start":"00:49.875 ","End":"00:55.960","Text":"Let\u0027s take a vector w with components a, b, c,"},{"Start":"00:55.960 ","End":"00:59.500","Text":"and introduce the requirements that w is going to be"},{"Start":"00:59.500 ","End":"01:01.570","Text":"perpendicular to u and to v."},{"Start":"01:01.570 ","End":"01:04.240","Text":"First of all, it\u0027s perpendicular to u,"},{"Start":"01:04.240 ","End":"01:06.489","Text":"so the dot product is 0,"},{"Start":"01:06.489 ","End":"01:12.310","Text":"which means that this is w and this is u from 
here."},{"Start":"01:12.310 ","End":"01:15.130","Text":"Dot product is 0,"},{"Start":"01:15.130 ","End":"01:17.580","Text":"should have put a dot here."},{"Start":"01:17.580 ","End":"01:24.520","Text":"That means that a times 1 plus b times 2 plus c times 3 is 0."},{"Start":"01:24.520 ","End":"01:28.070","Text":"W is also perpendicular to v,"},{"Start":"01:28.070 ","End":"01:31.490","Text":"which means that the dot product of w with v is 0."},{"Start":"01:31.490 ","End":"01:33.050","Text":"This vector here is v."},{"Start":"01:33.050 ","End":"01:39.450","Text":"That gives us a second equation in a, b, c, which is this 1."},{"Start":"01:39.560 ","End":"01:46.615","Text":"What we have now is 2 equations and 3 unknowns."},{"Start":"01:46.615 ","End":"01:48.770","Text":"We could do it with a matrix,"},{"Start":"01:48.770 ","End":"01:53.100","Text":"but let\u0027s just do it as is."},{"Start":"01:54.200 ","End":"01:59.845","Text":"Like in a matrix, this would be Row 1 and this would be Row 2."},{"Start":"01:59.845 ","End":"02:07.925","Text":"What I\u0027m going to do is subtract twice Row 1 from Row 2."},{"Start":"02:07.925 ","End":"02:09.605","Text":"I\u0027ll write that down."},{"Start":"02:09.605 ","End":"02:16.840","Text":"Row 2 minus twice Row 1 into Row 2."},{"Start":"02:16.840 ","End":"02:20.765","Text":"That will get rid of the a in the second equation,"},{"Start":"02:20.765 ","End":"02:23.375","Text":"this minus twice this."},{"Start":"02:23.375 ","End":"02:26.855","Text":"What we get, we get Row 1 as it is,"},{"Start":"02:26.855 ","End":"02:31.973","Text":"and this is the Row 2 minus 2 Row 1."},{"Start":"02:31.973 ","End":"02:34.355","Text":"Look at it again."},{"Start":"02:34.355 ","End":"02:37.895","Text":"2a minus twice a is nothing."},{"Start":"02:37.895 ","End":"02:41.290","Text":"5b minus twice 2b is b."},{"Start":"02:41.290 ","End":"02:47.265","Text":"7c minus twice 3c is c, equals 0."},{"Start":"02:47.265 ","End":"02:50.210","Text":"We have more 
unknowns than equations."},{"Start":"02:50.210 ","End":"02:54.830","Text":"In this case, c is going to be a free variable."},{"Start":"02:54.830 ","End":"03:00.045","Text":"Using our system, we called it the wandering 1s."},{"Start":"03:00.045 ","End":"03:03.045","Text":"Just let c equals 1."},{"Start":"03:03.045 ","End":"03:06.365","Text":"Could be anything but 1 is most convenient."},{"Start":"03:06.365 ","End":"03:08.300","Text":"Anything not 0, that is."},{"Start":"03:08.300 ","End":"03:13.700","Text":"If c is 1 and plug it in here, b is minus 1."},{"Start":"03:13.700 ","End":"03:17.270","Text":"Then if c is 1 and b is minus 1,"},{"Start":"03:17.270 ","End":"03:20.855","Text":"then that gives us that a is minus 1."},{"Start":"03:20.855 ","End":"03:25.710","Text":"We have a, b, and c, and so we have w."},{"Start":"03:25.710 ","End":"03:31.160","Text":"Now, w is a vector which is perpendicular to u and to v."},{"Start":"03:31.160 ","End":"03:37.165","Text":"The only thing that\u0027s missing is that it\u0027s not or may not be a unit vector."},{"Start":"03:37.165 ","End":"03:42.890","Text":"We can easily take care of that by normalizing it."},{"Start":"03:42.890 ","End":"03:47.090","Text":"Normalizing it means dividing by its norm"},{"Start":"03:47.090 ","End":"03:49.310","Text":"and that will give us a unit vector."},{"Start":"03:49.310 ","End":"03:50.750","Text":"So here\u0027s w."},{"Start":"03:50.750 ","End":"03:58.940","Text":"The norm of w is the square root of the dot product of w with itself."},{"Start":"03:58.940 ","End":"04:01.675","Text":"You\u0027ll see it\u0027s minus 1 times minus 1 is 1."},{"Start":"04:01.675 ","End":"04:04.050","Text":"Again, and 1 times 1 is 1."},{"Start":"04:04.050 ","End":"04:07.275","Text":"In short, this comes out to be root 3."},{"Start":"04:07.275 ","End":"04:10.200","Text":"Then we can divide each of the 3 components by root 3."},{"Start":"04:10.200 ","End":"04:14.110","Text":"That\u0027s the 
answer."}],"ID":10182},{"Watched":false,"Name":"Exercise 4","Duration":"1m 28s","ChapterTopicVideoID":9709,"CourseChapterTopicPlaylistID":7311,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:02.790","Text":"In this exercise, we have to show that"},{"Start":"00:02.790 ","End":"00:08.340","Text":"these 2 polynomials or functions p and q as follows,"},{"Start":"00:08.340 ","End":"00:13.800","Text":"are orthogonal, and the inner product space is this,"},{"Start":"00:13.800 ","End":"00:19.740","Text":"which is the continuous functions on the interval from 0-1."},{"Start":"00:19.740 ","End":"00:21.870","Text":"With the usual inner product,"},{"Start":"00:21.870 ","End":"00:26.310","Text":"the integral inner product, which is defined as follows."},{"Start":"00:26.310 ","End":"00:28.275","Text":"You take the 2 functions,"},{"Start":"00:28.275 ","End":"00:34.390","Text":"multiply them and integrate on the interval 0-1."},{"Start":"00:34.480 ","End":"00:38.690","Text":"In our case, we want the inner product of p and q,"},{"Start":"00:38.690 ","End":"00:43.525","Text":"and we want to show that it comes out to be 0."},{"Start":"00:43.525 ","End":"00:46.130","Text":"By the definition, it\u0027s this,"},{"Start":"00:46.130 ","End":"00:48.475","Text":"the integral of the product."},{"Start":"00:48.475 ","End":"00:53.580","Text":"Then just replacing p and q with the definitions from here,"},{"Start":"00:53.580 ","End":"00:59.885","Text":"do a bit of algebra and multiply these 2 out and we get this."},{"Start":"00:59.885 ","End":"01:03.120","Text":"Now, the integral is this."},{"Start":"01:03.120 ","End":"01:04.335","Text":"I\u0027ll leave you to check."},{"Start":"01:04.335 ","End":"01:05.720","Text":"It\u0027s a definite integral,"},{"Start":"01:05.720 ","End":"01:09.080","Text":"so we have to plug in 0 
and 1 and do a subtraction."},{"Start":"01:09.080 ","End":"01:11.780","Text":"0 doesn\u0027t give anything, so we just plug in 1,"},{"Start":"01:11.780 ","End":"01:14.575","Text":"we get 3 minus 6 plus 4 minus 1,"},{"Start":"01:14.575 ","End":"01:18.390","Text":"and this comes out to be 0."},{"Start":"01:18.390 ","End":"01:23.330","Text":"We\u0027ve shown that the inner product of p with q is 0"},{"Start":"01:23.330 ","End":"01:26.930","Text":"and that means they are orthogonal or perpendicular."},{"Start":"01:26.930 ","End":"01:29.130","Text":"We\u0027re done."}],"ID":10183},{"Watched":false,"Name":"Exercise 5","Duration":"3m 41s","ChapterTopicVideoID":9710,"CourseChapterTopicPlaylistID":7311,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:09.210","Text":"In this exercise, we take the vector space P_n of R,"},{"Start":"00:09.210 ","End":"00:12.690","Text":"which is, to remind you, polynomials of degree"},{"Start":"00:12.690 ","End":"00:16.830","Text":"less than or equal to n over the real numbers."},{"Start":"00:16.830 ","End":"00:19.020","Text":"We want to make it an inner product space,"},{"Start":"00:19.020 ","End":"00:22.020","Text":"so we have to define an inner product."},{"Start":"00:22.020 ","End":"00:24.645","Text":"We define it as follows."},{"Start":"00:24.645 ","End":"00:30.180","Text":"The inner product of p and q is the sum of n plus 1 terms,"},{"Start":"00:30.180 ","End":"00:34.440","Text":"k goes from 0 up to n, p of k,"},{"Start":"00:34.440 ","End":"00:40.230","Text":"q of k. That is, p of 0 times q of 0 plus p of 1 times q of 1,"},{"Start":"00:40.230 ","End":"00:41.490","Text":"and so on dot, dot,"},{"Start":"00:41.490 ","End":"00:50.465","Text":"dot up to p of n times q of n. That\u0027s for any natural number n. 
Now,"},{"Start":"00:50.465 ","End":"00:53.920","Text":"we have 2 polynomials,"},{"Start":"00:53.920 ","End":"00:56.835","Text":"p and q as follows,"},{"Start":"00:56.835 ","End":"00:58.850","Text":"and we\u0027ll take n equals 7."},{"Start":"00:58.850 ","End":"01:02.855","Text":"In other words, we are in the space P7 of R,"},{"Start":"01:02.855 ","End":"01:07.525","Text":"certainly each of these is degree less than or equal to 7."},{"Start":"01:07.525 ","End":"01:11.325","Text":"In fact, they\u0027re each of degree 4."},{"Start":"01:11.325 ","End":"01:18.010","Text":"We want to show that these 2 are orthogonal with this definition of the inner product."},{"Start":"01:18.010 ","End":"01:23.675","Text":"Now just adapting this definition to the case where n is 7,"},{"Start":"01:23.675 ","End":"01:28.745","Text":"the inner product of p and q is the sum from 0 to 7."},{"Start":"01:28.745 ","End":"01:35.505","Text":"It\u0027s from p of 0 times q of 0 up to p of 7 times q of 7,"},{"Start":"01:35.505 ","End":"01:38.200","Text":"that\u0027s 8 terms altogether."},{"Start":"01:39.490 ","End":"01:41.630","Text":"Perhaps I should mention,"},{"Start":"01:41.630 ","End":"01:45.230","Text":"I didn\u0027t prove that this is an inner product,"},{"Start":"01:45.230 ","End":"01:50.820","Text":"but it is known to be an inner product, so you can just take it on faith."},{"Start":"01:50.820 ","End":"01:55.070","Text":"I just copied p and q again,"},{"Start":"01:55.070 ","End":"01:58.260","Text":"just to have them handy."},{"Start":"01:58.400 ","End":"02:05.970","Text":"Notice that p has 4 roots, 0, 2, 4,"},{"Start":"02:05.970 ","End":"02:14.949","Text":"and 6, meaning that p of 0 equals p of 2 equals p of 4 equals p of 6 equals 0."},{"Start":"02:18.200 ","End":"02:22.310","Text":"In the case of q, if I plug in 1, 3,"},{"Start":"02:22.310 ","End":"02:27.260","Text":"5, or 7, I\u0027ll get 0 and that\u0027s what I wrote here."},{"Start":"02:27.710 ","End":"02:32.210","Text":"Just copy this. 
Let\u0027s compute the inner product of p with q."},{"Start":"02:32.210 ","End":"02:35.240","Text":"It\u0027s the sum of 8 terms,"},{"Start":"02:35.240 ","End":"02:38.810","Text":"and this is what it\u0027s equal to."},{"Start":"02:38.810 ","End":"02:41.480","Text":"I could have written all 8 terms out anyway,"},{"Start":"02:41.480 ","End":"02:42.815","Text":"I wrote a few of them,"},{"Start":"02:42.815 ","End":"02:47.435","Text":"p of 0 times q of 0 up to p of 7 times q of 7."},{"Start":"02:47.435 ","End":"02:51.890","Text":"Now notice on the even numbers, p is 0,"},{"Start":"02:51.890 ","End":"02:54.020","Text":"so that this is 0,"},{"Start":"02:54.020 ","End":"03:01.720","Text":"this is 0, this is 0 and also p of 6."},{"Start":"03:02.150 ","End":"03:04.410","Text":"In the case of q,"},{"Start":"03:04.410 ","End":"03:06.330","Text":"the odd ones are 0,"},{"Start":"03:06.330 ","End":"03:08.895","Text":"so q of 1 is 0,"},{"Start":"03:08.895 ","End":"03:11.640","Text":"q of 3 is 0,"},{"Start":"03:11.640 ","End":"03:14.535","Text":"q of 7 is 0."},{"Start":"03:14.535 ","End":"03:21.450","Text":"In each pair, at least 1 of them is 0."},{"Start":"03:21.450 ","End":"03:23.835","Text":"Each of these is 0,"},{"Start":"03:23.835 ","End":"03:28.140","Text":"and so the sum is also 0."},{"Start":"03:28.140 ","End":"03:33.065","Text":"The inner product of p with q is 0,"},{"Start":"03:33.065 ","End":"03:41.280","Text":"which means that p and q are orthogonal. 
We\u0027re done."}],"ID":10184},{"Watched":false,"Name":"Exercise 6","Duration":"2m 9s","ChapterTopicVideoID":9711,"CourseChapterTopicPlaylistID":7311,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:02.430","Text":"In this exercise, we\u0027re dealing with"},{"Start":"00:02.430 ","End":"00:08.070","Text":"the inner product space of 2 by 2 real matrices,"},{"Start":"00:08.070 ","End":"00:12.240","Text":"where the inner product is the usual one for matrices."},{"Start":"00:12.240 ","End":"00:18.240","Text":"The inner product of X with Y is the transpose of Y times X,"},{"Start":"00:18.240 ","End":"00:20.220","Text":"and then you take the trace of that."},{"Start":"00:20.220 ","End":"00:22.530","Text":"We have 2 matrices,"},{"Start":"00:22.530 ","End":"00:24.780","Text":"A and B in this space."},{"Start":"00:24.780 ","End":"00:28.455","Text":"Notice that A has a parameter k in it."},{"Start":"00:28.455 ","End":"00:30.285","Text":"Now the question is,"},{"Start":"00:30.285 ","End":"00:32.970","Text":"for which values of this parameter k,"},{"Start":"00:32.970 ","End":"00:37.215","Text":"are these matrices A and B, orthogonal?"},{"Start":"00:37.215 ","End":"00:41.910","Text":"In any space, orthogonal means that the inner product is 0,"},{"Start":"00:41.910 ","End":"00:50.285","Text":"and in our case it comes out to B transpose times A and then trace of that."},{"Start":"00:50.285 ","End":"00:53.400","Text":"This is going to be equal to 0."},{"Start":"00:53.660 ","End":"00:56.180","Text":"B transpose is here."},{"Start":"00:56.180 ","End":"00:59.150","Text":"We got it from B just by flipping it along the diagonal"},{"Start":"00:59.150 ","End":"01:01.085","Text":"or interchanging rows and columns."},{"Start":"01:01.085 ","End":"01:03.770","Text":"A, we take as is,"},{"Start":"01:03.770 
","End":"01:07.234","Text":"I want the trace of the product to be 0."},{"Start":"01:07.234 ","End":"01:11.570","Text":"For the trace, I just have to compute the diagonal of the product."},{"Start":"01:11.570 ","End":"01:14.000","Text":"Now, this entry first row,"},{"Start":"01:14.000 ","End":"01:17.810","Text":"first column comes from multiplying this with this,"},{"Start":"01:17.810 ","End":"01:21.750","Text":"0 times k plus 2 times 3,"},{"Start":"01:21.750 ","End":"01:27.975","Text":"and this comes from this times this, 2nd row with the 2nd column."},{"Start":"01:27.975 ","End":"01:34.490","Text":"That\u0027s minus 1 times 1 and minus 3, which is this. The trace of this has got to be 0."},{"Start":"01:34.490 ","End":"01:38.690","Text":"For the trace, we just add the 6 and the minus 4,"},{"Start":"01:38.690 ","End":"01:42.670","Text":"and this gives us 2 equals 0."},{"Start":"01:42.670 ","End":"01:45.385","Text":"That\u0027s not possible."},{"Start":"01:45.385 ","End":"01:49.700","Text":"This contradiction means that there are no values of k"},{"Start":"01:49.700 ","End":"01:53.880","Text":"for which the matrices are orthogonal, not possible."},{"Start":"01:53.880 ","End":"01:57.590","Text":"You see, the problem is that k dropped out from this line to this line,"},{"Start":"01:57.590 ","End":"02:02.180","Text":"k didn\u0027t appear and we got a contradictory equality."},{"Start":"02:02.180 ","End":"02:07.940","Text":"It can happen, no values of k make the 2 matrices A and B orthogonal,"},{"Start":"02:07.940 ","End":"02:09.660","Text":"and we\u0027re done."}],"ID":10185},{"Watched":false,"Name":"Exercise 7","Duration":"4m 48s","ChapterTopicVideoID":13520,"CourseChapterTopicPlaylistID":7311,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:02.550","Text":"In this exercise, we don\u0027t say 
explicitly,"},{"Start":"00:02.550 ","End":"00:07.080","Text":"but it\u0027s understood that u and v are 2 vectors in an inner product space."},{"Start":"00:07.080 ","End":"00:10.575","Text":"We have to prove that something,"},{"Start":"00:10.575 ","End":"00:12.930","Text":"if and only if something else,"},{"Start":"00:12.930 ","End":"00:15.150","Text":"this double arrow means if and only if."},{"Start":"00:15.150 ","End":"00:18.510","Text":"We have to show that the norm of u plus v equals the norm of u"},{"Start":"00:18.510 ","End":"00:23.295","Text":"minus v if and only if u is perpendicular to v,"},{"Start":"00:23.295 ","End":"00:25.695","Text":"or u and v are orthogonal."},{"Start":"00:25.695 ","End":"00:31.300","Text":"It turns out that there\u0027s a nice geometric interpretation in the plane in R^2."},{"Start":"00:31.300 ","End":"00:35.600","Text":"We start with the geometric interpretation."},{"Start":"00:35.600 ","End":"00:43.265","Text":"A parallelogram is a rectangle if and only if its diagonals have equal length."},{"Start":"00:43.265 ","End":"00:47.270","Text":"Now, if I have a parallelogram it\u0027s going to be a rectangle"},{"Start":"00:47.270 ","End":"00:52.615","Text":"if and only if this is 90 degrees."},{"Start":"00:52.615 ","End":"00:58.025","Text":"If we let this vector from here to here be u,"},{"Start":"00:58.025 ","End":"01:00.020","Text":"and this vector from here to here,"},{"Start":"01:00.020 ","End":"01:08.255","Text":"we\u0027ll call it v. Then u and v need to be orthogonal or perpendicular."},{"Start":"01:08.255 ","End":"01:12.170","Text":"This diagonal, if I make it into a vector,"},{"Start":"01:12.170 ","End":"01:17.160","Text":"corresponds to the vector u plus v,"},{"Start":"01:17.330 ","End":"01:21.130","Text":"all the way from here to here."},{"Start":"01:21.440 ","End":"01:25.805","Text":"The other diagonal from here to here,"},{"Start":"01:25.805 ","End":"01:35.700","Text":"this vector is u minus v. 
What this says is the norm is the length,"},{"Start":"01:35.700 ","End":"01:38.780","Text":"the length of u plus v is this diagonal."},{"Start":"01:38.780 ","End":"01:41.735","Text":"Length of u minus v is this diagonal,"},{"Start":"01:41.735 ","End":"01:46.880","Text":"and the lengths are equal if and only if there\u0027s a 90 degree angle here,"},{"Start":"01:46.880 ","End":"01:48.710","Text":"which means that the parallelogram is"},{"Start":"01:48.710 ","End":"01:52.880","Text":"a rectangle and the other way around because in the rectangle,"},{"Start":"01:52.880 ","End":"01:56.620","Text":"the diagonals are certainly equal."},{"Start":"01:56.620 ","End":"01:59.290","Text":"The geometry is the less important part,"},{"Start":"01:59.290 ","End":"02:01.175","Text":"it just gives you a visual idea."},{"Start":"02:01.175 ","End":"02:05.090","Text":"Let\u0027s get to the proof and we\u0027re going to prove each part separately."},{"Start":"02:05.090 ","End":"02:07.970","Text":"First of all, from right to left,"},{"Start":"02:07.970 ","End":"02:11.255","Text":"that if u is perpendicular to v,"},{"Start":"02:11.255 ","End":"02:16.885","Text":"then these 2 vectors are equal in length or have equal norm."},{"Start":"02:16.885 ","End":"02:20.045","Text":"Since u is perpendicular to v,"},{"Start":"02:20.045 ","End":"02:25.520","Text":"then the inner product of u with v, same thing as v with u, is 0."},{"Start":"02:25.520 ","End":"02:30.770","Text":"Now the idea is to compute this and to compute this and see that it comes out the same."},{"Start":"02:30.770 ","End":"02:32.885","Text":"There was this and this."},{"Start":"02:32.885 ","End":"02:36.170","Text":"This 1 is by the definition of the norm,"},{"Start":"02:36.170 ","End":"02:40.070","Text":"the square root of inner product of u plus v with itself."},{"Start":"02:40.070 ","End":"02:44.600","Text":"Next, we expand the inner product using linearity,"},{"Start":"02:44.600 ","End":"02:48.830","Text":"so we get u with u, 
and u with v,"},{"Start":"02:48.830 ","End":"02:56.915","Text":"v with u and v with v. But this is 0 and this is 0 as we see here,"},{"Start":"02:56.915 ","End":"03:01.710","Text":"so what we get is just this expression."},{"Start":"03:01.880 ","End":"03:05.210","Text":"The other half is practically the same thing."},{"Start":"03:05.210 ","End":"03:07.865","Text":"We need the length of u minus v,"},{"Start":"03:07.865 ","End":"03:09.290","Text":"so it\u0027s just like here,"},{"Start":"03:09.290 ","End":"03:11.090","Text":"u minus v with u minus v,"},{"Start":"03:11.090 ","End":"03:15.995","Text":"we get this expression and these 2 come out to be 0 so we get this."},{"Start":"03:15.995 ","End":"03:20.570","Text":"Notice that this and this are exactly the same."},{"Start":"03:20.570 ","End":"03:27.650","Text":"This and this, which means that the norm of u plus v equals the norm of u minus v. That\u0027s just step 1,"},{"Start":"03:27.650 ","End":"03:30.665","Text":"now we need to prove the other direction."},{"Start":"03:30.665 ","End":"03:33.270","Text":"Get some space here."},{"Start":"03:35.300 ","End":"03:38.030","Text":"Step 2 is the other direction,"},{"Start":"03:38.030 ","End":"03:40.140","Text":"which means that we start with this,"},{"Start":"03:40.140 ","End":"03:43.860","Text":"and at the end we have to reach this."},{"Start":"03:44.210 ","End":"03:47.590","Text":"Here, I\u0027ve done 2 steps in 1."},{"Start":"03:47.590 ","End":"03:51.860","Text":"This is the square root of u plus v,"},{"Start":"03:51.860 ","End":"03:52.970","Text":"inner product with u plus v,"},{"Start":"03:52.970 ","End":"03:56.555","Text":"but we saw already before that it expands to this,"},{"Start":"03:56.555 ","End":"03:57.890","Text":"so I did that."},{"Start":"03:57.890 ","End":"04:04.575","Text":"Also, we already expanded u minus v with u minus v in the inner product to give us this."},{"Start":"04:04.575 ","End":"04:08.220","Text":"Now I just square both sides or throw away the square 
root,"},{"Start":"04:08.220 ","End":"04:10.075","Text":"and we\u0027ve got this."},{"Start":"04:10.075 ","End":"04:13.460","Text":"Notice that this cancels with this,"},{"Start":"04:13.460 ","End":"04:16.400","Text":"and this cancels with this."},{"Start":"04:16.400 ","End":"04:19.190","Text":"Also u, v equals v, u,"},{"Start":"04:19.190 ","End":"04:20.840","Text":"so here we have twice u,"},{"Start":"04:20.840 ","End":"04:23.905","Text":"v here we have minus twice u, v,"},{"Start":"04:23.905 ","End":"04:26.210","Text":"bring it all to the left-hand side and we have"},{"Start":"04:26.210 ","End":"04:29.600","Text":"4 times the inner product of u with v is 0,"},{"Start":"04:29.600 ","End":"04:31.565","Text":"and if 4 times it is 0,"},{"Start":"04:31.565 ","End":"04:33.830","Text":"then it itself is also 0."},{"Start":"04:33.830 ","End":"04:36.589","Text":"If u inner product with v is 0,"},{"Start":"04:36.589 ","End":"04:40.720","Text":"it means that u and v are orthogonal."},{"Start":"04:40.730 ","End":"04:44.990","Text":"This is the symbol for orthogonal, and that\u0027s it."},{"Start":"04:44.990 ","End":"04:46.310","Text":"We\u0027ve proved both directions,"},{"Start":"04:46.310 ","End":"04:48.630","Text":"so we are done."}],"ID":14160},{"Watched":false,"Name":"Exercise 8","Duration":"4m 4s","ChapterTopicVideoID":9712,"CourseChapterTopicPlaylistID":7311,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:03.060","Text":"In this exercise, we don\u0027t say explicitly,"},{"Start":"00:03.060 ","End":"00:07.230","Text":"we assume u and v are 2 vectors in an inner product space."},{"Start":"00:07.230 ","End":"00:11.280","Text":"What we have to prove is that the norm of u plus"},{"Start":"00:11.280 ","End":"00:15.420","Text":"v squared equals the norm of u squared plus the norm of v 
squared,"},{"Start":"00:15.420 ","End":"00:20.490","Text":"if and only if u is perpendicular to v or orthogonal to"},{"Start":"00:20.490 ","End":"00:26.625","Text":"v. It turns out there\u0027s also a geometric interpretation in the plane R^2."},{"Start":"00:26.625 ","End":"00:32.235","Text":"We\u0027ll start with the geometrical interpretation in the plane and"},{"Start":"00:32.235 ","End":"00:39.795","Text":"the inner product and R^2 is just the standard inner product, the dot product."},{"Start":"00:39.795 ","End":"00:44.660","Text":"The meaning of this is Pythagoras\u0027s theorem,"},{"Start":"00:44.660 ","End":"00:49.775","Text":"the famous 1, and also its inverse because it goes in both directions."},{"Start":"00:49.775 ","End":"00:51.380","Text":"I\u0027ll show you."},{"Start":"00:51.380 ","End":"00:58.045","Text":"Let\u0027s say this is u and this vector is v,"},{"Start":"00:58.045 ","End":"01:03.695","Text":"then this vector will be u plus v. Now,"},{"Start":"01:03.695 ","End":"01:09.065","Text":"Pythagoras\u0027s theorem says that the square of the hypotenuse,"},{"Start":"01:09.065 ","End":"01:11.790","Text":"which is the length of this squared,"},{"Start":"01:11.790 ","End":"01:12.910","Text":"the length is the norm,"},{"Start":"01:12.910 ","End":"01:16.339","Text":"so the norm of this squared equals"},{"Start":"01:16.339 ","End":"01:20.690","Text":"the length of this squared plus the length of this squared."},{"Start":"01:20.690 ","End":"01:24.050","Text":"But Pythagoras\u0027s theorem also has"},{"Start":"01:24.050 ","End":"01:28.430","Text":"an inverse that if I don\u0027t know if this is a right angle,"},{"Start":"01:28.430 ","End":"01:32.585","Text":"but if this squared equals this squared plus this squared,"},{"Start":"01:32.585 ","End":"01:35.450","Text":"then that guarantees that this is a right angle."},{"Start":"01:35.450 ","End":"01:38.035","Text":"It actually works both ways."},{"Start":"01:38.035 ","End":"01:42.170","Text":"The other way is 
the inverse Pythagoras\u0027s theorem."},{"Start":"01:42.170 ","End":"01:44.450","Text":"The geometry is less important,"},{"Start":"01:44.450 ","End":"01:45.830","Text":"the proof is more important."},{"Start":"01:45.830 ","End":"01:47.755","Text":"Let\u0027s get started."},{"Start":"01:47.755 ","End":"01:50.090","Text":"We\u0027ll break it up into 2 steps."},{"Start":"01:50.090 ","End":"01:51.320","Text":"This is an if and only if,"},{"Start":"01:51.320 ","End":"01:53.390","Text":"so we\u0027ll prove each direction separately."},{"Start":"01:53.390 ","End":"01:58.369","Text":"First of all, we\u0027ll assume that u and v are orthogonal,"},{"Start":"01:58.369 ","End":"02:05.945","Text":"meaning this is a 90 degrees and we\u0027ll prove that this equality holds."},{"Start":"02:05.945 ","End":"02:09.860","Text":"Now because u is perpendicular to v,"},{"Start":"02:09.860 ","End":"02:14.825","Text":"the inner product of u with v or with u, both are 0."},{"Start":"02:14.825 ","End":"02:18.410","Text":"The strategy is to evaluate this and"},{"Start":"02:18.410 ","End":"02:22.270","Text":"then to evaluate this and see that we get the same thing."},{"Start":"02:22.270 ","End":"02:28.760","Text":"The left-hand side, u plus v squared in a product of u plus v with itself."},{"Start":"02:28.760 ","End":"02:30.680","Text":"From linearity, we get this."},{"Start":"02:30.680 ","End":"02:33.215","Text":"We\u0027ve done this many times before."},{"Start":"02:33.215 ","End":"02:35.000","Text":"Now if you look here,"},{"Start":"02:35.000 ","End":"02:37.250","Text":"we know that these are 0,"},{"Start":"02:37.250 ","End":"02:39.745","Text":"this 1, and this 1."},{"Start":"02:39.745 ","End":"02:42.620","Text":"We get this and to finish off,"},{"Start":"02:42.620 ","End":"02:47.210","Text":"I just have to show that this here is equal to this,"},{"Start":"02:47.210 ","End":"02:54.710","Text":"but that\u0027s clear because the norm of u squared is inner product of u with 
u,"},{"Start":"02:54.710 ","End":"03:00.020","Text":"and similarly this v with v is norm of v squared so that gives us"},{"Start":"03:00.020 ","End":"03:07.545","Text":"this and that\u0027s the first part proven and now we\u0027ll go to step number 2."},{"Start":"03:07.545 ","End":"03:09.070","Text":"Step 2 is the inverse,"},{"Start":"03:09.070 ","End":"03:12.530","Text":"where we start off with this equality and we have"},{"Start":"03:12.530 ","End":"03:16.820","Text":"to show from this that u and v are perpendicular."},{"Start":"03:16.820 ","End":"03:21.620","Text":"Now the norm of u plus v squared is equal to this."},{"Start":"03:21.620 ","End":"03:26.140","Text":"I skipped a step or 2 because we\u0027ve done this already many times."},{"Start":"03:26.140 ","End":"03:30.540","Text":"Also this plus this is going to equal u,"},{"Start":"03:30.540 ","End":"03:31.650","Text":"v plus v,"},{"Start":"03:31.650 ","End":"03:38.690","Text":"v. What we end up with u plus v squared is this,"},{"Start":"03:38.690 ","End":"03:40.370","Text":"is equal to this plus this,"},{"Start":"03:40.370 ","End":"03:42.320","Text":"which is this plus this."},{"Start":"03:42.320 ","End":"03:46.620","Text":"Now we have this and we can cancel u,"},{"Start":"03:46.620 ","End":"03:48.555","Text":"u with u, u,"},{"Start":"03:48.555 ","End":"03:50.325","Text":"v, v with v,"},{"Start":"03:50.325 ","End":"03:55.130","Text":"v, so twice inner product with u and v is 0,"},{"Start":"03:55.130 ","End":"03:58.070","Text":"which means that u with v is 0,"},{"Start":"03:58.070 ","End":"04:01.070","Text":"and so u and v are perpendicular."},{"Start":"04:01.070 ","End":"04:04.650","Text":"That\u0027s the other direction and we are done."}],"ID":10186},{"Watched":false,"Name":"Exercise 9","Duration":"3m 
11s","ChapterTopicVideoID":9713,"CourseChapterTopicPlaylistID":7311,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:02.190","Text":"In this exercise,"},{"Start":"00:02.190 ","End":"00:06.810","Text":"the u and v are 2 vectors in an inner product space."},{"Start":"00:06.810 ","End":"00:09.285","Text":"I didn\u0027t say that explicitly, we assume that."},{"Start":"00:09.285 ","End":"00:14.430","Text":"We\u0027re given that the norm"},{"Start":"00:14.430 ","End":"00:18.710","Text":"of u equals the norm of v or the length of u equals length of v. From this,"},{"Start":"00:18.710 ","End":"00:24.650","Text":"we have to prove that the vector u minus v is"},{"Start":"00:24.650 ","End":"00:27.500","Text":"perpendicular or orthogonal to u"},{"Start":"00:27.500 ","End":"00:31.130","Text":"plus v. Then we\u0027ll just have to prove that if this is true,"},{"Start":"00:31.130 ","End":"00:32.495","Text":"then this is true."},{"Start":"00:32.495 ","End":"00:39.685","Text":"It turns out there\u0027s also a geometric interpretation in the Euclidean plane too."},{"Start":"00:39.685 ","End":"00:42.904","Text":"We\u0027ll do the geometric part first."},{"Start":"00:42.904 ","End":"00:49.310","Text":"What this actually means in the plane with the standard inner product,"},{"Start":"00:49.310 ","End":"00:54.020","Text":"is that the diagonals in a rhombus are mutually perpendicular."},{"Start":"00:54.020 ","End":"00:56.210","Text":"If you don\u0027t remember what a rhombus is,"},{"Start":"00:56.210 ","End":"01:00.685","Text":"it\u0027s a quadrilateral where all 4 sides are equal in length,"},{"Start":"01:00.685 ","End":"01:02.210","Text":"this, this, and this,"},{"Start":"01:02.210 ","End":"01:04.715","Text":"and this and this are all equal in length."},{"Start":"01:04.715 
","End":"01:15.070","Text":"Let\u0027s call this 1 u and let\u0027s call this v. A rhombus is a parallelogram,"},{"Start":"01:15.070 ","End":"01:17.200","Text":"so this is also u,"},{"Start":"01:17.200 ","End":"01:22.260","Text":"and this is also v. If you think about it,"},{"Start":"01:22.260 ","End":"01:27.960","Text":"this will be v plus u or"},{"Start":"01:27.960 ","End":"01:34.210","Text":"u plus v. This vector here,"},{"Start":"01:34.210 ","End":"01:44.680","Text":"the other diagonal is actually u minus v. Let\u0027s see how this says what I wrote here."},{"Start":"01:44.680 ","End":"01:49.920","Text":"A rhombus means that u and v have equal length,"},{"Start":"01:49.920 ","End":"01:52.550","Text":"norm of u equals norm of v,"},{"Start":"01:52.550 ","End":"01:55.520","Text":"because then everything will be equal because this is also"},{"Start":"01:55.520 ","End":"01:58.520","Text":"u and this is also v. If this is true,"},{"Start":"01:58.520 ","End":"02:01.080","Text":"then we have a rhombus."},{"Start":"02:01.960 ","End":"02:04.930","Text":"These are the diagonals,"},{"Start":"02:04.930 ","End":"02:07.620","Text":"u minus v and u plus v,"},{"Start":"02:07.620 ","End":"02:10.740","Text":"and this says that they are perpendicular."},{"Start":"02:10.740 ","End":"02:14.749","Text":"That\u0027s the geometry and that\u0027s actually less important."},{"Start":"02:14.749 ","End":"02:16.820","Text":"We need to do the proof,"},{"Start":"02:16.820 ","End":"02:19.795","Text":"the formal proof, that\u0027s the more important part."},{"Start":"02:19.795 ","End":"02:23.900","Text":"The strategy will be to show that the inner product of u"},{"Start":"02:23.900 ","End":"02:29.460","Text":"minus v with u plus v is 0 because that will mean that they\u0027re perpendicular."},{"Start":"02:29.560 ","End":"02:35.365","Text":"Start off with this expression and see if we can get to 0 at the end."},{"Start":"02:35.365 ","End":"02:43.360","Text":"Expanding by linearity, we get this and 
then we get this because this and this cancel,"},{"Start":"02:43.360 ","End":"02:46.075","Text":"they\u0027re the same by symmetry."},{"Start":"02:46.075 ","End":"02:49.735","Text":"Now, this is the norm of u squared."},{"Start":"02:49.735 ","End":"02:52.945","Text":"This is the norm of v squared,"},{"Start":"02:52.945 ","End":"02:57.680","Text":"but the norm of u is equal to the norm of v from here,"},{"Start":"02:57.680 ","End":"03:02.710","Text":"so the squares are also equal and so we end up with 0,"},{"Start":"03:02.710 ","End":"03:11.840","Text":"which means that these 2 vectors or these 2 are indeed orthogonal. We\u0027re done."}],"ID":10187}],"Thumbnail":null,"ID":7311},{"Name":"Orthogonal Complement","TopicPlaylistFirstVideoID":0,"Duration":null,"Videos":[{"Watched":false,"Name":"Lesson 1 - Orthogonal Complement","Duration":"8m 45s","ChapterTopicVideoID":10027,"CourseChapterTopicPlaylistID":7312,"HasSubtitles":true,"ThumbnailPath":"https://www.proprep.uk/Images/Videos_Thumbnails/10027.jpeg","UploadDate":"2017-08-16T11:28:55.2430000","DurationForVideoObject":"PT8M45S","Description":null,"MetaTitle":"Lesson 1 - Orthogonal Complement: Video + Workbook | Proprep","MetaDescription":"Inner Product Spaces - Orthogonal Complement. Watch the video made by an expert in the field. 
Download the workbook and maximize your learning.","Canonical":"https://www.proprep.uk/general-modules/all/linear-algebra/inner-product-spaces/orthogonal-complement/vid10154","VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:06.945","Text":"In this clip, our real aim is something called orthogonal complement."},{"Start":"00:06.945 ","End":"00:13.185","Text":"But we need to complete some theory before we can get there."},{"Start":"00:13.185 ","End":"00:20.535","Text":"We have to learn what is a sum of subspaces and what is a direct sum of subspaces?"},{"Start":"00:20.535 ","End":"00:23.190","Text":"For the moment, we\u0027ll just stick to"},{"Start":"00:23.190 ","End":"00:28.470","Text":"vector spaces and not necessarily inner product spaces."},{"Start":"00:28.470 ","End":"00:32.550","Text":"There\u0027s going to be several definitions and propositions."},{"Start":"00:32.550 ","End":"00:33.960","Text":"Then in all of them,"},{"Start":"00:33.960 ","End":"00:40.140","Text":"I\u0027m going to assume a vector space V and U and W, 2 subspaces."},{"Start":"00:40.140 ","End":"00:47.180","Text":"The first definition will be the sum of 2 subspaces U and W."},{"Start":"00:47.180 ","End":"00:50.210","Text":"U plus W,"},{"Start":"00:50.210 ","End":"00:54.440","Text":"that\u0027s how it\u0027s written, is defined"},{"Start":"00:54.440 ","End":"01:00.520","Text":"to be the set of all little u plus little w,"},{"Start":"01:00.520 ","End":"01:04.700","Text":"where u is in U and w is in W. In other words,"},{"Start":"01:04.700 ","End":"01:10.400","Text":"it\u0027s all the sums of 1 from here and 1 from here added together."},{"Start":"01:10.400 ","End":"01:13.660","Text":"Here that is in writing."},{"Start":"01:13.660 ","End":"01:16.250","Text":"Now, from the definition,"},{"Start":"01:16.250 ","End":"01:21.050","Text":"you could just say that it\u0027s a subset of V. 
It turns out that if you"},{"Start":"01:21.050 ","End":"01:25.990","Text":"take all possible sums of an element from U and an element from W,"},{"Start":"01:25.990 ","End":"01:32.060","Text":"that the result is actually a subspace of V. It\u0027s not difficult to prove,"},{"Start":"01:32.060 ","End":"01:34.280","Text":"but I\u0027m not going to spend the time doing it,"},{"Start":"01:34.280 ","End":"01:40.380","Text":"we\u0027ll just accept that the sum of 2 subspaces is also a subspace."},{"Start":"01:40.380 ","End":"01:45.080","Text":"Similar to this, instead of taking the sum,"},{"Start":"01:45.080 ","End":"01:49.340","Text":"if we take the intersection of 2 subspaces,"},{"Start":"01:49.340 ","End":"01:52.075","Text":"that also gives a subspace."},{"Start":"01:52.075 ","End":"01:59.000","Text":"These are the 2 basic operations that we can do on 2 subspaces."},{"Start":"01:59.000 ","End":"02:02.155","Text":"We can take the sum, we can take the intersection."},{"Start":"02:02.155 ","End":"02:05.100","Text":"Now here\u0027s the claim."},{"Start":"02:05.100 ","End":"02:09.180","Text":"If S_1 spans U,"},{"Start":"02:09.180 ","End":"02:11.820","Text":"S_1 is a spanning set for U."},{"Start":"02:11.820 ","End":"02:18.270","Text":"Similarly, S_2 is a spanning set for W. 
Then U"},{"Start":"02:18.270 ","End":"02:24.720","Text":"plus W is spanned by the union S_1 with S_2."},{"Start":"02:24.720 ","End":"02:26.910","Text":"Union putting them together."},{"Start":"02:26.910 ","End":"02:29.540","Text":"I\u0027ll probably use it later on."},{"Start":"02:29.540 ","End":"02:33.575","Text":"Now, let me get to another definition."},{"Start":"02:33.575 ","End":"02:37.595","Text":"There is something called the direct sum."},{"Start":"02:37.595 ","End":"02:41.465","Text":"Say the 2 subspaces U and W"},{"Start":"02:41.465 ","End":"02:48.170","Text":"happen to be such that their intersection is just the 0 vector."},{"Start":"02:48.170 ","End":"02:58.480","Text":"The set containing just the 0 vector is the trivial subspace or null subspace."},{"Start":"02:58.480 ","End":"03:00.330","Text":"It\u0027s not the empty set."},{"Start":"03:00.330 ","End":"03:03.930","Text":"It\u0027s the set that contains just 1 element, just the 0."},{"Start":"03:03.930 ","End":"03:05.525","Text":"It\u0027s a subspace."},{"Start":"03:05.525 ","End":"03:07.849","Text":"It\u0027s the smallest possible subspace."},{"Start":"03:07.849 ","End":"03:12.530","Text":"If U intersection W is just the 0,"},{"Start":"03:12.530 ","End":"03:17.490","Text":"then the sum of U and W is called"},{"Start":"03:17.490 ","End":"03:23.630","Text":"the direct sum of U and W. 
We write it with a different plus."},{"Start":"03:23.630 ","End":"03:25.990","Text":"You could still write it with the old plus."},{"Start":"03:25.990 ","End":"03:27.160","Text":"It still is a sum,"},{"Start":"03:27.160 ","End":"03:30.370","Text":"but it\u0027s a special sum, direct sum."},{"Start":"03:30.370 ","End":"03:35.090","Text":"We emphasize that by putting a plus in a circle."},{"Start":"03:35.480 ","End":"03:38.370","Text":"That\u0027s all it is."},{"Start":"03:38.370 ","End":"03:48.235","Text":"Now, the next claim is assuming that U intersection W is just the trivial subspace."},{"Start":"03:48.235 ","End":"03:52.255","Text":"This is going to be similar to this claim."},{"Start":"03:52.255 ","End":"04:01.395","Text":"This 1 says that if I have a basis for U and I have a basis for W,"},{"Start":"04:01.395 ","End":"04:07.850","Text":"then if I take the union of the 2 bases that will be a basis"},{"Start":"04:07.850 ","End":"04:14.935","Text":"of the direct sum of U plus W. I can write direct sum because of this assumption."},{"Start":"04:14.935 ","End":"04:18.570","Text":"Now, actually, it\u0027s more than that."},{"Start":"04:18.570 ","End":"04:22.520","Text":"It turns out that this union is a disjoint union,"},{"Start":"04:22.520 ","End":"04:25.685","Text":"sometimes written with a dot in the union,"},{"Start":"04:25.685 ","End":"04:28.520","Text":"which means that they won\u0027t have any overlap."},{"Start":"04:28.520 ","End":"04:31.010","Text":"It\u0027s called a disjoint union."},{"Start":"04:31.010 ","End":"04:33.500","Text":"But don\u0027t worry about that."},{"Start":"04:33.500 ","End":"04:35.720","Text":"If it\u0027s a regular sum and we have"},{"Start":"04:35.720 ","End":"04:39.185","Text":"a spanning set for 1 and a spanning set for the other,"},{"Start":"04:39.185 ","End":"04:43.430","Text":"we take the union of the spanning sets and in the case of direct sum,"},{"Start":"04:43.430 ","End":"04:44.900","Text":"we can take a basis for 
1,"},{"Start":"04:44.900 ","End":"04:46.145","Text":"a basis for the other,"},{"Start":"04:46.145 ","End":"04:48.245","Text":"and take the union of the bases,"},{"Start":"04:48.245 ","End":"04:51.210","Text":"to get a basis for the direct sum."},{"Start":"04:52.670 ","End":"04:57.610","Text":"One more claim before we get into the subject proper."},{"Start":"04:57.610 ","End":"05:03.650","Text":"Again, assuming that U and W have a trivial intersection,"},{"Start":"05:03.650 ","End":"05:08.030","Text":"then the dimension of the direct sum is"},{"Start":"05:08.030 ","End":"05:14.560","Text":"the dimension of 1 of them plus the dimension of the other, as written here."},{"Start":"05:14.560 ","End":"05:19.130","Text":"Now, we come to the real subject of the clip,"},{"Start":"05:19.130 ","End":"05:24.070","Text":"which is the concept of orthogonal complement of a subspace."},{"Start":"05:24.070 ","End":"05:28.220","Text":"This time, we\u0027re not just in any old vector space,"},{"Start":"05:28.220 ","End":"05:30.380","Text":"we\u0027re in an inner product space."},{"Start":"05:30.380 ","End":"05:33.050","Text":"Recall that means a vector space with"},{"Start":"05:33.050 ","End":"05:37.570","Text":"an inner product which has to satisfy certain conditions."},{"Start":"05:37.570 ","End":"05:43.490","Text":"I\u0027m going to define orthogonal complement of a subspace W. 
First"},{"Start":"05:43.490 ","End":"05:48.530","Text":"of all the way it\u0027s written is W with this sign,"},{"Start":"05:48.530 ","End":"05:52.715","Text":"which is the perpendicularity sign from geometry."},{"Start":"05:52.715 ","End":"06:01.075","Text":"Actually we pronounce this as W perp, perp as in perpendicular."},{"Start":"06:01.075 ","End":"06:05.660","Text":"Or you can just say it in full: W orthogonal complement."},{"Start":"06:05.660 ","End":"06:11.975","Text":"Anyway, it\u0027s defined as all the vectors in V which"},{"Start":"06:11.975 ","End":"06:18.770","Text":"are orthogonal or perpendicular to the subspace W. Now,"},{"Start":"06:18.770 ","End":"06:25.790","Text":"I don\u0027t believe we discussed the concept of a vector perpendicular to a subspace."},{"Start":"06:25.790 ","End":"06:31.700","Text":"What this means is that v is orthogonal to"},{"Start":"06:31.700 ","End":"06:37.560","Text":"every vector w in big W. If v is perpendicular to w,"},{"Start":"06:37.560 ","End":"06:39.930","Text":"for every w in big W,"},{"Start":"06:39.930 ","End":"06:44.705","Text":"then we say that v is perpendicular or orthogonal to the whole subspace."},{"Start":"06:44.705 ","End":"06:49.280","Text":"Make sense? 
Rephrasing again,"},{"Start":"06:49.280 ","End":"06:55.190","Text":"W perp is the set of all vectors in V that are orthogonal"},{"Start":"06:55.190 ","End":"06:57.020","Text":"to all vectors in W."},{"Start":"06:57.020 ","End":"07:07.355","Text":"This is a repetition of what I said earlier that usually we pronounce this as W perp."},{"Start":"07:07.355 ","End":"07:11.855","Text":"We could say that this is the perp of W,"},{"Start":"07:11.855 ","End":"07:18.125","Text":"with perp being short for perpendicular complement or orthogonal complement."},{"Start":"07:18.125 ","End":"07:21.320","Text":"That\u0027s just pronunciation."},{"Start":"07:21.320 ","End":"07:24.080","Text":"Now, an important proposition,"},{"Start":"07:24.080 ","End":"07:29.305","Text":"W perp is actually a subspace of V,"},{"Start":"07:29.305 ","End":"07:32.560","Text":"satisfies all the axioms for a subspace."},{"Start":"07:32.560 ","End":"07:39.625","Text":"Not only that, but the intersection of W with W perp is the trivial subspace,"},{"Start":"07:39.625 ","End":"07:46.810","Text":"meaning the only vector that\u0027s common to both is the 0 vector."},{"Start":"07:46.810 ","End":"07:49.300","Text":"This is also easy to prove."},{"Start":"07:49.300 ","End":"07:51.320","Text":"We\u0027re not going to prove it."},{"Start":"07:51.320 ","End":"08:00.140","Text":"The last thing in this clip is a theorem called the orthogonal decomposition theorem."},{"Start":"08:00.140 ","End":"08:10.700","Text":"That says that the vector space V is the direct sum of W and W perp."},{"Start":"08:11.860 ","End":"08:20.720","Text":"We already know that W intersection W perp is just the 0."},{"Start":"08:20.720 ","End":"08:22.910","Text":"If we were going to prove this,"},{"Start":"08:22.910 ","End":"08:28.910","Text":"all we\u0027d have to show is that the sum of W and W perp is V,"},{"Start":"08:28.910 ","End":"08:35.340","Text":"that every vector can be written as something in W plus something in W 
perp."},{"Start":"08:35.340 ","End":"08:43.100","Text":"Anyway, we\u0027ll see it a lot in the exercises."},{"Start":"08:43.170 ","End":"08:46.340","Text":"That\u0027s it for now."}],"ID":10154},{"Watched":false,"Name":"Exercise 1","Duration":"8m 34s","ChapterTopicVideoID":10036,"CourseChapterTopicPlaylistID":7312,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:07.875","Text":"In this exercise, we\u0027re given W is the span of these 2 vectors."},{"Start":"00:07.875 ","End":"00:10.500","Text":"We are in the space R^4,"},{"Start":"00:10.500 ","End":"00:12.630","Text":"and I mean the inner product space because in"},{"Start":"00:12.630 ","End":"00:15.885","Text":"this chapter we\u0027re not just dealing with vector spaces."},{"Start":"00:15.885 ","End":"00:17.640","Text":"Usually, I won\u0027t say,"},{"Start":"00:17.640 ","End":"00:20.040","Text":"and if I don\u0027t say it means the standard inner product,"},{"Start":"00:20.040 ","End":"00:22.110","Text":"which is the dot product."},{"Start":"00:22.110 ","End":"00:24.405","Text":"There\u0027s 2 questions."},{"Start":"00:24.405 ","End":"00:29.610","Text":"1 is to find a basis for W perp,"},{"Start":"00:29.610 ","End":"00:34.500","Text":"that\u0027s the orthogonal complement and its dimension."},{"Start":"00:34.500 ","End":"00:36.630","Text":"In the second part,"},{"Start":"00:36.630 ","End":"00:39.480","Text":"to show that the result from the first part"},{"Start":"00:39.480 ","End":"00:44.665","Text":"confirms the orthogonal decomposition theorem."},{"Start":"00:44.665 ","End":"00:48.740","Text":"Now, a vector say x, y, z, t,"},{"Start":"00:48.740 ","End":"00:53.270","Text":"will be in W perp if and only if it\u0027s"},{"Start":"00:53.270 ","End":"00:58.010","Text":"orthogonal to all the vectors in W. 
But we don\u0027t need"},{"Start":"00:58.010 ","End":"01:02.630","Text":"to check it against all the vectors in W. It\u0027s enough just to check against these 2"},{"Start":"01:02.630 ","End":"01:07.340","Text":"because they span W. If something is perpendicular to these 2,"},{"Start":"01:07.340 ","End":"01:10.130","Text":"it will be perpendicular to any linear combination,"},{"Start":"01:10.130 ","End":"01:16.759","Text":"i.e., to all of W. Let\u0027s write down those 2 conditions using set theory notation."},{"Start":"01:16.759 ","End":"01:19.490","Text":"W perp is the set of all x, y, z, t,"},{"Start":"01:19.490 ","End":"01:22.925","Text":"such that something and something."},{"Start":"01:22.925 ","End":"01:26.240","Text":"This says that it\u0027s perpendicular to"},{"Start":"01:26.240 ","End":"01:31.580","Text":"this vector because the inner product is the dot product equals 0."},{"Start":"01:31.580 ","End":"01:33.275","Text":"That\u0027s the condition."},{"Start":"01:33.275 ","End":"01:37.445","Text":"To be perpendicular to this 1 means a dot product with this,"},{"Start":"01:37.445 ","End":"01:41.169","Text":"which is the inner product, is 0 also."},{"Start":"01:41.169 ","End":"01:44.965","Text":"Now, this dot product,"},{"Start":"01:44.965 ","End":"01:49.880","Text":"you just take the first with the first and multiply the second with the second,"},{"Start":"01:49.880 ","End":"01:51.980","Text":"the third with the third, and so on."},{"Start":"01:51.980 ","End":"01:55.535","Text":"Then we add them together so it\u0027s 1 times x plus 2 times y."},{"Start":"01:55.535 ","End":"01:58.380","Text":"What we get is this."},{"Start":"01:58.610 ","End":"02:05.115","Text":"Similarly this dot product is 2x and so on gives us this."},{"Start":"02:05.115 ","End":"02:09.300","Text":"What we have is a system of linear equations,"},{"Start":"02:09.300 ","End":"02:13.165","Text":"the 2 equations, 4 unknowns."},{"Start":"02:13.165 ","End":"02:15.619","Text":"We could solve it with 
matrices."},{"Start":"02:15.619 ","End":"02:17.285","Text":"I don\u0027t think there\u0027s any need to."},{"Start":"02:17.285 ","End":"02:24.275","Text":"We can bring it to row echelon form in the equation form."},{"Start":"02:24.275 ","End":"02:28.070","Text":"What I do is subtract twice the first row"},{"Start":"02:28.070 ","End":"02:32.265","Text":"from the second row and that will give us this,"},{"Start":"02:32.265 ","End":"02:34.080","Text":"2x minus 2x is nothing,"},{"Start":"02:34.080 ","End":"02:37.005","Text":"5y minus 4y is y and so on."},{"Start":"02:37.005 ","End":"02:39.540","Text":"This is in echelon form already."},{"Start":"02:39.540 ","End":"02:42.800","Text":"We can see that z and t are"},{"Start":"02:42.800 ","End":"02:50.970","Text":"the free variables and the dependent, constrained variables are x and y."},{"Start":"02:51.310 ","End":"02:58.445","Text":"Let me also write that z and t are the free variables."},{"Start":"02:58.445 ","End":"03:02.330","Text":"Now we\u0027re going to use a technique of the wandering 1s,"},{"Start":"03:02.330 ","End":"03:08.840","Text":"I called it, where each time we let 1 of the free variables be 1 and the other 0."},{"Start":"03:08.840 ","End":"03:11.830","Text":"I wrote 2 and 1 so you can see the pattern."},{"Start":"03:11.830 ","End":"03:14.375","Text":"We 1 time let t equals 1,"},{"Start":"03:14.375 ","End":"03:17.990","Text":"z equals 0, and then t equals 0, z equals 1."},{"Start":"03:17.990 ","End":"03:23.570","Text":"X and y are dependent and they will be computed from t and z, as we\u0027ll see now."},{"Start":"03:23.570 ","End":"03:25.990","Text":"If t is 1 and z is 0,"},{"Start":"03:25.990 ","End":"03:28.335","Text":"then from this equation,"},{"Start":"03:28.335 ","End":"03:32.555","Text":"the 5z disappears and we get that y equals 1."},{"Start":"03:32.555 ","End":"03:33.800","Text":"Once y equals 1,"},{"Start":"03:33.800 ","End":"03:34.850","Text":"you plug it into here."},{"Start":"03:34.850 
","End":"03:36.649","Text":"I won\u0027t do the computations."},{"Start":"03:36.649 ","End":"03:38.525","Text":"We get x equals minus 3."},{"Start":"03:38.525 ","End":"03:43.430","Text":"Similarly, if we let t equals 0 and z equals 1,"},{"Start":"03:43.430 ","End":"03:47.120","Text":"then from here we get y is minus 5."},{"Start":"03:47.120 ","End":"03:49.580","Text":"Then from the first equation,"},{"Start":"03:49.580 ","End":"03:52.570","Text":"check it, you get x equals 11."},{"Start":"03:52.570 ","End":"04:00.410","Text":"I can use these to get a basis for the solution space from this row."},{"Start":"04:00.410 ","End":"04:02.600","Text":"If I put them in order x, y, z, t,"},{"Start":"04:02.600 ","End":"04:06.185","Text":"I get this vector minus 3, 1, 0, 1."},{"Start":"04:06.185 ","End":"04:08.600","Text":"From the second row here,"},{"Start":"04:08.600 ","End":"04:11.090","Text":"I get x is 11, y is minus 5."},{"Start":"04:11.090 ","End":"04:14.110","Text":"In short, we get this vector."},{"Start":"04:14.110 ","End":"04:21.400","Text":"W perp is spanned by these 2 vectors."},{"Start":"04:21.400 ","End":"04:26.120","Text":"More than that, this wandering ones technique gives us a basis."},{"Start":"04:26.120 ","End":"04:29.195","Text":"These 2 form a basis."},{"Start":"04:29.195 ","End":"04:34.765","Text":"We can say that the dimension of W perp is 2."},{"Start":"04:34.765 ","End":"04:38.890","Text":"Now we go on to part 2."},{"Start":"04:40.520 ","End":"04:42.605","Text":"Just going to scroll off."},{"Start":"04:42.605 ","End":"04:45.815","Text":"Let\u0027s just note that these 2 vectors,"},{"Start":"04:45.815 ","End":"04:51.975","Text":"were the vectors that spanned W. 
Remember these,"},{"Start":"04:51.975 ","End":"04:54.540","Text":"and now they\u0027ve disappeared."},{"Start":"04:54.540 ","End":"04:57.045","Text":"That\u0027s these 2 here,"},{"Start":"04:57.045 ","End":"04:59.025","Text":"this 1 and this 1."},{"Start":"04:59.025 ","End":"05:05.674","Text":"These span W. These 2 span W perp."},{"Start":"05:05.674 ","End":"05:09.080","Text":"If I take the sum of the 2 sub-spaces,"},{"Start":"05:09.080 ","End":"05:12.890","Text":"that\u0027s equal to the span of the union of the 2 sets,"},{"Start":"05:12.890 ","End":"05:15.980","Text":"this set of these 2 union the set of these 2,"},{"Start":"05:15.980 ","End":"05:23.335","Text":"which means all 4 of them will span W plus W perp."},{"Start":"05:23.335 ","End":"05:30.365","Text":"Our question is, does it equal all of R^4?"},{"Start":"05:30.365 ","End":"05:35.970","Text":"That\u0027s what the orthogonal decomposition theorem says."},{"Start":"05:36.280 ","End":"05:39.680","Text":"How would we show that they span all of R^4?"},{"Start":"05:39.680 ","End":"05:42.380","Text":"The strategy will be to show that these 4 are linearly"},{"Start":"05:42.380 ","End":"05:47.960","Text":"independent and any 4 linearly independent vectors in R^4 are a basis,"},{"Start":"05:47.960 ","End":"05:51.290","Text":"and therefore they span and all will be well."},{"Start":"05:51.290 ","End":"05:55.130","Text":"Now, how do we show they are linearly independent?"},{"Start":"05:55.130 ","End":"05:58.070","Text":"For that, we can use matrices."},{"Start":"05:58.070 ","End":"06:02.340","Text":"Put all 4 of them as the rows of the 4 by 4 matrix."},{"Start":"06:02.340 ","End":"06:05.670","Text":"This 1 is the first row, and so on."},{"Start":"06:05.670 ","End":"06:07.635","Text":"This 1 is the last row."},{"Start":"06:07.635 ","End":"06:13.750","Text":"Now we want to bring this to echelon form and see whether or not we get a row of 0s."},{"Start":"06:13.750 ","End":"06:15.910","Text":"I\u0027m not going into 
great detail."},{"Start":"06:15.910 ","End":"06:20.410","Text":"This is routine, but basically what I did first was make these 3 entries"},{"Start":"06:20.410 ","End":"06:25.380","Text":"0 by subtracting or adding multiples of the first row from the other 3."},{"Start":"06:25.380 ","End":"06:26.920","Text":"We get 0s here."},{"Start":"06:26.920 ","End":"06:30.825","Text":"You know this stuff pretty well by now."},{"Start":"06:30.825 ","End":"06:35.470","Text":"Then I got this by using this 1 here,"},{"Start":"06:35.470 ","End":"06:38.350","Text":"adding or subtracting multiples of the second row from"},{"Start":"06:38.350 ","End":"06:42.305","Text":"the 3rd and 4th to get the 0 here and here."},{"Start":"06:42.305 ","End":"06:48.130","Text":"Now all I want to do is get a 0 here and it\u0027ll be in echelon form."},{"Start":"06:48.130 ","End":"06:54.355","Text":"What I could do is take 147 times this row,"},{"Start":"06:54.355 ","End":"06:58.510","Text":"subtracted from 38 times this row."},{"Start":"06:58.510 ","End":"07:01.105","Text":"That\u0027ll give me a 0 here."},{"Start":"07:01.105 ","End":"07:06.115","Text":"But this lower right needs some computation."},{"Start":"07:06.115 ","End":"07:14.330","Text":"It\u0027s actually 38 times 38."},{"Start":"07:15.920 ","End":"07:18.250","Text":"Whoops, it was a minor typo."},{"Start":"07:18.250 ","End":"07:19.750","Text":"There is a minus missing here."},{"Start":"07:19.750 ","End":"07:25.155","Text":"It was 38 times this row,"},{"Start":"07:25.155 ","End":"07:29.630","Text":"minus 147 times this row or plus."},{"Start":"07:29.630 ","End":"07:33.049","Text":"Anyway, this is what it is. 
It\u0027s not 0."},{"Start":"07:33.049 ","End":"07:37.730","Text":"That\u0027s for sure because this is an even number and this is an odd number."},{"Start":"07:37.730 ","End":"07:39.575","Text":"I don\u0027t even have to compute it."},{"Start":"07:39.575 ","End":"07:42.005","Text":"Once I know this is not 0,"},{"Start":"07:42.005 ","End":"07:47.390","Text":"then I know that in echelon form there are no 0 rows"},{"Start":"07:47.390 ","End":"07:53.260","Text":"and therefore the 4 vectors here are linearly independent."},{"Start":"07:53.260 ","End":"07:55.520","Text":"Again here, I just repeated what I already said."},{"Start":"07:55.520 ","End":"07:58.460","Text":"These 4 are linearly independent and they\u0027re in R^4,"},{"Start":"07:58.460 ","End":"08:00.140","Text":"so they span R^4."},{"Start":"08:00.140 ","End":"08:04.250","Text":"That means that W plus W perp is R^4,"},{"Start":"08:04.250 ","End":"08:05.495","Text":"which is what we wanted."},{"Start":"08:05.495 ","End":"08:07.780","Text":"But to be precise,"},{"Start":"08:07.780 ","End":"08:13.970","Text":"the orthogonal decomposition theorem talks about direct sum."},{"Start":"08:13.970 ","End":"08:16.835","Text":"I can write instead of the sum,"},{"Start":"08:16.835 ","End":"08:19.670","Text":"direct sum because there\u0027s also,"},{"Start":"08:19.670 ","End":"08:26.390","Text":"we showed earlier that the intersection of a subspace with its perp is always just"},{"Start":"08:26.390 ","End":"08:34.290","Text":"the 0 vector, so that justifies writing it this way. 
We are done."}],"ID":10155},{"Watched":false,"Name":"Exercise 2","Duration":"4m 56s","ChapterTopicVideoID":10037,"CourseChapterTopicPlaylistID":7312,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.080 ","End":"00:03.930","Text":"In this exercise, we have W,"},{"Start":"00:03.930 ","End":"00:08.190","Text":"which is the span of this 1 vector 1, 1, 1"},{"Start":"00:08.190 ","End":"00:12.210","Text":"in R_3, and R_3 is an inner product space with"},{"Start":"00:12.210 ","End":"00:13.620","Text":"the standard inner product,"},{"Start":"00:13.620 ","End":"00:15.850","Text":"which is the dot-product."},{"Start":"00:16.040 ","End":"00:20.955","Text":"We have to find a basis for W perp,"},{"Start":"00:20.955 ","End":"00:24.615","Text":"the orthogonal complement, and also the dimension."},{"Start":"00:24.615 ","End":"00:29.010","Text":"Then we have to show that the result of 1 is"},{"Start":"00:29.010 ","End":"00:34.960","Text":"consistent or confirms the orthogonal decomposition theorem."},{"Start":"00:34.960 ","End":"00:38.230","Text":"Let\u0027s see if we can find W perp."},{"Start":"00:38.230 ","End":"00:43.880","Text":"What it means to be in W perp means to be orthogonal to all of W."},{"Start":"00:43.880 ","End":"00:49.729","Text":"But it\u0027s enough to show that it\u0027s orthogonal to this vector,"},{"Start":"00:49.729 ","End":"00:56.000","Text":"if it\u0027s orthogonal to a vector or a set of vectors it\u0027s also orthogonal to the span."},{"Start":"00:56.000 ","End":"00:59.790","Text":"All we have to check is to look for those x, y,"},{"Start":"00:59.790 ","End":"01:05.580","Text":"z which are orthogonal to 1, 1, 1, meaning that the inner product"},{"Start":"01:05.580 ","End":"01:09.390","Text":"with 1, 1, 1 is 0, but inner product is dot-product,"},{"Start":"01:09.390 ","End":"01:11.305","Text":"this is what 
we get."},{"Start":"01:11.305 ","End":"01:15.020","Text":"Now, this condition here with the dot product means that"},{"Start":"01:15.020 ","End":"01:18.740","Text":"1 times x plus 1 times y plus 1 times z is 0."},{"Start":"01:18.740 ","End":"01:20.870","Text":"In other words, x plus y plus z is 0."},{"Start":"01:20.870 ","End":"01:23.089","Text":"It\u0027s a system of linear equations,"},{"Start":"01:23.089 ","End":"01:25.765","Text":"1 equation in 3 unknowns."},{"Start":"01:25.765 ","End":"01:32.645","Text":"We see here that y and z are the free variables,"},{"Start":"01:32.645 ","End":"01:36.345","Text":"and x depends on y and z."},{"Start":"01:36.345 ","End":"01:42.040","Text":"Now we\u0027re going to use the technique that we call the wandering ones."},{"Start":"01:42.040 ","End":"01:44.630","Text":"Let\u0027s look at both of these simultaneously."},{"Start":"01:44.630 ","End":"01:50.345","Text":"In one case, I let z be 1 and the other free variables 0."},{"Start":"01:50.345 ","End":"01:53.470","Text":"In the other case, I let y be 1,"},{"Start":"01:53.470 ","End":"01:55.725","Text":"and z be 0."},{"Start":"01:55.725 ","End":"02:01.220","Text":"In each case I compute x from y and z by plugging into here."},{"Start":"02:01.220 ","End":"02:05.855","Text":"In each of the cases I get x equals minus 1,"},{"Start":"02:05.855 ","End":"02:07.100","Text":"because in both these cases,"},{"Start":"02:07.100 ","End":"02:08.795","Text":"z plus y is 1,"},{"Start":"02:08.795 ","End":"02:11.720","Text":"and therefore x is minus 1."},{"Start":"02:11.720 ","End":"02:15.140","Text":"The first row corresponds to x,"},{"Start":"02:15.140 ","End":"02:18.140","Text":"y, z equals minus 1, 0, 1."},{"Start":"02:18.140 ","End":"02:19.810","Text":"That\u0027s this vector here."},{"Start":"02:19.810 ","End":"02:23.150","Text":"The other one is minus 1, 1, 0,"},{"Start":"02:23.150 ","End":"02:25.265","Text":"and that\u0027s this one here."},{"Start":"02:25.265 
","End":"02:31.205","Text":"This is W perp and its dimension is 2,"},{"Start":"02:31.205 ","End":"02:35.660","Text":"because there\u0027s 2 elements here and this is a basis."},{"Start":"02:35.660 ","End":"02:37.190","Text":"When we do the wandering ones method,"},{"Start":"02:37.190 ","End":"02:39.240","Text":"we get a basis."},{"Start":"02:39.240 ","End":"02:43.475","Text":"Let\u0027s go on to part 2."},{"Start":"02:43.475 ","End":"02:51.755","Text":"What we want to show is that W plus W perp is really R_3."},{"Start":"02:51.755 ","End":"02:55.790","Text":"Afterwards we can put a circle around this plus and make it a direct sum."},{"Start":"02:55.790 ","End":"02:58.670","Text":"Now when you have a sum of 2 sub-spaces,"},{"Start":"02:58.670 ","End":"03:03.705","Text":"you simply take the union of their spanning sets,"},{"Start":"03:03.705 ","End":"03:06.975","Text":"W is spanned by this 1,"},{"Start":"03:06.975 ","End":"03:08.790","Text":"W perp, by these 2,"},{"Start":"03:08.790 ","End":"03:14.660","Text":"just throw them all together and we get W plus W perp spanned by these 3."},{"Start":"03:14.660 ","End":"03:17.545","Text":"The question is, is it all of R_3?"},{"Start":"03:17.545 ","End":"03:20.945","Text":"The strategy will be to show that these are"},{"Start":"03:20.945 ","End":"03:24.035","Text":"linearly independent and then we\u0027ll take it from there."},{"Start":"03:24.035 ","End":"03:29.405","Text":"Now, how do we show that those 3 vectors are linearly independent?"},{"Start":"03:29.405 ","End":"03:33.365","Text":"Simplest is probably to do it with a matrix."},{"Start":"03:33.365 ","End":"03:40.115","Text":"We put these 3 vectors as rows of a 3-by-3 matrix."},{"Start":"03:40.115 ","End":"03:46.040","Text":"We want to bring this to echelon form and then see if we get any rows of 0s or not."},{"Start":"03:46.040 ","End":"03:49.880","Text":"Add the first row to the second and to the third,"},{"Start":"03:49.880 ","End":"03:52.570","Text":"and then we 
get the 0s here."},{"Start":"03:52.570 ","End":"03:57.980","Text":"Then of course we subtract twice the middle row from the bottom row,"},{"Start":"03:57.980 ","End":"04:00.395","Text":"which will give us a 0 here."},{"Start":"04:00.395 ","End":"04:06.880","Text":"Now this is in echelon form and there are no rows of 0s."},{"Start":"04:06.880 ","End":"04:11.865","Text":"These 3 vectors are linearly independent."},{"Start":"04:11.865 ","End":"04:15.470","Text":"So we have 3 linearly independent vectors in R_3."},{"Start":"04:15.470 ","End":"04:18.500","Text":"We already know that in R_n,"},{"Start":"04:18.500 ","End":"04:20.630","Text":"if we have n linearly independent vectors,"},{"Start":"04:20.630 ","End":"04:23.845","Text":"then they span R_n; in our case n is 3."},{"Start":"04:23.845 ","End":"04:25.755","Text":"Just to be pedantic,"},{"Start":"04:25.755 ","End":"04:29.870","Text":"the orthogonal decomposition theorem talks about a direct sum."},{"Start":"04:29.870 ","End":"04:32.525","Text":"The direct sum is the same as the sum"},{"Start":"04:32.525 ","End":"04:39.690","Text":"because the intersection of these 2 is the trivial space,"},{"Start":"04:39.690 ","End":"04:45.575","Text":"just 0 and so we can say that the sum is the direct sum."},{"Start":"04:45.575 ","End":"04:49.670","Text":"The spanning shows that this plus this really is R_3."},{"Start":"04:49.670 ","End":"04:56.940","Text":"Yeah, we\u0027ve confirmed the orthogonal decomposition theorem in this case and we\u0027re done."}],"ID":10156},{"Watched":false,"Name":"Exercise 3","Duration":"5m 36s","ChapterTopicVideoID":10038,"CourseChapterTopicPlaylistID":7312,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.170","Text":"The previous exercises were about R_n."},{"Start":"00:04.170 
","End":"00:05.730","Text":"Let\u0027s do something different."},{"Start":"00:05.730 ","End":"00:07.770","Text":"We\u0027ll take the inner product space,"},{"Start":"00:07.770 ","End":"00:13.740","Text":"P_2 of R. If you don\u0027t remember what this is,"},{"Start":"00:13.740 ","End":"00:17.310","Text":"I\u0027ll remind you it\u0027s the polynomials of"},{"Start":"00:17.310 ","End":"00:21.495","Text":"degree less than or equal to 2 over the real numbers,"},{"Start":"00:21.495 ","End":"00:24.420","Text":"like real number coefficients."},{"Start":"00:24.420 ","End":"00:26.330","Text":"That\u0027s a vector space."},{"Start":"00:26.330 ","End":"00:30.365","Text":"I have to give you an inner product to get it to be an inner product space."},{"Start":"00:30.365 ","End":"00:34.940","Text":"We\u0027re going to borrow an inner product from this space,"},{"Start":"00:34.940 ","End":"00:38.750","Text":"continuous functions on the interval from 0-1."},{"Start":"00:38.750 ","End":"00:43.880","Text":"Certainly, every polynomial is continuous everywhere,"},{"Start":"00:43.880 ","End":"00:47.920","Text":"so polynomials are part of this space."},{"Start":"00:47.920 ","End":"00:54.470","Text":"The inner product defined here usually is the integral inner product,"},{"Start":"00:54.470 ","End":"00:56.990","Text":"and this is how it\u0027s described."},{"Start":"00:56.990 ","End":"01:04.030","Text":"The inner product is the integral from 0-1 of the product of the functions."},{"Start":"01:05.210 ","End":"01:11.900","Text":"Now, let W"},{"Start":"01:11.900 ","End":"01:16.885","Text":"be the span of a single polynomial x. I don\u0027t mean x as a variable."},{"Start":"01:16.885 ","End":"01:18.460","Text":"I mean it as a polynomial,"},{"Start":"01:18.460 ","End":"01:21.730","Text":"the polynomial p of x equals x."},{"Start":"01:22.070 ","End":"01:30.715","Text":"That\u0027s a subspace of P_2 of R. 
Now finally the question or the task,"},{"Start":"01:30.715 ","End":"01:37.460","Text":"find a basis for W perp and also its dimension."},{"Start":"01:37.740 ","End":"01:46.060","Text":"Now, a typical element in this space would be of this form,"},{"Start":"01:46.060 ","End":"01:50.115","Text":"a polynomial of the form a plus bx plus cx squared."},{"Start":"01:50.115 ","End":"01:54.965","Text":"Every polynomial of degree less than or equal to 2 is of this form."},{"Start":"01:54.965 ","End":"01:56.210","Text":"You could use different letters."},{"Start":"01:56.210 ","End":"02:03.320","Text":"I choose a, b, and c. Now in order for such a polynomial to be in W perp,"},{"Start":"02:03.320 ","End":"02:12.870","Text":"it has to be orthogonal to W. It\u0027s enough that it\u0027s orthogonal to the polynomial x."},{"Start":"02:12.870 ","End":"02:16.400","Text":"First of all, let\u0027s just write it in set theory notation."},{"Start":"02:16.400 ","End":"02:19.670","Text":"W perp is all the polynomials p of x,"},{"Start":"02:19.670 ","End":"02:21.950","Text":"which are of this form,"},{"Start":"02:21.950 ","End":"02:28.144","Text":"such that the inner product of p of x and x is 0."},{"Start":"02:28.144 ","End":"02:33.560","Text":"This just says that p of x is orthogonal to x."},{"Start":"02:33.560 ","End":"02:38.165","Text":"Now I\u0027m going to apply the definition of the inner product to this."},{"Start":"02:38.165 ","End":"02:40.445","Text":"The definition is written here."},{"Start":"02:40.445 ","End":"02:43.430","Text":"We take the integral from 0 to 1,"},{"Start":"02:43.430 ","End":"02:47.000","Text":"the first function which is p of x,"},{"Start":"02:47.000 ","End":"02:49.985","Text":"which is a plus bx plus cx squared."},{"Start":"02:49.985 ","End":"02:54.470","Text":"The second function is x, the integral dx."},{"Start":"02:54.470 ","End":"02:58.285","Text":"The condition is that this should equal 0."},{"Start":"02:58.285 ","End":"03:00.800","Text":"Let\u0027s do a 
bit of calculus here."},{"Start":"03:00.800 ","End":"03:02.330","Text":"First of all, the algebra part,"},{"Start":"03:02.330 ","End":"03:06.935","Text":"we\u0027ll multiply x by a plus bx plus cx squared,"},{"Start":"03:06.935 ","End":"03:11.020","Text":"and we get ax plus bx squared plus cx cubed."},{"Start":"03:11.020 ","End":"03:14.310","Text":"I\u0027m assuming you\u0027ve studied integration."},{"Start":"03:14.310 ","End":"03:16.660","Text":"Here are the a, b, c,"},{"Start":"03:16.660 ","End":"03:18.860","Text":"and the integral of x is x squared over 2,"},{"Start":"03:18.860 ","End":"03:23.430","Text":"x squared gives x cubed over 3, and so on."},{"Start":"03:23.430 ","End":"03:25.605","Text":"To do definite integrals,"},{"Start":"03:25.605 ","End":"03:30.620","Text":"we have to plug in the upper limit and the lower limit and subtract."},{"Start":"03:30.620 ","End":"03:32.980","Text":"If x is 0, we don\u0027t get anything."},{"Start":"03:32.980 ","End":"03:37.460","Text":"All we have to do is plug in x equals 1 to this expression."},{"Start":"03:37.460 ","End":"03:45.475","Text":"That gives us a/2 plus b/3 plus c/4 equals 0."},{"Start":"03:45.475 ","End":"03:47.930","Text":"Let\u0027s get rid of the fractions,"},{"Start":"03:47.930 ","End":"03:51.800","Text":"the denominator, the common denominator would be 12."},{"Start":"03:51.800 ","End":"03:54.080","Text":"Multiply everything by 12,"},{"Start":"03:54.080 ","End":"03:55.640","Text":"and this is what we get."},{"Start":"03:55.640 ","End":"03:57.920","Text":"This is the condition on a, b, c."},{"Start":"03:57.920 ","End":"04:03.970","Text":"In order for a polynomial to be in W perp."},{"Start":"04:03.970 ","End":"04:07.290","Text":"1 equation in 3 unknowns,"},{"Start":"04:07.290 ","End":"04:14.480","Text":"b and c are the free variables and a depends on"},{"Start":"04:14.480 ","End":"04:18.080","Text":"b and c. 
We\u0027ll use what we call the"},{"Start":"04:18.080 ","End":"04:23.015","Text":"wandering ones method to get a basis for the solution space."},{"Start":"04:23.015 ","End":"04:27.950","Text":"Each of the free variables in turn gets to be 1 and the other 0."},{"Start":"04:27.950 ","End":"04:31.955","Text":"First of all, let\u0027s try b equals 1 and c equals 0."},{"Start":"04:31.955 ","End":"04:33.320","Text":"If you plug that in,"},{"Start":"04:33.320 ","End":"04:36.610","Text":"the answer is minus 2/3, check it."},{"Start":"04:37.070 ","End":"04:41.595","Text":"Then the other way around c will be 1 and b will be 0."},{"Start":"04:41.595 ","End":"04:43.670","Text":"If you plug that in here,"},{"Start":"04:43.670 ","End":"04:48.310","Text":"check it, you get that a equals minus 1/2."},{"Start":"04:48.310 ","End":"04:54.785","Text":"Now, each of these rows gives us a polynomial."},{"Start":"04:54.785 ","End":"04:58.700","Text":"Just look inside the curly brackets for the moment."},{"Start":"04:58.700 ","End":"05:03.110","Text":"The first row, if we take it in the order a, b, c,"},{"Start":"05:03.110 ","End":"05:08.360","Text":"corresponds to minus 2/3 plus x."},{"Start":"05:08.360 ","End":"05:12.815","Text":"This 1 gives us minus 1/2 plus x squared."},{"Start":"05:12.815 ","End":"05:20.555","Text":"These 2 polynomials are the basis for the solution space,"},{"Start":"05:20.555 ","End":"05:27.595","Text":"which is the orthogonal complement of W or W perp."},{"Start":"05:27.595 ","End":"05:30.035","Text":"Since there are 2 of them,"},{"Start":"05:30.035 ","End":"05:34.130","Text":"then the dimension of W perp is 2."},{"Start":"05:34.130 ","End":"05:37.080","Text":"We are done."}],"ID":10157},{"Watched":false,"Name":"Exercise 4","Duration":"5m 
46s","ChapterTopicVideoID":10039,"CourseChapterTopicPlaylistID":7312,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.725","Text":"In this exercise, we have the inner product space of"},{"Start":"00:04.725 ","End":"00:10.635","Text":"polynomials of degree 2 or less over the reals."},{"Start":"00:10.635 ","End":"00:13.590","Text":"I have to tell you what the inner product is."},{"Start":"00:13.590 ","End":"00:21.300","Text":"We borrow it from the space of continuous functions on the interval from 0-1."},{"Start":"00:21.300 ","End":"00:25.500","Text":"Polynomials are continuous everywhere. That\u0027s okay."},{"Start":"00:25.500 ","End":"00:32.150","Text":"The standard inner product for this space is the integral inner product,"},{"Start":"00:32.150 ","End":"00:34.070","Text":"which is defined like so,"},{"Start":"00:34.070 ","End":"00:37.025","Text":"the integral of the product over the interval."},{"Start":"00:37.025 ","End":"00:44.885","Text":"Now we want to take a subspace of P_2 of R. 
We\u0027ll take the span of the 2 polynomials,"},{"Start":"00:44.885 ","End":"00:47.005","Text":"x and x squared."},{"Start":"00:47.005 ","End":"00:50.360","Text":"I could have given them letters P of x is x,"},{"Start":"00:50.360 ","End":"00:51.785","Text":"q of x is x squared."},{"Start":"00:51.785 ","End":"00:53.600","Text":"Just give them like this."},{"Start":"00:53.600 ","End":"00:58.669","Text":"What our task is to do is to find a basis for"},{"Start":"00:58.669 ","End":"01:04.880","Text":"the orthogonal complement of W or W [inaudible] ."},{"Start":"01:04.880 ","End":"01:09.270","Text":"Need to find a basis and the dimension."},{"Start":"01:09.400 ","End":"01:14.224","Text":"Now I want to interpret what this all means."},{"Start":"01:14.224 ","End":"01:22.430","Text":"A polynomial in P_2 of R is of the form a plus bx plus cx squared,"},{"Start":"01:22.430 ","End":"01:24.995","Text":"a, b and c are real numbers."},{"Start":"01:24.995 ","End":"01:28.610","Text":"That\u0027s how we can characterize these polynomials."},{"Start":"01:28.610 ","End":"01:33.785","Text":"Now, what does it mean for a polynomial p to be in W [inaudible] ?"},{"Start":"01:33.785 ","End":"01:38.225","Text":"It means it\u0027s orthogonal to everything in w,"},{"Start":"01:38.225 ","End":"01:44.720","Text":"but it\u0027s enough for it to be orthogonal to x and to x squared to the spanning set."},{"Start":"01:44.720 ","End":"01:46.790","Text":"If it\u0027s orthogonal to these 2,"},{"Start":"01:46.790 ","End":"01:49.790","Text":"it will be orthogonal to any linear combination."},{"Start":"01:49.790 ","End":"01:54.545","Text":"We have 2 orthogonality conditions."},{"Start":"01:54.545 ","End":"01:56.570","Text":"P is orthogonal to x,"},{"Start":"01:56.570 ","End":"02:00.830","Text":"which means the inner product of this is 0 and P is orthogonal to x squared,"},{"Start":"02:00.830 ","End":"02:04.750","Text":"which means that this inner product is 0."},{"Start":"02:04.750 
","End":"02:06.800","Text":"We\u0027ll do them 1 at a time."},{"Start":"02:06.800 ","End":"02:09.365","Text":"Let\u0027s start with this first condition"},{"Start":"02:09.365 ","End":"02:13.345","Text":"and see where it leads to and then we\u0027ll do the other 1."},{"Start":"02:13.345 ","End":"02:18.680","Text":"This 1 by definition of the inner product is this integral."},{"Start":"02:18.680 ","End":"02:21.815","Text":"This is p of x, this is x."},{"Start":"02:21.815 ","End":"02:29.415","Text":"First a bit of algebra to multiply everything by x here and then the integration,"},{"Start":"02:29.415 ","End":"02:33.470","Text":"and assuming you know how to do simple integration."},{"Start":"02:33.470 ","End":"02:35.495","Text":"This is the indefinite integral."},{"Start":"02:35.495 ","End":"02:40.219","Text":"We have to substitute the limits of integration and subtract."},{"Start":"02:40.219 ","End":"02:42.440","Text":"If I plug in 0, I don\u0027t get anything,"},{"Start":"02:42.440 ","End":"02:44.695","Text":"so I just have to plug in 1."},{"Start":"02:44.695 ","End":"02:48.795","Text":"That\u0027s this, but I don\u0027t want to work with fractions,"},{"Start":"02:48.795 ","End":"02:52.610","Text":"so if we multiply everything by 12,"},{"Start":"02:52.610 ","End":"02:54.500","Text":"we get this condition,"},{"Start":"02:54.500 ","End":"02:57.440","Text":"which is an equation in a, b,"},{"Start":"02:57.440 ","End":"03:02.390","Text":"and c. 
Then we\u0027ll get another equation from the other half from this condition,"},{"Start":"03:02.390 ","End":"03:05.315","Text":"the orthogonality with x squared."},{"Start":"03:05.315 ","End":"03:07.525","Text":"Let\u0027s see what that gives us."},{"Start":"03:07.525 ","End":"03:11.434","Text":"A similar integral, but where we had x before,"},{"Start":"03:11.434 ","End":"03:13.564","Text":"we now have x squared."},{"Start":"03:13.564 ","End":"03:16.775","Text":"Once again first the algebra to multiply out,"},{"Start":"03:16.775 ","End":"03:21.050","Text":"just raise all the exponents by 2 inside there."},{"Start":"03:21.050 ","End":"03:24.470","Text":"Now we\u0027ll do the actual integral,"},{"Start":"03:24.470 ","End":"03:27.410","Text":"which is this, at least this is the indefinite integral."},{"Start":"03:27.410 ","End":"03:31.010","Text":"Now we have to plug in the 1 and 0 and subtract,"},{"Start":"03:31.010 ","End":"03:33.755","Text":"which gives us this."},{"Start":"03:33.755 ","End":"03:42.285","Text":"If we get rid of the fractions by multiplying by 60,"},{"Start":"03:42.285 ","End":"03:44.610","Text":"then we get this."},{"Start":"03:44.610 ","End":"03:47.990","Text":"Now we have a second equation in a, b,"},{"Start":"03:47.990 ","End":"03:51.020","Text":"and c, and if I scroll back,"},{"Start":"03:51.020 ","End":"03:52.850","Text":"you would see the first 1."},{"Start":"03:52.850 ","End":"03:56.225","Text":"Let\u0027s start a new page with both equations."},{"Start":"03:56.225 ","End":"04:00.480","Text":"Here we are, 2 equations in 3 unknowns, a,"},{"Start":"04:00.480 ","End":"04:04.089","Text":"b, c. 
I\u0027m not going to work with matrices."},{"Start":"04:04.089 ","End":"04:05.500","Text":"We\u0027ll just work directly."},{"Start":"04:05.500 ","End":"04:07.420","Text":"I want to bring it into echelon form."},{"Start":"04:07.420 ","End":"04:11.160","Text":"I want to get a 0 in the second equation in place of the 20a."},{"Start":"04:11.160 ","End":"04:17.490","Text":"The least common multiple of 6 and 20 is 60."},{"Start":"04:17.490 ","End":"04:25.320","Text":"If I take 10 of these and 3 of those, subtract, and put that into the second row."},{"Start":"04:25.320 ","End":"04:29.600","Text":"Then feel free to check this by pausing."},{"Start":"04:29.600 ","End":"04:34.300","Text":"This is what we get, and the coefficient of a is absent, which is what we wanted,"},{"Start":"04:34.300 ","End":"04:39.145","Text":"so this is now in echelon form."},{"Start":"04:39.145 ","End":"04:47.845","Text":"We can see that c is the free variable,"},{"Start":"04:47.845 ","End":"04:53.640","Text":"and a and b depend on it."},{"Start":"04:53.640 ","End":"04:55.895","Text":"Using the method of wandering 1s,"},{"Start":"04:55.895 ","End":"04:57.460","Text":"well, there is only 1 possibility."},{"Start":"04:57.460 ","End":"05:00.220","Text":"We have to let c equal 1."},{"Start":"05:00.220 ","End":"05:04.000","Text":"I tried it earlier and I got fractions, decimals,"},{"Start":"05:04.000 ","End":"05:08.680","Text":"I got b is minus 1.2 and a is 0.3."},{"Start":"05:08.680 ","End":"05:12.190","Text":"We don\u0027t have to take c equals 1."},{"Start":"05:12.190 ","End":"05:14.635","Text":"If we took c equals 10,"},{"Start":"05:14.635 ","End":"05:17.290","Text":"then we can avoid working with fractions."},{"Start":"05:17.290 ","End":"05:20.080","Text":"You could work in decimals if you wanted to."},{"Start":"05:20.080 ","End":"05:22.370","Text":"I prefer whole numbers."},{"Start":"05:22.400 ","End":"05:27.590","Text":"We\u0027ve seen there\u0027s only 1 polynomial in the basis which we build 
from this."},{"Start":"05:27.590 ","End":"05:32.075","Text":"Remember we wanted a plus bx plus cx squared,"},{"Start":"05:32.075 ","End":"05:34.525","Text":"so that gives us this."},{"Start":"05:34.525 ","End":"05:44.550","Text":"W [inaudible] is the span of this 1 polynomial and its dimension is 1,"},{"Start":"05:44.550 ","End":"05:47.200","Text":"and we are done."}],"ID":10158},{"Watched":false,"Name":"Exercise 5","Duration":"4m 55s","ChapterTopicVideoID":10040,"CourseChapterTopicPlaylistID":7312,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:10.180","Text":"In this exercise, we\u0027re concerned with the space of 2 by 2 real matrices."},{"Start":"00:10.790 ","End":"00:15.930","Text":"We have a standard inner product for all these matrices,"},{"Start":"00:15.930 ","End":"00:21.255","Text":"which is that the inner product of 2 matrices of the same size"},{"Start":"00:21.255 ","End":"00:27.790","Text":"is the trace of the transpose of the second times the first."},{"Start":"00:27.790 ","End":"00:33.695","Text":"Now, we have a subspace of this inner product space W,"},{"Start":"00:33.695 ","End":"00:40.095","Text":"which is the subspace spanned by these 2 particular matrices."},{"Start":"00:40.095 ","End":"00:45.740","Text":"Our task is to find a basis for and"},{"Start":"00:45.740 ","End":"00:52.740","Text":"the dimension of the orthogonal complement of W, W perp."},{"Start":"00:52.790 ","End":"00:56.515","Text":"Let\u0027s let the general matrix be x, y, z,"},{"Start":"00:56.515 ","End":"01:01.550","Text":"t. 
The condition on being in W perp is"},{"Start":"01:01.550 ","End":"01:07.305","Text":"that of being orthogonal to both of these matrices."},{"Start":"01:07.305 ","End":"01:09.140","Text":"There\u0027s 2 conditions."},{"Start":"01:09.140 ","End":"01:13.010","Text":"The first condition is orthogonality with this one,"},{"Start":"01:13.010 ","End":"01:16.245","Text":"which means the inner product of these 2 is 0."},{"Start":"01:16.245 ","End":"01:19.025","Text":"It also has to be orthogonal to this one."},{"Start":"01:19.025 ","End":"01:21.715","Text":"This inner product is 0."},{"Start":"01:21.715 ","End":"01:24.440","Text":"Let\u0027s take the 2 conditions one at a time."},{"Start":"01:24.440 ","End":"01:28.115","Text":"First, this one, the inner product,"},{"Start":"01:28.115 ","End":"01:31.625","Text":"we look at the formula here or you could remember it."},{"Start":"01:31.625 ","End":"01:40.625","Text":"We need the transpose of the second times the first to be 0."},{"Start":"01:40.625 ","End":"01:43.475","Text":"I just left it at the moment as transpose."},{"Start":"01:43.475 ","End":"01:45.095","Text":"By the way, don\u0027t get confused."},{"Start":"01:45.095 ","End":"01:49.100","Text":"Tr is trace and the T is transpose."},{"Start":"01:49.100 ","End":"01:53.040","Text":"They both begin with the TRA."},{"Start":"01:53.320 ","End":"01:56.870","Text":"The transpose of this matrix,"},{"Start":"01:56.870 ","End":"02:00.830","Text":"if you just interchange rows and columns, is this."},{"Start":"02:00.830 ","End":"02:05.585","Text":"We want the product of this times this to have a trace of 0."},{"Start":"02:05.585 ","End":"02:08.580","Text":"Let\u0027s do the product first."},{"Start":"02:09.320 ","End":"02:12.680","Text":"This is the product. 
I didn\u0027t really need to do it all."},{"Start":"02:12.680 ","End":"02:15.919","Text":"I just needed to figure out the ones on the diagonal,"},{"Start":"02:15.919 ","End":"02:19.250","Text":"but it was 0s and 1s, so I did it all."},{"Start":"02:19.250 ","End":"02:23.629","Text":"The trace being 0 means that for these diagonal entries,"},{"Start":"02:23.629 ","End":"02:25.010","Text":"the sum is 0,"},{"Start":"02:25.010 ","End":"02:27.755","Text":"so z plus t has to be 0."},{"Start":"02:27.755 ","End":"02:36.695","Text":"This is the equation that comes out of the first condition of orthogonality."},{"Start":"02:36.695 ","End":"02:43.190","Text":"Let\u0027s take the second orthogonality condition and see what we get from that one."},{"Start":"02:43.940 ","End":"02:46.070","Text":"We proceed as before,"},{"Start":"02:46.070 ","End":"02:48.440","Text":"it\u0027s just a different matrix here."},{"Start":"02:48.440 ","End":"02:51.440","Text":"We\u0027ll take its transpose,"},{"Start":"02:51.440 ","End":"02:55.470","Text":"which is this; the 1 moved from here to here."},{"Start":"02:55.470 ","End":"02:59.840","Text":"Now we want to multiply and take the trace."},{"Start":"02:59.840 ","End":"03:03.860","Text":"The product is this and I computed it all,"},{"Start":"03:03.860 ","End":"03:06.815","Text":"but we really only needed to compute the diagonal."},{"Start":"03:06.815 ","End":"03:10.035","Text":"To say that the trace of this is 0,"},{"Start":"03:10.035 ","End":"03:13.105","Text":"is to say that z equals 0."},{"Start":"03:13.105 ","End":"03:16.405","Text":"Look, we\u0027ve got 2 equations."},{"Start":"03:16.405 ","End":"03:18.640","Text":"It\u0027s in 4 unknowns here."},{"Start":"03:18.640 ","End":"03:20.230","Text":"We don\u0027t see x and y here,"},{"Start":"03:20.230 ","End":"03:22.480","Text":"but these are 2 equations in x, y, z,"},{"Start":"03:22.480 ","End":"03:26.260","Text":"t. 
Now if I look at these 2 equations,"},{"Start":"03:26.260 ","End":"03:29.950","Text":"you can see that the free variables are the ones that are missing."},{"Start":"03:29.950 ","End":"03:31.270","Text":"There\u0027s no condition on them."},{"Start":"03:31.270 ","End":"03:32.605","Text":"They can be anything."},{"Start":"03:32.605 ","End":"03:35.435","Text":"They are free."},{"Start":"03:35.435 ","End":"03:39.130","Text":"We\u0027ll use our familiar trick of the wandering ones."},{"Start":"03:39.130 ","End":"03:40.270","Text":"We\u0027ll let x equal 1,"},{"Start":"03:40.270 ","End":"03:43.070","Text":"y equals 0, and then vice versa,"},{"Start":"03:43.070 ","End":"03:47.800","Text":"x equals 0, y equals 1, plus the second equation,"},{"Start":"03:47.800 ","End":"03:48.880","Text":"let me just write it."},{"Start":"03:48.880 ","End":"03:53.270","Text":"It was z plus t equals 0."},{"Start":"03:54.130 ","End":"03:58.880","Text":"Well, it doesn\u0027t matter what x and y are."},{"Start":"03:58.880 ","End":"04:00.575","Text":"Because in any event,"},{"Start":"04:00.575 ","End":"04:05.210","Text":"these 2 equations give us that z is 0 and t is 0."},{"Start":"04:05.210 ","End":"04:08.000","Text":"As you plug z equals 0 in here, you get t is 0,"},{"Start":"04:08.000 ","End":"04:09.755","Text":"so z is 0, t is 0,"},{"Start":"04:09.755 ","End":"04:11.900","Text":"and we do vice versa."},{"Start":"04:11.900 ","End":"04:14.494","Text":"Then y is 1 and x is 0."},{"Start":"04:14.494 ","End":"04:19.040","Text":"Then it doesn\u0027t change anything about z and t. 
They are both 0,"},{"Start":"04:19.040 ","End":"04:21.230","Text":"they don\u0027t care about x and y."},{"Start":"04:21.230 ","End":"04:24.385","Text":"Now we have 2."},{"Start":"04:24.385 ","End":"04:27.079","Text":"Each of these 2 rows represents a matrix,"},{"Start":"04:27.079 ","End":"04:28.730","Text":"but let\u0027s put them in the right order."},{"Start":"04:28.730 ","End":"04:34.130","Text":"The x, y, z, t basis for the solution space,"},{"Start":"04:34.130 ","End":"04:36.410","Text":"which is W perp,"},{"Start":"04:36.410 ","End":"04:41.050","Text":"is from here 1,0 and 0,0 and"},{"Start":"04:41.050 ","End":"04:46.920","Text":"here 0,1 and also 0,0."},{"Start":"04:46.920 ","End":"04:51.135","Text":"Since this is a basis and there\u0027s 2 elements in it, 2 matrices,"},{"Start":"04:51.135 ","End":"04:56.680","Text":"the dimension of W perp is 2. We are done."}],"ID":10159},{"Watched":false,"Name":"Exercise 6","Duration":"8m 12s","ChapterTopicVideoID":10028,"CourseChapterTopicPlaylistID":7312,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:06.630","Text":"In this exercise, we have the space M_3 brackets R,"},{"Start":"00:06.630 ","End":"00:13.110","Text":"which means the 3 by 3 matrices over the real numbers."},{"Start":"00:13.110 ","End":"00:16.020","Text":"This is not just a vector space,"},{"Start":"00:16.020 ","End":"00:18.690","Text":"it\u0027s an inner product space with the usual inner product,"},{"Start":"00:18.690 ","End":"00:21.210","Text":"I\u0027ll remind you when we get to it."},{"Start":"00:21.310 ","End":"00:27.720","Text":"W is the subspace of diagonal matrices,"},{"Start":"00:27.720 ","End":"00:30.180","Text":"meaning there\u0027s entries along the diagonal,"},{"Start":"00:30.180 ","End":"00:32.985","Text":"but off the main diagonal it\u0027s all 
0."},{"Start":"00:32.985 ","End":"00:39.440","Text":"Our task is to find the orthogonal complement W perp to"},{"Start":"00:39.440 ","End":"00:47.200","Text":"this subspace W. Now I claim that a basis for W,"},{"Start":"00:47.200 ","End":"00:54.115","Text":"the diagonal matrices, consists of these 3 matrices."},{"Start":"00:54.115 ","End":"00:56.110","Text":"Notice there\u0027s a 1 here,"},{"Start":"00:56.110 ","End":"00:57.505","Text":"then there\u0027s a 1 here,"},{"Start":"00:57.505 ","End":"00:58.750","Text":"and then there\u0027s a 1 here,"},{"Start":"00:58.750 ","End":"01:00.535","Text":"all along the main diagonal,"},{"Start":"01:00.535 ","End":"01:03.079","Text":"a 1 in different positions."},{"Start":"01:03.240 ","End":"01:12.070","Text":"I\u0027ll show you why they span W. Because if I take any diagonal matrix,"},{"Start":"01:12.070 ","End":"01:13.600","Text":"it will look like a, b, c,"},{"Start":"01:13.600 ","End":"01:16.840","Text":"here and I can write it as a times this 1,"},{"Start":"01:16.840 ","End":"01:18.040","Text":"and b times this 1,"},{"Start":"01:18.040 ","End":"01:20.990","Text":"and c times this 1."},{"Start":"01:20.990 ","End":"01:30.365","Text":"It spans and they\u0027re also linearly independent because if we have a linear combination,"},{"Start":"01:30.365 ","End":"01:34.070","Text":"a times this plus b times this plus c times this is 0."},{"Start":"01:34.070 ","End":"01:37.520","Text":"It means that this is the 0 matrix and it means that a,"},{"Start":"01:37.520 ","End":"01:40.745","Text":"b, and c are all 0."},{"Start":"01:40.745 ","End":"01:44.970","Text":"This shows that they are linearly independent."},{"Start":"01:45.200 ","End":"01:53.110","Text":"Good, we have a basis for W. 
What we would really like is a basis for W perp."},{"Start":"01:53.110 ","End":"01:55.729","Text":"When it says to find the orthogonal complement,"},{"Start":"01:55.729 ","End":"01:58.970","Text":"we can settle for just giving a basis for it; that will do."},{"Start":"01:58.970 ","End":"02:02.905","Text":"How do we go about finding a basis for W perp?"},{"Start":"02:02.905 ","End":"02:07.865","Text":"Let\u0027s try and characterize what it means to be in W perp."},{"Start":"02:07.865 ","End":"02:10.400","Text":"Let\u0027s say we have a matrix A,"},{"Start":"02:10.400 ","End":"02:14.735","Text":"and I\u0027ll just call its entries x_1 through x_9."},{"Start":"02:14.735 ","End":"02:17.615","Text":"A is going to be orthogonal to"},{"Start":"02:17.615 ","End":"02:24.780","Text":"all the basis members of W. We don\u0027t have to check for all W,"},{"Start":"02:24.780 ","End":"02:30.170","Text":"we just have to check on the basis; that will give us 3 orthogonality conditions."},{"Start":"02:30.170 ","End":"02:32.360","Text":"This would have to be orthogonal to this,"},{"Start":"02:32.360 ","End":"02:35.200","Text":"to this, and to this."},{"Start":"02:35.200 ","End":"02:38.320","Text":"Here\u0027s the first 1."},{"Start":"02:39.530 ","End":"02:44.360","Text":"The inner product of this with the first matrix"},{"Start":"02:44.360 ","End":"02:45.770","Text":"in the basis is 0."},{"Start":"02:45.770 ","End":"02:47.540","Text":"What does this mean?"},{"Start":"02:47.540 ","End":"02:52.910","Text":"Well, I\u0027ll tell you in a moment; I decided I want to write all 3 of them first,"},{"Start":"02:52.910 ","End":"02:55.070","Text":"and now we\u0027ll see what it means."},{"Start":"02:55.070 ","End":"02:57.890","Text":"The inner product of X with Y, in general,"},{"Start":"02:57.890 ","End":"03:03.005","Text":"is the trace of the transpose of the second times the first,"},{"Start":"03:03.005 ","End":"03:06.210","Text":"Y transpose times X."},{"Start":"03:06.400 ","End":"03:09.890","Text":"This 
inner product is the transpose of"},{"Start":"03:09.890 ","End":"03:13.735","Text":"the second times the first and then we take the trace."},{"Start":"03:13.735 ","End":"03:17.070","Text":"Now the transpose of this is itself."},{"Start":"03:17.070 ","End":"03:20.610","Text":"We get the trace of this times x_1,"},{"Start":"03:20.610 ","End":"03:21.660","Text":"x_2, x_3, and so on."},{"Start":"03:21.660 ","End":"03:25.660","Text":"This is our matrix A, equal to 0."},{"Start":"03:25.660 ","End":"03:28.370","Text":"If you multiply it out,"},{"Start":"03:28.370 ","End":"03:32.150","Text":"then we get this matrix."},{"Start":"03:32.150 ","End":"03:34.970","Text":"I didn\u0027t have to multiply it all out."},{"Start":"03:34.970 ","End":"03:36.080","Text":"It was just easy to do,"},{"Start":"03:36.080 ","End":"03:39.275","Text":"I only really needed to do the ones for the diagonal."},{"Start":"03:39.275 ","End":"03:43.865","Text":"I would have taken this row with this column, and gotten x_1."},{"Start":"03:43.865 ","End":"03:46.550","Text":"But it was easy enough to do them all."},{"Start":"03:46.550 ","End":"03:51.275","Text":"Then I\u0027ve marked the diagonal in a different, darker color."},{"Start":"03:51.275 ","End":"03:54.050","Text":"The trace of this is this plus this plus this."},{"Start":"03:54.050 ","End":"03:57.475","Text":"This just means that x_1 is 0."},{"Start":"03:57.475 ","End":"04:00.300","Text":"Similarly, with the next 1,"},{"Start":"04:00.300 ","End":"04:10.760","Text":"this next 1 will give us that x_5 is 0 and the last 1 gives us x_9 is 0."},{"Start":"04:10.760 ","End":"04:17.295","Text":"Just do the same as we did in the first example, just boring computations."},{"Start":"04:17.295 ","End":"04:19.650","Text":"Now we have these 3 conditions,"},{"Start":"04:19.650 ","End":"04:22.530","Text":"x_1 is 0, x_5 is 0,"},{"Start":"04:22.530 ","End":"04:24.780","Text":"and x_9 is 0."},{"Start":"04:24.780 ","End":"04:29.535","Text":"Let me highlight them 
and what we get is a system of"},{"Start":"04:29.535 ","End":"04:36.760","Text":"3 equations in 9 unknowns, x_1 through x_9."},{"Start":"04:36.800 ","End":"04:38.930","Text":"Here we are on a new page."},{"Start":"04:38.930 ","End":"04:41.900","Text":"I just wrote them all as 1 equation; x_1, x_5,"},{"Start":"04:41.900 ","End":"04:43.640","Text":"and x_9 are all 0,"},{"Start":"04:43.640 ","End":"04:46.890","Text":"3 equations, 9 unknowns."},{"Start":"04:47.090 ","End":"04:53.955","Text":"Clearly, the 6 variables that don\u0027t appear here are the free variables,"},{"Start":"04:53.955 ","End":"04:56.610","Text":"and x_1, x_5,"},{"Start":"04:56.610 ","End":"04:59.105","Text":"and x_9 are forced."},{"Start":"04:59.105 ","End":"05:01.160","Text":"We don\u0027t even care actually what these are."},{"Start":"05:01.160 ","End":"05:06.215","Text":"These are going to be all 3 of them 0 no matter what and these are going to be free."},{"Start":"05:06.215 ","End":"05:09.820","Text":"We\u0027ll use our method of the wandering 1s,"},{"Start":"05:09.820 ","End":"05:15.560","Text":"as we called it, and I\u0027ll remind you what this wandering 1s method is."},{"Start":"05:15.560 ","End":"05:17.900","Text":"We take the free variables,"},{"Start":"05:17.900 ","End":"05:23.955","Text":"and each time we let 1 of them equal 1 or any non-zero number that\u0027s convenient,"},{"Start":"05:23.955 ","End":"05:26.165","Text":"and the rest of them 0."},{"Start":"05:26.165 ","End":"05:31.235","Text":"We take them in turns; the 1 wanders from this 1 to this 1 to this 1 to this 1."},{"Start":"05:31.235 ","End":"05:37.710","Text":"For example, if we let x_2 equal 1 and the other 5,"},{"Start":"05:37.710 ","End":"05:41.800","Text":"again, will be 0, that\u0027s part of the wandering 1s."},{"Start":"05:42.440 ","End":"05:45.690","Text":"What it gives us is this matrix."},{"Start":"05:45.690 ","End":"05:48.360","Text":"I used colors. 
That\u0027s the x_2, that\u0027s 1."},{"Start":"05:48.360 ","End":"05:53.240","Text":"The 1s that are 0 from the free variables are in this greenish color,"},{"Start":"05:53.240 ","End":"05:56.490","Text":"and the x_1,"},{"Start":"05:56.490 ","End":"06:00.440","Text":"x_5, x_9 are in a grayish color."},{"Start":"06:00.440 ","End":"06:03.770","Text":"These are the 3 diagonal 1s."},{"Start":"06:03.770 ","End":"06:09.605","Text":"Basically, we get this matrix with a 1 here and everything else 0."},{"Start":"06:09.605 ","End":"06:14.085","Text":"Now, the other 5 are all very similar."},{"Start":"06:14.085 ","End":"06:16.080","Text":"Here\u0027s what we get;"},{"Start":"06:16.080 ","End":"06:20.130","Text":"6 matrices in all; this 1 is the first 1, that one\u0027s here."},{"Start":"06:20.130 ","End":"06:26.375","Text":"Then we let x_3 equal 1 and all the rest of them 0."},{"Start":"06:26.375 ","End":"06:28.130","Text":"This is what we get."},{"Start":"06:28.130 ","End":"06:30.110","Text":"Actually it\u0027s fairly clear in each case,"},{"Start":"06:30.110 ","End":"06:33.305","Text":"the diagonal is going to be all zeros from this."},{"Start":"06:33.305 ","End":"06:36.080","Text":"Each time we\u0027re letting 1 of them be 1 and the other 5 as"},{"Start":"06:36.080 ","End":"06:39.470","Text":"0, so there are 8 zeros altogether,"},{"Start":"06:39.470 ","End":"06:44.700","Text":"the 3 on the diagonal and the other 5, and a single 1 in the positions 2,"},{"Start":"06:44.700 ","End":"06:47.055","Text":"3, 4, 6, 7, and 8."},{"Start":"06:47.055 ","End":"06:52.585","Text":"We get this and this is going to be our basis for W perp."},{"Start":"06:52.585 ","End":"06:55.725","Text":"Now in principle, we\u0027re done here."},{"Start":"06:55.725 ","End":"06:57.690","Text":"We found W perp,"},{"Start":"06:57.690 ","End":"07:00.440","Text":"we found a basis, and that counts as finding it."},{"Start":"07:00.440 ","End":"07:07.300","Text":"But I\u0027d like to say a few words more about 
the orthogonal decomposition theorem."},{"Start":"07:07.300 ","End":"07:10.215","Text":"Here I prepared them."},{"Start":"07:10.215 ","End":"07:13.685","Text":"Notice that this is consistent with that theorem."},{"Start":"07:13.685 ","End":"07:22.970","Text":"What the theorem says is that the space should equal W direct sum W perp."},{"Start":"07:22.970 ","End":"07:26.195","Text":"Let\u0027s check the dimensions."},{"Start":"07:26.195 ","End":"07:34.900","Text":"The dimension of W is 3 because we had 3 basis elements; the dimension of W perp is 6,"},{"Start":"07:34.900 ","End":"07:37.005","Text":"because there are 6 here."},{"Start":"07:37.005 ","End":"07:43.890","Text":"The dimension of the whole 3 by 3 matrix space is 9,"},{"Start":"07:43.890 ","End":"07:47.685","Text":"and 3 plus 6 equals 9."},{"Start":"07:47.685 ","End":"07:54.460","Text":"From this, we can deduce that W plus W perp is all of M_3."},{"Start":"07:54.590 ","End":"07:59.880","Text":"It is a direct sum because W and W perp,"},{"Start":"08:00.080 ","End":"08:03.660","Text":"they only have the 0 in common."},{"Start":"08:03.660 ","End":"08:07.520","Text":"That was just an extra note that this is consistent with"},{"Start":"08:07.520 ","End":"08:12.420","Text":"the orthogonal decomposition theorem. 
Now we\u0027re really done."}],"ID":10160},{"Watched":false,"Name":"Exercise 7","Duration":"5m 58s","ChapterTopicVideoID":10029,"CourseChapterTopicPlaylistID":7312,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.020","Text":"This is another exercise with orthogonal complements."},{"Start":"00:04.020 ","End":"00:11.100","Text":"This time we start out with the space of 2 by 2 real matrices."},{"Start":"00:11.100 ","End":"00:18.810","Text":"Subspace W is the space of all the symmetric matrices."},{"Start":"00:18.810 ","End":"00:25.210","Text":"We want to find a basis for the orthogonal complement."},{"Start":"00:26.450 ","End":"00:31.160","Text":"The inner product is the usual inner product for matrices,"},{"Start":"00:31.160 ","End":"00:34.500","Text":"and I\u0027ll remind you again when the time comes."},{"Start":"00:34.670 ","End":"00:37.370","Text":"Now, it\u0027s a fairly easy exercise."},{"Start":"00:37.370 ","End":"00:42.575","Text":"It\u0027s just a bit time-consuming to find a basis for W,"},{"Start":"00:42.575 ","End":"00:46.864","Text":"but here is 1 such basis."},{"Start":"00:46.864 ","End":"00:52.340","Text":"Notice that each of these is a symmetric matrix,"},{"Start":"00:52.340 ","End":"00:53.510","Text":"we have a 1 here,"},{"Start":"00:53.510 ","End":"00:56.480","Text":"we have a 1 here, and this time we have 2 1s here."},{"Start":"00:56.480 ","End":"00:59.290","Text":"Each of them is symmetrical about the diagonal"},{"Start":"00:59.290 ","End":"01:02.665","Text":"or the transpose equals itself,"},{"Start":"01:02.665 ","End":"01:05.660","Text":"and so we start with this."},{"Start":"01:05.660 ","End":"01:12.250","Text":"Next, we want to find what W perp is."},{"Start":"01:12.250 ","End":"01:15.975","Text":"Let\u0027s take a general matrix A,"},{"Start":"01:15.975 
","End":"01:17.405","Text":"call it x1, x2, x3,"},{"Start":"01:17.405 ","End":"01:22.150","Text":"x4, and let\u0027s see what it means for it to be in W perp."},{"Start":"01:22.150 ","End":"01:26.470","Text":"Well, it means that it\u0027s orthogonal to W,"},{"Start":"01:26.470 ","End":"01:32.950","Text":"but we settle for orthogonal to the basis members of W."},{"Start":"01:32.950 ","End":"01:33.745","Text":"It\u0027s the same thing."},{"Start":"01:33.745 ","End":"01:37.210","Text":"We just have to show that this is orthogonal to this,"},{"Start":"01:37.210 ","End":"01:38.470","Text":"to this, and to this,"},{"Start":"01:38.470 ","End":"01:40.780","Text":"that will give us 3 equations."},{"Start":"01:40.780 ","End":"01:46.000","Text":"Now, I\u0027m reminding you what I mean by the usual inner product with matrices."},{"Start":"01:46.000 ","End":"01:51.290","Text":"The inner product of say x with y is this."},{"Start":"01:51.290 ","End":"01:58.190","Text":"It\u0027s always the trace of the transpose of the 2nd times the 1st."},{"Start":"01:58.190 ","End":"02:04.750","Text":"Now, the 1st of the 3 conditions is that this is orthogonal to the 1st 1."},{"Start":"02:04.750 ","End":"02:08.650","Text":"Orthogonal means that the inner product is 0,"},{"Start":"02:08.650 ","End":"02:11.695","Text":"and the inner product is defined here."},{"Start":"02:11.695 ","End":"02:14.625","Text":"This gives the trace."},{"Start":"02:14.625 ","End":"02:17.710","Text":"Now, this looks like y,"},{"Start":"02:17.710 ","End":"02:19.150","Text":"and in fact, it is y,"},{"Start":"02:19.150 ","End":"02:20.580","Text":"but it\u0027s y transpose."},{"Start":"02:20.580 ","End":"02:23.230","Text":"It\u0027s just that this matrix is symmetric,"},{"Start":"02:23.230 ","End":"02:26.865","Text":"so its own transpose."},{"Start":"02:26.865 ","End":"02:29.680","Text":"We have the trace of this times this is 0,"},{"Start":"02:29.680 ","End":"02:34.470","Text":"and if you check what the trace of this 
times this is,"},{"Start":"02:34.470 ","End":"02:37.400","Text":"actually, I could write it out for you."},{"Start":"02:37.400 ","End":"02:44.700","Text":"If we take this 1st row with the 1st column here,"},{"Start":"02:44.700 ","End":"02:48.000","Text":"that gives us x,1."},{"Start":"02:48.000 ","End":"02:54.390","Text":"If we take the 2nd row with the 2nd column,"},{"Start":"02:54.390 ","End":"02:56.340","Text":"that will give us this entry,"},{"Start":"02:56.340 ","End":"02:57.720","Text":"which is 0,"},{"Start":"02:57.720 ","End":"03:01.970","Text":"we don\u0027t care about this and this,"},{"Start":"03:01.970 ","End":"03:04.640","Text":"and the trace comes out to be x1 plus 0,"},{"Start":"03:04.640 ","End":"03:07.885","Text":"which is x1, which equals 0."},{"Start":"03:07.885 ","End":"03:10.410","Text":"Here\u0027s the next 1,"},{"Start":"03:10.410 ","End":"03:16.860","Text":"which is that this A is orthogonal to,"},{"Start":"03:16.860 ","End":"03:18.435","Text":"I just lost it there,"},{"Start":"03:18.435 ","End":"03:21.495","Text":"this 1, the middle 1."},{"Start":"03:21.495 ","End":"03:28.395","Text":"Similarly, computation we get x_2 plus x_3 equals 0."},{"Start":"03:28.395 ","End":"03:33.600","Text":"This product is x_2 and x_3 on the diagonal."},{"Start":"03:33.600 ","End":"03:36.075","Text":"This is what we get,"},{"Start":"03:36.075 ","End":"03:38.280","Text":"and the last 1, if you check,"},{"Start":"03:38.280 ","End":"03:40.185","Text":"gives us x_4 equals 0."},{"Start":"03:40.185 ","End":"03:41.895","Text":"We have 3 equations,"},{"Start":"03:41.895 ","End":"03:46.155","Text":"this 1, this 1, and this 1."},{"Start":"03:46.155 ","End":"03:49.980","Text":"Let me just write them again properly."},{"Start":"03:49.980 ","End":"03:52.110","Text":"Here is the SLE,"},{"Start":"03:52.110 ","End":"03:54.630","Text":"I just collected these 3 together."},{"Start":"03:54.630 ","End":"03:57.600","Text":"The 1st 1 in each row,"},{"Start":"03:57.600 
","End":"04:02.775","Text":"the leading term is not free."},{"Start":"04:02.775 ","End":"04:06.330","Text":"The free 1 is x_3."},{"Start":"04:06.330 ","End":"04:09.680","Text":"We said x3 to be whatever we want,"},{"Start":"04:09.680 ","End":"04:11.000","Text":"and then x1, x_2,"},{"Start":"04:11.000 ","End":"04:14.250","Text":"and x4 will follow from that."},{"Start":"04:14.510 ","End":"04:17.730","Text":"Using our wondering 1s technique,"},{"Start":"04:17.730 ","End":"04:19.010","Text":"or just saying x3 is free,"},{"Start":"04:19.010 ","End":"04:21.460","Text":"let it be anything not 0,"},{"Start":"04:21.460 ","End":"04:23.195","Text":"let it equal 1,"},{"Start":"04:23.195 ","End":"04:26.580","Text":"and if x3 is 1,"},{"Start":"04:26.580 ","End":"04:30.585","Text":"then x2 comes out to be minus 1, and from here and here,"},{"Start":"04:30.585 ","End":"04:32.760","Text":"x1 is 0 and x4 is 0,"},{"Start":"04:32.760 ","End":"04:36.250","Text":"so these 4 values give us a matrix."},{"Start":"04:36.250 ","End":"04:38.885","Text":"Just get them in the right position,"},{"Start":"04:38.885 ","End":"04:42.115","Text":"x1, x2, x3, x4."},{"Start":"04:42.115 ","End":"04:44.670","Text":"That\u0027s just 1 matrix,"},{"Start":"04:44.670 ","End":"04:50.475","Text":"and this would be a basis for W perp,"},{"Start":"04:50.475 ","End":"04:54.440","Text":"and I guess I should have also added that,"},{"Start":"04:54.440 ","End":"04:58.160","Text":"I don\u0027t know if we are asked for this or not,"},{"Start":"04:58.160 ","End":"05:02.740","Text":"but the dimension is equal to 1 because there\u0027s only 1 matrix here."},{"Start":"05:02.740 ","End":"05:04.580","Text":"Now, we\u0027re basically done,"},{"Start":"05:04.580 ","End":"05:07.490","Text":"I just want to make a few more remarks."},{"Start":"05:07.490 ","End":"05:10.340","Text":"I just want to tie this in a bit showing"},{"Start":"05:10.340 ","End":"05:16.055","Text":"this verifies the orthogonal decomposition 
theorem,"},{"Start":"05:16.055 ","End":"05:24.830","Text":"which says that the whole space is equal to our subspace plus direct sum,"},{"Start":"05:24.830 ","End":"05:28.295","Text":"the subspace perp,"},{"Start":"05:28.295 ","End":"05:32.195","Text":"and then let\u0027s check the dimensions."},{"Start":"05:32.195 ","End":"05:34.580","Text":"The dimension of W is 3,"},{"Start":"05:34.580 ","End":"05:40.130","Text":"we saw earlier on that we had a basis with 3 matrices here."},{"Start":"05:40.130 ","End":"05:43.780","Text":"From here, we got the dimension of W perp is 1,"},{"Start":"05:43.780 ","End":"05:49.760","Text":"the 2-by-2 matrices altogether have dimension,2 squared is 4,"},{"Start":"05:49.760 ","End":"05:53.105","Text":"and indeed 3 plus 1 equals 4,"},{"Start":"05:53.105 ","End":"05:55.550","Text":"so everything works out."},{"Start":"05:55.550 ","End":"05:59.040","Text":"Okay, we\u0027re done."}],"ID":10161},{"Watched":false,"Name":"Exercise 8","Duration":"4m 23s","ChapterTopicVideoID":10030,"CourseChapterTopicPlaylistID":7312,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.080 ","End":"00:05.040","Text":"This exercise is a bit theoretical in nature."},{"Start":"00:05.040 ","End":"00:07.590","Text":"In this exercise, we\u0027re given"},{"Start":"00:07.590 ","End":"00:11.340","Text":"a homogeneous system of linear equations,"},{"Start":"00:11.340 ","End":"00:12.600","Text":"m by n."},{"Start":"00:12.600 ","End":"00:18.555","Text":"It\u0027s some equations in n unknowns and we write it in matrix form like this."},{"Start":"00:18.555 ","End":"00:21.915","Text":"A would be an m by n matrix."},{"Start":"00:21.915 ","End":"00:31.185","Text":"Let\u0027s say that U is the solution space of this SLE in R^n."},{"Start":"00:31.185 ","End":"00:34.140","Text":"R^n has the usual inner 
product,"},{"Start":"00:34.140 ","End":"00:36.075","Text":"which is the dot product."},{"Start":"00:36.075 ","End":"00:43.135","Text":"Our task is to describe U that somehow relates the 2 concepts;"},{"Start":"00:43.135 ","End":"00:49.669","Text":"to the concept of orthogonal complement and the concept of a row space,"},{"Start":"00:49.669 ","End":"00:53.910","Text":"specifically of the matrix A here."},{"Start":"00:54.380 ","End":"01:03.065","Text":"Let\u0027s start by writing this condensed matrix form equation with more detail."},{"Start":"01:03.065 ","End":"01:05.135","Text":"Like so, this is A,"},{"Start":"01:05.135 ","End":"01:06.860","Text":"this is X,"},{"Start":"01:06.860 ","End":"01:12.975","Text":"the vector of unknowns, X_1 through X_n."},{"Start":"01:12.975 ","End":"01:16.930","Text":"Here it\u0027s homogeneous, the vector 0."},{"Start":"01:16.930 ","End":"01:18.935","Text":"Now let\u0027s just say,"},{"Start":"01:18.935 ","End":"01:21.665","Text":"just so I can explain it better,"},{"Start":"01:21.665 ","End":"01:28.650","Text":"that U is the span of 3 vectors, u, v, and w."},{"Start":"01:28.650 ","End":"01:31.910","Text":"You\u0027ll see at the end they won\u0027t need"},{"Start":"01:31.910 ","End":"01:37.910","Text":"this assumption that it specifically has 3 vectors in it."},{"Start":"01:37.910 ","End":"01:42.600","Text":"Just yeah, for educational purposes."},{"Start":"01:44.090 ","End":"01:47.060","Text":"U is in the solution space,"},{"Start":"01:47.060 ","End":"01:49.130","Text":"so I can replace X with u,"},{"Start":"01:49.130 ","End":"01:52.325","Text":"so we get A times u is 0."},{"Start":"01:52.325 ","End":"01:56.020","Text":"In full, this is what we get."},{"Start":"01:56.020 ","End":"02:00.730","Text":"What this means is that u,"},{"Start":"02:00.730 ","End":"02:04.834","Text":"the vectors orthogonal to each row."},{"Start":"02:04.834 ","End":"02:09.350","Text":"Just for example, let\u0027s take the second row."},{"Start":"02:09.350 
","End":"02:16.145","Text":"This with this dot product will give us this 0."},{"Start":"02:16.145 ","End":"02:18.200","Text":"But for every row it would work."},{"Start":"02:18.200 ","End":"02:21.410","Text":"This means that this vector u is orthogonal,"},{"Start":"02:21.410 ","End":"02:24.020","Text":"not just to the second row but to every row,"},{"Start":"02:24.020 ","End":"02:26.710","Text":"because we\u0027ve got all 0s."},{"Start":"02:26.710 ","End":"02:30.770","Text":"If u is orthogonal to each row in A,"},{"Start":"02:30.770 ","End":"02:35.450","Text":"then it\u0027s orthogonal to the whole row space of A."},{"Start":"02:35.450 ","End":"02:39.590","Text":"Let\u0027s call that row space W."},{"Start":"02:39.590 ","End":"02:44.615","Text":"What we did with the vector little u,"},{"Start":"02:44.615 ","End":"02:49.265","Text":"we can do with v and w also."},{"Start":"02:49.265 ","End":"02:55.075","Text":"This is what we get if we substitute v instead of x,"},{"Start":"02:55.075 ","End":"02:57.990","Text":"and here it is with w."},{"Start":"02:57.990 ","End":"03:02.975","Text":"What we said about u also is true for v and w,"},{"Start":"03:02.975 ","End":"03:08.060","Text":"that they are also orthogonal to each row in A"},{"Start":"03:08.060 ","End":"03:11.240","Text":"and hence to the row space which we called W."},{"Start":"03:11.240 ","End":"03:15.980","Text":"Since u, v and w are orthogonal"},{"Start":"03:15.980 ","End":"03:19.595","Text":"to the row space W,"},{"Start":"03:19.595 ","End":"03:23.180","Text":"so is the solution space U,"},{"Start":"03:23.180 ","End":"03:25.945","Text":"which is the span of these 3."},{"Start":"03:25.945 ","End":"03:29.295","Text":"Conversely, if we have some x,"},{"Start":"03:29.295 ","End":"03:33.815","Text":"since this was x, which is orthogonal to the row space,"},{"Start":"03:33.815 ","End":"03:38.150","Text":"then it means it\u0027s orthogonal to each of the rows and that means"},{"Start":"03:38.150 
","End":"03:43.410","Text":"that it\u0027s a solution to the system of linear equations."},{"Start":"03:43.570 ","End":"03:47.900","Text":"The conclusion is that U,"},{"Start":"03:47.900 ","End":"03:51.530","Text":"the solution space is the orthogonal complement"},{"Start":"03:51.530 ","End":"03:54.665","Text":"of the row space of matrix A."},{"Start":"03:54.665 ","End":"03:56.840","Text":"Because we showed an if and only if,"},{"Start":"03:56.840 ","End":"03:59.540","Text":"that the vector x is a solution,"},{"Start":"03:59.540 ","End":"04:07.840","Text":"if and only if it\u0027s perpendicular or orthogonal to W."},{"Start":"04:08.390 ","End":"04:13.430","Text":"This is a nice simple characterization of the solution space."},{"Start":"04:13.430 ","End":"04:19.550","Text":"It\u0027s simply the orthogonal complement of the row space of the matrix."},{"Start":"04:19.550 ","End":"04:22.980","Text":"Okay, that\u0027s it."}],"ID":10162},{"Watched":false,"Name":"Exercise 9","Duration":"3m 12s","ChapterTopicVideoID":10031,"CourseChapterTopicPlaylistID":7312,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.755","Text":"In this exercise, we have an inner product space V,"},{"Start":"00:04.755 ","End":"00:09.255","Text":"and we have 2 subsets, W_1 and W_2."},{"Start":"00:09.255 ","End":"00:11.220","Text":"They don\u0027t have to be subspaces,"},{"Start":"00:11.220 ","End":"00:17.370","Text":"they are just subsets of V. 
What we have to prove is"},{"Start":"00:17.370 ","End":"00:25.455","Text":"that if W_1 is contained in W_2,"},{"Start":"00:25.455 ","End":"00:31.005","Text":"as far as set theory containment goes,"},{"Start":"00:31.005 ","End":"00:40.100","Text":"then W_2 perp is contained in W_1 perp."},{"Start":"00:40.100 ","End":"00:43.070","Text":"Contained means it\u0027s also a subset."},{"Start":"00:43.070 ","End":"00:46.205","Text":"Notice the reversal of the indices,"},{"Start":"00:46.205 ","End":"00:50.190","Text":"W_1 is a subset of W_2,"},{"Start":"00:50.190 ","End":"00:54.570","Text":"but W_2 perp is a subset of W_1 perp."},{"Start":"00:54.570 ","End":"00:59.010","Text":"Actually, these 2 are subspaces; the perp of any set is always a subspace,"},{"Start":"00:59.010 ","End":"01:02.320","Text":"but that\u0027s okay, it\u0027s beside the point."},{"Start":"01:02.660 ","End":"01:06.845","Text":"This is the given and we have to prove this."},{"Start":"01:06.845 ","End":"01:14.795","Text":"Now to prove set containment we have to show that if a vector is in this,"},{"Start":"01:14.795 ","End":"01:18.510","Text":"then automatically it\u0027s also in this,"},{"Start":"01:18.510 ","End":"01:21.305","Text":"that\u0027s how we show that something is a subset."},{"Start":"01:21.305 ","End":"01:25.250","Text":"Take an element of this and show that it\u0027s also an element of this."},{"Start":"01:25.250 ","End":"01:29.200","Text":"Let\u0027s call it v, from W_2 perp."},{"Start":"01:29.200 ","End":"01:34.610","Text":"This means that v is orthogonal to"},{"Start":"01:34.610 ","End":"01:43.710","Text":"all the vectors in W_2 and orthogonal means that the inner product is 0."},{"Start":"01:43.880 ","End":"01:49.885","Text":"Now because W_1 is a subset of W_2,"},{"Start":"01:49.885 ","End":"01:55.715","Text":"this property of the inner product of v with u being 0,"},{"Start":"01:55.715 ","End":"02:03.065","Text":"if it\u0027s true for all u in W_2 then in particular it\u0027s also true for all u in 
W_1"},{"Start":"02:03.065 ","End":"02:12.060","Text":"because W_1 is possibly smaller, so that\u0027s also true."},{"Start":"02:12.320 ","End":"02:20.340","Text":"This translates to the statement that v is in W_1 perp."},{"Start":"02:20.600 ","End":"02:22.700","Text":"Yeah, that\u0027s the proof,"},{"Start":"02:22.700 ","End":"02:26.420","Text":"but I just want to give you a bit of intuition."},{"Start":"02:26.420 ","End":"02:34.420","Text":"If we take 2 sets and W_2 is larger than W_1,"},{"Start":"02:34.420 ","End":"02:39.860","Text":"then the orthogonal complement is"},{"Start":"02:39.860 ","End":"02:45.995","Text":"only going to get smaller because it has to be orthogonal to more stuff."},{"Start":"02:45.995 ","End":"02:52.505","Text":"If I have something that\u0027s orthogonal to more than W_1, to W_1 and some more,"},{"Start":"02:52.505 ","End":"02:59.575","Text":"then it\u0027s going to get smaller than the set that\u0027s just orthogonal to W_1."},{"Start":"02:59.575 ","End":"03:01.545","Text":"As I increase this,"},{"Start":"03:01.545 ","End":"03:07.270","Text":"the orthogonal complement gets only smaller because there are more restrictions."},{"Start":"03:07.270 ","End":"03:08.900","Text":"I don\u0027t know if that helped or not."},{"Start":"03:08.900 ","End":"03:12.450","Text":"Anyway, here is the proof and we are done."}],"ID":10163},{"Watched":false,"Name":"Exercise 10","Duration":"1m 32s","ChapterTopicVideoID":10032,"CourseChapterTopicPlaylistID":7312,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:06.465","Text":"In this exercise, we have a subspace called W of an inner product space"},{"Start":"00:06.465 ","End":"00:15.735","Text":"V. 
We have to show that W is contained in W perp,"},{"Start":"00:15.735 ","End":"00:23.730","Text":"perp, the orthogonal complement of the orthogonal complement of W. What we want to"},{"Start":"00:23.730 ","End":"00:33.280","Text":"do is show that any vector that\u0027s in W is also in W perp, perp."},{"Start":"00:34.040 ","End":"00:39.220","Text":"Now if I take any u in W perp,"},{"Start":"00:39.220 ","End":"00:42.770","Text":"then the inner product of v with u is going to be 0."},{"Start":"00:42.770 ","End":"00:46.265","Text":"If I have something from W and something from W perp,"},{"Start":"00:46.265 ","End":"00:49.215","Text":"they have to be orthogonal, so this is 0."},{"Start":"00:49.215 ","End":"00:53.645","Text":"I could say that this is true for this particular v,"},{"Start":"00:53.645 ","End":"00:58.980","Text":"but for all u in W perp."},{"Start":"00:59.210 ","End":"01:05.950","Text":"If v is orthogonal to every vector in W perp,"},{"Start":"01:05.950 ","End":"01:15.840","Text":"then v is orthogonal to the whole of W perp,"},{"Start":"01:16.130 ","End":"01:19.970","Text":"and if it\u0027s orthogonal to W perp,"},{"Start":"01:19.970 ","End":"01:25.260","Text":"it must be in its orthogonal complement, W perp, perp."},{"Start":"01:25.580 ","End":"01:28.355","Text":"That\u0027s basically it."},{"Start":"01:28.355 ","End":"01:30.230","Text":"That\u0027s all there is to it."},{"Start":"01:30.230 ","End":"01:32.460","Text":"We are done."}],"ID":10164},{"Watched":false,"Name":"Exercise 11","Duration":"4m 23s","ChapterTopicVideoID":10033,"CourseChapterTopicPlaylistID":7312,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:06.330","Text":"In this exercise, we\u0027re given a subspace called W of an inner product space"},{"Start":"00:06.330 ","End":"00:12.435","Text":"called V and we assume that V has finite 
dimension."},{"Start":"00:12.435 ","End":"00:17.520","Text":"You\u0027ll see later why I need V to be of a finite dimension."},{"Start":"00:17.520 ","End":"00:26.475","Text":"We have to prove that the perp of the perp of W is W itself."},{"Start":"00:26.475 ","End":"00:31.260","Text":"Now in a previous exercise, I think the 1 before this, we already"},{"Start":"00:31.260 ","End":"00:38.420","Text":"showed that W is a subset of W perp perp."},{"Start":"00:38.420 ","End":"00:43.820","Text":"Now let\u0027s try and show that W and W perp perp have"},{"Start":"00:43.820 ","End":"00:48.120","Text":"the same dimension and"},{"Start":"00:48.120 ","End":"00:53.410","Text":"if we do this, then together with this we\u0027ll get our result, as you\u0027ll see."},{"Start":"00:53.410 ","End":"00:58.420","Text":"I\u0027m going to use the orthogonal decomposition theorem twice."},{"Start":"00:58.420 ","End":"01:02.770","Text":"First of all I\u0027m going to apply it to the subspace W of V"},{"Start":"01:02.770 ","End":"01:07.235","Text":"and we get that V is the direct sum of W plus W perp."},{"Start":"01:07.235 ","End":"01:11.645","Text":"But there\u0027s nothing special about W. In general,"},{"Start":"01:11.645 ","End":"01:17.200","Text":"if I have X is a subspace of V,"},{"Start":"01:17.700 ","End":"01:24.570","Text":"written like a subset, but I mean a subspace, then in general V will"},{"Start":"01:24.570 ","End":"01:31.440","Text":"equal X plus direct sum X perp,"},{"Start":"01:31.440 ","End":"01:40.480","Text":"and if I let X equal W then I get this."},{"Start":"01:41.180 ","End":"01:49.740","Text":"But if I let X equal W perp the second time round then I get V equals X"},{"Start":"01:49.740 ","End":"01:57.435","Text":"which is W perp plus X perp and X perp is W perp perp."},{"Start":"01:57.435 ","End":"01:59.325","Text":"Sounds funny, I know, perp perp."},{"Start":"01:59.325 ","End":"02:04.875","Text":"It\u0027s just the way it is. 
Orthogonal complement of the orthogonal complement."},{"Start":"02:04.875 ","End":"02:07.580","Text":"Now when we have a direct sum,"},{"Start":"02:07.580 ","End":"02:14.660","Text":"we also have an equation in dimensions, namely that"},{"Start":"02:14.660 ","End":"02:17.540","Text":"the dimension of V in"},{"Start":"02:17.540 ","End":"02:22.130","Text":"general would be the dimension of X plus the dimension of X perp."},{"Start":"02:22.130 ","End":"02:23.640","Text":"Again I\u0027ll apply it twice,"},{"Start":"02:23.640 ","End":"02:25.335","Text":"once to this and once to this."},{"Start":"02:25.335 ","End":"02:31.085","Text":"So we get the dimension of V is the dimension of W plus the dimension of W perp."},{"Start":"02:31.085 ","End":"02:37.040","Text":"From here we get a similar thing, except instead of W,"},{"Start":"02:37.040 ","End":"02:40.540","Text":"we have W perp and here we have W perp perp."},{"Start":"02:40.540 ","End":"02:43.850","Text":"Here at this stage it was important that the dimension of"},{"Start":"02:43.850 ","End":"02:46.860","Text":"V is finite because when you start dealing with"},{"Start":"02:46.860 ","End":"02:53.140","Text":"infinity for example infinity plus 2 might be the same as infinity plus 5,"},{"Start":"02:53.140 ","End":"02:55.585","Text":"but it doesn\u0027t mean that 2 equals 5."},{"Start":"02:55.585 ","End":"03:02.250","Text":"The arithmetic with infinity doesn\u0027t work so that\u0027s why we require the finite dimension."},{"Start":"03:02.330 ","End":"03:08.030","Text":"Now both the right-hand sides are equal to dim V so I can compare"},{"Start":"03:08.030 ","End":"03:13.865","Text":"the 2 right-hand sides and get this equality."},{"Start":"03:13.865 ","End":"03:21.870","Text":"Also we can subtract the dimension of W perp from both sides"},{"Start":"03:21.870 ","End":"03:29.430","Text":"just like this so we get that this equals this which is what I intended to show earlier."},{"Start":"03:29.430 
","End":"03:32.795","Text":"I stated it as my next goal."},{"Start":"03:32.795 ","End":"03:38.095","Text":"Now, from this and from the fact that we already know that"},{"Start":"03:38.095 ","End":"03:43.755","Text":"W is a subspace of W perp perp from a previous exercise,"},{"Start":"03:43.755 ","End":"03:47.265","Text":"together it implies that these 2 are"},{"Start":"03:47.265 ","End":"03:54.150","Text":"equal. The reason for that, and I\u0027ve written it out,"},{"Start":"03:54.150 ","End":"03:56.900","Text":"is something we have perhaps encountered before,"},{"Start":"03:56.900 ","End":"04:02.090","Text":"but I\u0027ll write it out explicitly that if you have a vector space and again we have"},{"Start":"04:02.090 ","End":"04:08.690","Text":"finite dimension n and we have a subspace with the same dimension n,"},{"Start":"04:08.690 ","End":"04:12.160","Text":"then the subspace has to be the whole space."},{"Start":"04:12.160 ","End":"04:16.535","Text":"A subspace with the same dimension is everything."},{"Start":"04:16.535 ","End":"04:19.730","Text":"From this we conclude that W equals"},{"Start":"04:19.730 ","End":"04:24.810","Text":"W perp perp and that\u0027s what we set out to show. 
We\u0027re done."}],"ID":10165},{"Watched":false,"Name":"Exercise 12","Duration":"7m 13s","ChapterTopicVideoID":10034,"CourseChapterTopicPlaylistID":7312,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:03.510","Text":"In this exercise, we have 2 subspaces,"},{"Start":"00:03.510 ","End":"00:08.100","Text":"W1 and W2 of an inner product space V,"},{"Start":"00:08.100 ","End":"00:11.850","Text":"and we have to prove the following."},{"Start":"00:11.850 ","End":"00:15.810","Text":"Let me explain how to view this."},{"Start":"00:15.810 ","End":"00:21.210","Text":"We only have 2 operations that we know that we can perform on subspaces."},{"Start":"00:21.210 ","End":"00:23.970","Text":"We can add them and we can intersect them,"},{"Start":"00:23.970 ","End":"00:26.190","Text":"and that leaves subspace."},{"Start":"00:26.190 ","End":"00:29.745","Text":"Now turns out that when you take the orthogonal complement,"},{"Start":"00:29.745 ","End":"00:32.715","Text":"the sum becomes an intersection."},{"Start":"00:32.715 ","End":"00:34.410","Text":"I\u0027ll tell you something more,"},{"Start":"00:34.410 ","End":"00:35.925","Text":"the next exercise,"},{"Start":"00:35.925 ","End":"00:37.080","Text":"the reverse is true."},{"Start":"00:37.080 ","End":"00:38.820","Text":"If I put here an intersection,"},{"Start":"00:38.820 ","End":"00:41.310","Text":"here I can put a plus, sum."},{"Start":"00:41.310 ","End":"00:45.110","Text":"Orthogonal complement changes 1 operation to the other operation."},{"Start":"00:45.110 ","End":"00:47.210","Text":"That\u0027s just a way of thinking about it."},{"Start":"00:47.210 ","End":"00:50.090","Text":"Anyway, let\u0027s get to proving it."},{"Start":"00:50.090 ","End":"00:53.450","Text":"Now, a common way of proving that 2 sets are equal"},{"Start":"00:53.450 
","End":"00:56.735","Text":"is showing that 1 is contained in the other,"},{"Start":"00:56.735 ","End":"00:58.190","Text":"and then vice versa."},{"Start":"00:58.190 ","End":"01:05.490","Text":"Let\u0027s start out by showing that this is contained in this, and that\u0027ll be step 1."},{"Start":"01:05.490 ","End":"01:07.740","Text":"Step 2 we\u0027ll do the reverse."},{"Start":"01:07.740 ","End":"01:14.830","Text":"The way we prove this is that we show that anything that is in here must also be in here."},{"Start":"01:14.830 ","End":"01:18.960","Text":"Let\u0027s take a general element, u,"},{"Start":"01:18.960 ","End":"01:28.620","Text":"a vector in W1 plus W2 perp Let me take a vector from W1."},{"Start":"01:28.620 ","End":"01:30.840","Text":"You ask why W1 not W2?"},{"Start":"01:30.840 ","End":"01:32.640","Text":"Well, we\u0027ll come to W2."},{"Start":"01:32.640 ","End":"01:35.625","Text":"For the moment, let\u0027s take it from W1."},{"Start":"01:35.625 ","End":"01:42.365","Text":"Now I claim that v is automatically also in W1 plus W2."},{"Start":"01:42.365 ","End":"01:44.735","Text":"That\u0027s easy to see."},{"Start":"01:44.735 ","End":"01:48.379","Text":"If we write v equals v plus 0,"},{"Start":"01:48.379 ","End":"01:52.490","Text":"then it is the sum of something from W1 and something from W2."},{"Start":"01:52.490 ","End":"01:58.825","Text":"V is in W1 and 0 is in every vector space, it\u0027s in W2."},{"Start":"01:58.825 ","End":"02:03.875","Text":"Now, it follows that u and v are orthogonal."},{"Start":"02:03.875 ","End":"02:06.110","Text":"Their inner product is 0."},{"Start":"02:06.110 ","End":"02:07.835","Text":"Why do I say that?"},{"Start":"02:07.835 ","End":"02:12.855","Text":"Because look, v is in W1 plus W2,"},{"Start":"02:12.855 ","End":"02:15.855","Text":"and u is in the perp of that."},{"Start":"02:15.855 ","End":"02:18.230","Text":"If something\u0027s in the perp of this,"},{"Start":"02:18.230 ","End":"02:26.065","Text":"then the inner 
product of this with anything in here and anything in here will be 0."},{"Start":"02:26.065 ","End":"02:30.140","Text":"This of course is true for any v in W1."},{"Start":"02:30.140 ","End":"02:32.245","Text":"I just picked any odd 1."},{"Start":"02:32.245 ","End":"02:42.295","Text":"U is perpendicular to every vector in W1 and so it belongs to W1 perp,"},{"Start":"02:42.295 ","End":"02:45.755","Text":"the orthogonal complement of W1."},{"Start":"02:45.755 ","End":"02:49.925","Text":"This is an intermediate result that I need."},{"Start":"02:49.925 ","End":"02:54.380","Text":"Now notice that I chose v from W1,"},{"Start":"02:54.380 ","End":"03:03.780","Text":"but I could just as well have chosen it from W2 and the same logic would apply."},{"Start":"03:03.780 ","End":"03:07.865","Text":"I\u0027m not going to repeat myself and I\u0027ll just write in the same way,"},{"Start":"03:07.865 ","End":"03:11.600","Text":"u is also in W2 perp."},{"Start":"03:11.600 ","End":"03:15.000","Text":"Now I\u0027d like to remind you what intersection is."},{"Start":"03:15.000 ","End":"03:17.675","Text":"We have the intersection sign here."},{"Start":"03:17.675 ","End":"03:20.600","Text":"In general, from set theory,"},{"Start":"03:20.600 ","End":"03:24.070","Text":"something belongs to the intersection of 2 sets,"},{"Start":"03:24.070 ","End":"03:29.330","Text":"it means that it belongs to the 1 and it belongs to the other."},{"Start":"03:29.330 ","End":"03:34.165","Text":"This would be the symbol for and in case you don\u0027t know it."},{"Start":"03:34.165 ","End":"03:37.520","Text":"If we apply this logic instead of to A and B,"},{"Start":"03:37.520 ","End":"03:40.175","Text":"to W1 perp and W2 perp,"},{"Start":"03:40.175 ","End":"03:42.260","Text":"we\u0027ve got u belongs to this,"},{"Start":"03:42.260 ","End":"03:43.790","Text":"and u belongs to this,"},{"Start":"03:43.790 ","End":"03:47.190","Text":"so it belongs to their intersection."},{"Start":"03:47.260 
","End":"03:51.170","Text":"I\u0027ll highlight this and notice that we\u0027ve completed step"},{"Start":"03:51.170 ","End":"03:55.310","Text":"1 because we started out with u belonging here,"},{"Start":"03:55.310 ","End":"03:57.980","Text":"and we ended up with u belonging here,"},{"Start":"03:57.980 ","End":"03:59.510","Text":"which is the right-hand side."},{"Start":"03:59.510 ","End":"04:01.505","Text":"Now let\u0027s do the reverse."},{"Start":"04:01.505 ","End":"04:05.930","Text":"This time we have to prove the opposite containment that this intersection"},{"Start":"04:05.930 ","End":"04:10.805","Text":"is contained in the perp of the sum."},{"Start":"04:10.805 ","End":"04:13.165","Text":"Let\u0027s clear some space."},{"Start":"04:13.165 ","End":"04:19.335","Text":"Let\u0027s start with a typical element u in this intersection."},{"Start":"04:19.335 ","End":"04:25.875","Text":"We should end up by showing that u is in this perp of the sum."},{"Start":"04:25.875 ","End":"04:34.095","Text":"Let\u0027s take any v. 
The general v in W1 plus W2."},{"Start":"04:34.095 ","End":"04:36.420","Text":"You\u0027ll see why I want to do this."},{"Start":"04:36.420 ","End":"04:37.905","Text":"It serves my purpose."},{"Start":"04:37.905 ","End":"04:40.660","Text":"We\u0027ll take a v from here,"},{"Start":"04:40.820 ","End":"04:43.875","Text":"and now I\u0027m going to do some reinterpretation."},{"Start":"04:43.875 ","End":"04:45.870","Text":"Each of these, I\u0027m going to re-interpret."},{"Start":"04:45.870 ","End":"04:47.715","Text":"This in the purple."},{"Start":"04:47.715 ","End":"04:52.125","Text":"If u belongs to W1 perp intersect W2 perp,"},{"Start":"04:52.125 ","End":"04:57.720","Text":"it means that u is in W1 perp and u is in W2 perp."},{"Start":"04:57.720 ","End":"04:58.770","Text":"We talked about this."},{"Start":"04:58.770 ","End":"05:02.280","Text":"The intersection means it\u0027s in both."},{"Start":"05:02.280 ","End":"05:06.440","Text":"Now let\u0027s interpret this in this green."},{"Start":"05:06.440 ","End":"05:10.490","Text":"If v is in the sum W1 plus W2,"},{"Start":"05:10.490 ","End":"05:15.830","Text":"that means I can write v as a sum v1 plus v2,"},{"Start":"05:15.830 ","End":"05:22.120","Text":"with v1 being in w1 and v2 being in W2."},{"Start":"05:22.120 ","End":"05:26.300","Text":"What I want to do now is compute the inner product of u with"},{"Start":"05:26.300 ","End":"05:31.250","Text":"v. 
My aim is to show that these 2 are orthogonal, so I should get 0."},{"Start":"05:31.250 ","End":"05:35.090","Text":"Now, because v equals v1 plus v2,"},{"Start":"05:35.090 ","End":"05:39.655","Text":"I can replace this v by v1 plus v2."},{"Start":"05:39.655 ","End":"05:48.580","Text":"Then by linearity, I can split this into inner product of u with v1 plus u with v2."},{"Start":"05:48.580 ","End":"05:51.090","Text":"Now let\u0027s look at the first 1,"},{"Start":"05:51.090 ","End":"05:53.300","Text":"inner product of u with v1."},{"Start":"05:53.300 ","End":"06:02.025","Text":"Notice that u is in W1 perp and v1 is in W1."},{"Start":"06:02.025 ","End":"06:06.195","Text":"If I look at this and this then"},{"Start":"06:06.195 ","End":"06:10.950","Text":"must be that the inner product is 0 because this is in W1 perp,"},{"Start":"06:10.950 ","End":"06:14.665","Text":"so inner product with anything in W1 is 0."},{"Start":"06:14.665 ","End":"06:18.305","Text":"Similarly, if we look at the other 1,"},{"Start":"06:18.305 ","End":"06:22.505","Text":"this time I look at u as being in W2 perp,"},{"Start":"06:22.505 ","End":"06:25.685","Text":"and v2 as being in W2."},{"Start":"06:25.685 ","End":"06:33.345","Text":"Again, u is in the perp of where v2 is."},{"Start":"06:33.345 ","End":"06:38.975","Text":"We get that u with v is 0 plus 0,"},{"Start":"06:38.975 ","End":"06:41.500","Text":"which is of course equal to 0."},{"Start":"06:41.500 ","End":"06:46.110","Text":"Now I choose any v from W1 plus W2, nothing special."},{"Start":"06:46.110 ","End":"06:51.170","Text":"It\u0027s true for all v in W1 plus W2,"},{"Start":"06:51.170 ","End":"06:55.045","Text":"u is orthogonal to all of them,"},{"Start":"06:55.045 ","End":"06:59.180","Text":"which means that u is in the perp of this space."},{"Start":"06:59.180 ","End":"07:02.220","Text":"In other words, in W1 plus W2 perp,"},{"Start":"07:02.220 ","End":"07:05.780","Text":"and that\u0027s what we were trying to 
show."},{"Start":"07:05.780 ","End":"07:07.940","Text":"We wanted to get from here to here,"},{"Start":"07:07.940 ","End":"07:09.590","Text":"from here to here."},{"Start":"07:09.590 ","End":"07:11.300","Text":"That\u0027s the second part,"},{"Start":"07:11.300 ","End":"07:13.980","Text":"and so we are done."}],"ID":10166},{"Watched":false,"Name":"Exercise 13","Duration":"2m 48s","ChapterTopicVideoID":10035,"CourseChapterTopicPlaylistID":7312,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:08.625","Text":"In this exercise, we have an inner product space V and two subspaces W_1 and W_2."},{"Start":"00:08.625 ","End":"00:12.510","Text":"We have to prove that the following holds."},{"Start":"00:12.510 ","End":"00:16.650","Text":"If I take the intersection and then the perp,"},{"Start":"00:16.650 ","End":"00:19.140","Text":"it\u0027s the same as taking the perp of each,"},{"Start":"00:19.140 ","End":"00:21.640","Text":"and then taking the sum."},{"Start":"00:22.430 ","End":"00:25.460","Text":"If you go back and look at the previous exercise,"},{"Start":"00:25.460 ","End":"00:26.750","Text":"it was almost like this,"},{"Start":"00:26.750 ","End":"00:29.330","Text":"except that the plus was here,"},{"Start":"00:29.330 ","End":"00:31.970","Text":"and the intersection was here."},{"Start":"00:31.970 ","End":"00:34.880","Text":"I\u0027ve brought in a couple of formulas."},{"Start":"00:34.880 ","End":"00:37.670","Text":"The first one is what we were just talking about,"},{"Start":"00:37.670 ","End":"00:40.190","Text":"it\u0027s the opposite of this except instead of W_1,"},{"Start":"00:40.190 ","End":"00:41.960","Text":"W_2, I\u0027m calling it A and B."},{"Start":"00:41.960 ","End":"00:44.465","Text":"If we had a plus here we\u0027d have an intersection here."},{"Start":"00:44.465 
","End":"00:49.365","Text":"A plus B perp is A perp intersect with B perp."},{"Start":"00:49.365 ","End":"00:52.245","Text":"Here is another result that we had,"},{"Start":"00:52.245 ","End":"00:54.495","Text":"we didn\u0027t call it C, it doesn\u0027t matter."},{"Start":"00:54.495 ","End":"00:59.435","Text":"Take perp of a perp and you get back to the original space."},{"Start":"00:59.435 ","End":"01:00.860","Text":"The orthogonal complement of"},{"Start":"01:00.860 ","End":"01:04.565","Text":"the orthogonal complement brings us back to the original space."},{"Start":"01:04.565 ","End":"01:07.680","Text":"These are the two results we will be using."},{"Start":"01:07.760 ","End":"01:11.980","Text":"I\u0027m going to start out by using this rule."},{"Start":"01:11.980 ","End":"01:16.310","Text":"I\u0027m going to substitute: instead of A we\u0027ll write W_1 perp,"},{"Start":"01:16.310 ","End":"01:20.320","Text":"and instead of B, W_2 perp."},{"Start":"01:20.320 ","End":"01:24.795","Text":"Straightforward substituting. 
Instead of A,"},{"Start":"01:24.795 ","End":"01:26.040","Text":"we have W_1 perp,"},{"Start":"01:26.040 ","End":"01:27.825","Text":"instead of B, W_2 perp."},{"Start":"01:27.825 ","End":"01:29.745","Text":"Then the perp of that."},{"Start":"01:29.745 ","End":"01:35.160","Text":"Then A perp is W_1 perp but with an extra perp,"},{"Start":"01:35.160 ","End":"01:37.605","Text":"it\u0027s perp perp, sounds funny."},{"Start":"01:37.605 ","End":"01:41.640","Text":"Intersect with W_2 perp and the perp of that,"},{"Start":"01:41.640 ","End":"01:44.250","Text":"and this would be B perp."},{"Start":"01:44.250 ","End":"01:49.790","Text":"Now, I\u0027m going to apply this rule to the right-hand side."},{"Start":"01:49.790 ","End":"01:56.075","Text":"This just becomes W_1 and this one becomes W_2 and all the rest is the same."},{"Start":"01:56.075 ","End":"02:02.540","Text":"What I\u0027m going to do next is just to take the orthogonal complement of both sides,"},{"Start":"02:02.540 ","End":"02:06.490","Text":"apply the perp on the left side and the right side."},{"Start":"02:06.490 ","End":"02:08.865","Text":"To distinguish it I put it in red."},{"Start":"02:08.865 ","End":"02:11.970","Text":"This is the red perp that I applied to the left,"},{"Start":"02:11.970 ","End":"02:15.370","Text":"and here\u0027s the red perp I applied to the right."},{"Start":"02:15.440 ","End":"02:17.730","Text":"Using this rule,"},{"Start":"02:17.730 ","End":"02:20.300","Text":"these two perps are going to cancel each other out,"},{"Start":"02:20.300 ","End":"02:23.150","Text":"and then I\u0027ll be able to drop the brackets."},{"Start":"02:23.150 ","End":"02:25.055","Text":"This is what we get."},{"Start":"02:25.055 ","End":"02:31.060","Text":"W_1 perp plus W_2 perp equals W_1 intersect W_2 perp."},{"Start":"02:31.060 ","End":"02:34.839","Text":"Looking back at what we have to prove,"},{"Start":"02:34.839 ","End":"02:38.555","Text":"they aren\u0027t exactly the same but all you have to do is 
switch"},{"Start":"02:38.555 ","End":"02:42.890","Text":"the left-hand side and the right-hand side of the equality which we can certainly do,"},{"Start":"02:42.890 ","End":"02:44.810","Text":"and then we have reached this,"},{"Start":"02:44.810 ","End":"02:46.010","Text":"and we have proven it,"},{"Start":"02:46.010 ","End":"02:48.300","Text":"so we are done."}],"ID":10167}],"Thumbnail":null,"ID":7312},{"Name":"Orthogonal Sets and Bases","TopicPlaylistFirstVideoID":0,"Duration":null,"Videos":[{"Watched":false,"Name":"Lesson 1 - Orthogonal Sets of Vectors","Duration":"1m 58s","ChapterTopicVideoID":9716,"CourseChapterTopicPlaylistID":7313,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:06.060","Text":"In this clip, we\u0027ll learn about orthogonal and orthonormal sets of vectors."},{"Start":"00:06.060 ","End":"00:08.310","Text":"At least we\u0027ll define the concepts."},{"Start":"00:08.310 ","End":"00:13.950","Text":"We can talk about orthogonal and orthonormal in an inner product space,"},{"Start":"00:13.950 ","End":"00:15.810","Text":"not just a vector space,"},{"Start":"00:15.810 ","End":"00:18.315","Text":"it has to have an inner product also."},{"Start":"00:18.315 ","End":"00:23.850","Text":"Then a set of vectors in such a space is called orthogonal"},{"Start":"00:23.850 ","End":"00:28.350","Text":"if it doesn\u0027t contain 0, the 0 vector,"},{"Start":"00:28.350 ","End":"00:35.040","Text":"and each pair of vectors in S is orthogonal, mutually orthogonal."},{"Start":"00:35.040 ","End":"00:37.080","Text":"I\u0027ll say the same thing again,"},{"Start":"00:37.080 ","End":"00:42.569","Text":"but in mathematical language, it doesn\u0027t contain 0."},{"Start":"00:42.569 ","End":"00:48.605","Text":"But that\u0027s this, 0 is not a member of S and each pair is 
orthogonal."},{"Start":"00:48.605 ","End":"00:52.525","Text":"Meaning if I take u and v from s,"},{"Start":"00:52.525 ","End":"00:56.930","Text":"but they have to be different and u is not equal to v,"},{"Start":"00:56.930 ","End":"01:01.810","Text":"then orthogonal, their inner product is 0."},{"Start":"01:01.880 ","End":"01:04.310","Text":"That\u0027s a matter of convention."},{"Start":"01:04.310 ","End":"01:07.430","Text":"If S just contains 1 non 0 vector,"},{"Start":"01:07.430 ","End":"01:09.425","Text":"it\u0027s also called orthogonal."},{"Start":"01:09.425 ","End":"01:12.439","Text":"There aren\u0027t any pairs to check it against."},{"Start":"01:12.439 ","End":"01:15.110","Text":"It\u0027s what we call vacuously true."},{"Start":"01:15.110 ","End":"01:19.820","Text":"You can\u0027t find any pair that\u0027s not orthogonal because there are no pairs at all."},{"Start":"01:19.820 ","End":"01:24.920","Text":"Anyway. Now the second concept, orthonormal."},{"Start":"01:24.920 ","End":"01:29.450","Text":"You start off with an orthogonal set and it\u0027s called"},{"Start":"01:29.450 ","End":"01:35.035","Text":"orthonormal if each of the members are unit vectors."},{"Start":"01:35.035 ","End":"01:39.785","Text":"Here\u0027s a very important claim or proposition or fact."},{"Start":"01:39.785 ","End":"01:45.925","Text":"An orthogonal set in an inner product space is necessarily linearly independent."},{"Start":"01:45.925 ","End":"01:48.560","Text":"If we have orthogonal vectors automatically,"},{"Start":"01:48.560 ","End":"01:50.810","Text":"they are linearly independent."},{"Start":"01:50.810 ","End":"01:53.180","Text":"It\u0027s actually quite easy to prove this,"},{"Start":"01:53.180 ","End":"01:59.340","Text":"but we won\u0027t bother with that. 
Done with this clip."}],"ID":10168},{"Watched":false,"Name":"Lesson 2 - Orthogonal Bases","Duration":"3m 19s","ChapterTopicVideoID":9717,"CourseChapterTopicPlaylistID":7313,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:02.940","Text":"This is a continuation of the previous clip,"},{"Start":"00:02.940 ","End":"00:06.195","Text":"where we talked about the orthogonal and orthonormal sets."},{"Start":"00:06.195 ","End":"00:10.110","Text":"Now we\u0027re going to talk about orthogonal and orthonormal bases."},{"Start":"00:10.110 ","End":"00:13.200","Text":"Again, it\u0027s an inner product space."},{"Start":"00:13.200 ","End":"00:18.390","Text":"I\u0027ll start with just talking about orthogonal and later we\u0027ll see about orthonormal."},{"Start":"00:18.390 ","End":"00:20.100","Text":"Now a set,"},{"Start":"00:20.100 ","End":"00:23.310","Text":"call it B of vectors in an inner product space,"},{"Start":"00:23.310 ","End":"00:31.080","Text":"V is called an orthogonal basis of V if it\u0027s an orthogonal set and also a basis."},{"Start":"00:31.080 ","End":"00:33.240","Text":"If it\u0027s orthogonal and it\u0027s a basis,"},{"Start":"00:33.240 ","End":"00:35.790","Text":"then it\u0027s an orthogonal basis."},{"Start":"00:35.790 ","End":"00:40.320","Text":"There\u0027s an important proposition that if we have"},{"Start":"00:40.320 ","End":"00:46.100","Text":"the same setup as above in the product space and subset S,"},{"Start":"00:46.100 ","End":"00:50.150","Text":"then S is an orthogonal basis if and only if it\u0027s"},{"Start":"00:50.150 ","End":"00:54.510","Text":"orthogonal and contains exactly n vectors."},{"Start":"00:54.510 ","End":"00:57.994","Text":"Instead of orthogonal and also a basis,"},{"Start":"00:57.994 ","End":"01:02.630","Text":"we say orthogonal and contains exactly n 
vectors,"},{"Start":"01:02.630 ","End":"01:05.100","Text":"where n is the dimension."},{"Start":"01:05.140 ","End":"01:10.040","Text":"Now, it turns out that this is easier to work with."},{"Start":"01:10.040 ","End":"01:14.180","Text":"It\u0027s easier to count n vectors than to show something is a basis."},{"Start":"01:14.180 ","End":"01:18.000","Text":"We can take this proposition as a definition."},{"Start":"01:19.720 ","End":"01:21.770","Text":"For the remaining lessons,"},{"Start":"01:21.770 ","End":"01:26.775","Text":"this will be the definition of an orthogonal basis."},{"Start":"01:26.775 ","End":"01:28.580","Text":"I\u0027ll highlight the important bit,"},{"Start":"01:28.580 ","End":"01:31.640","Text":"orthogonal basis means 2 things."},{"Start":"01:31.640 ","End":"01:36.499","Text":"It\u0027s orthogonal and contains exactly n vectors."},{"Start":"01:36.499 ","End":"01:38.660","Text":"Those 2 things will guarantee this,"},{"Start":"01:38.660 ","End":"01:43.380","Text":"which we take as a definition, where n is the dimension."},{"Start":"01:43.520 ","End":"01:45.750","Text":"Now a few examples."},{"Start":"01:45.750 ","End":"01:48.015","Text":"For the first 1, we\u0027ll take R^4."},{"Start":"01:48.015 ","End":"01:52.360","Text":"Remember, the dimension of R^4 is 4."},{"Start":"01:52.360 ","End":"01:56.420","Text":"It\u0027s an inner product space with the standard inner product,"},{"Start":"01:56.420 ","End":"01:57.830","Text":"which is a dot-product."},{"Start":"01:57.830 ","End":"02:01.340","Text":"If we have a set of 4 orthogonal vectors,"},{"Start":"02:01.340 ","End":"02:02.824","Text":"4 being the dimension,"},{"Start":"02:02.824 ","End":"02:05.905","Text":"then it\u0027s an orthogonal basis."},{"Start":"02:05.905 ","End":"02:11.940","Text":"In the space of 3 by 3 real matrices,"},{"Start":"02:11.940 ","End":"02:16.485","Text":"the dimension of this we know is 3 squared, which is 9."},{"Start":"02:16.485 ","End":"02:20.940","Text":"If we have a set of 9 orthogonal 
matrices,"},{"Start":"02:20.940 ","End":"02:24.690","Text":"then it will be a basis."},{"Start":"02:24.690 ","End":"02:27.110","Text":"Once again, it is an inner product space."},{"Start":"02:27.110 ","End":"02:30.590","Text":"There is a standard inner product for matrices."},{"Start":"02:30.590 ","End":"02:33.320","Text":"I won\u0027t write it, but you can look back,"},{"Start":"02:33.320 ","End":"02:36.870","Text":"and there is an inner product defined on this."},{"Start":"02:36.940 ","End":"02:40.565","Text":"In P4 over R,"},{"Start":"02:40.565 ","End":"02:45.025","Text":"the polynomials of degree less than or equal to 4."},{"Start":"02:45.025 ","End":"02:49.460","Text":"A set of 5 orthogonal polynomials is a basis because remember"},{"Start":"02:49.460 ","End":"02:55.740","Text":"the dimension of this is 4 plus 1, which is 5."},{"Start":"02:56.090 ","End":"02:58.925","Text":"That\u0027s it for orthogonal."},{"Start":"02:58.925 ","End":"03:01.295","Text":"Now what about orthonormal?"},{"Start":"03:01.295 ","End":"03:02.880","Text":"Well, exactly the same."},{"Start":"03:02.880 ","End":"03:08.855","Text":"If I just copy this lesson and replace the word orthogonal with orthonormal everywhere,"},{"Start":"03:08.855 ","End":"03:11.795","Text":"then you\u0027ve got the orthonormal case."},{"Start":"03:11.795 ","End":"03:16.025","Text":"Same thing. Everything works with orthonormal instead of orthogonal."},{"Start":"03:16.025 ","End":"03:19.440","Text":"Okay. 
That\u0027s that then."}],"ID":10169},{"Watched":false,"Name":"Exercise 1","Duration":"2m 59s","ChapterTopicVideoID":9718,"CourseChapterTopicPlaylistID":7313,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:05.445","Text":"In this exercise, we\u0027re working in the inner product space R3,"},{"Start":"00:05.445 ","End":"00:10.470","Text":"and if it doesn\u0027t say otherwise, it\u0027s the usual inner product, the dot product."},{"Start":"00:10.470 ","End":"00:16.650","Text":"We have a set of vectors here and there are 3 parts."},{"Start":"00:16.650 ","End":"00:20.145","Text":"A, show that the set S is orthogonal."},{"Start":"00:20.145 ","End":"00:27.945","Text":"B, to normalize the vectors in S to obtain an orthonormal set."},{"Start":"00:27.945 ","End":"00:35.325","Text":"Last part, to show that S is actually a basis without doing all computations."},{"Start":"00:35.325 ","End":"00:38.350","Text":"We start with A."},{"Start":"00:38.350 ","End":"00:42.785","Text":"Now, this set obviously doesn\u0027t contain 0."},{"Start":"00:42.785 ","End":"00:45.140","Text":"That\u0027s 1 of the conditions for orthogonal."},{"Start":"00:45.140 ","End":"00:47.929","Text":"We also have to show it\u0027s pair-wise orthogonal,"},{"Start":"00:47.929 ","End":"00:55.920","Text":"meaning the inner product of any 2 of these is 0, and there are only 3 combinations: this with this,"},{"Start":"00:55.920 ","End":"00:57.825","Text":"this with this, this with this."},{"Start":"00:57.825 ","End":"01:01.150","Text":"Remember that the inner product is a dot product."},{"Start":"01:01.150 ","End":"01:04.880","Text":"In the first instance, we\u0027ll take this 1 and this 1."},{"Start":"01:04.880 ","End":"01:13.025","Text":"The dot product is 2 times 1 plus 1 times 2 plus minus 4 times 1, it\u0027s 0."},{"Start":"01:13.025 
","End":"01:18.650","Text":"Yes, these 2 are orthogonal and similarly the first and the last,"},{"Start":"01:18.650 ","End":"01:21.305","Text":"and here the second and the third."},{"Start":"01:21.305 ","End":"01:23.000","Text":"The order doesn\u0027t matter."},{"Start":"01:23.000 ","End":"01:25.715","Text":"Otherwise, you would have had more combinations."},{"Start":"01:25.715 ","End":"01:27.905","Text":"That\u0027s the orthogonal part."},{"Start":"01:27.905 ","End":"01:29.740","Text":"Now on to B."},{"Start":"01:29.740 ","End":"01:40.140","Text":"To normalize the vector means to divide it by its norm and then we\u0027ll get a unit vector."},{"Start":"01:41.720 ","End":"01:50.720","Text":"Here it is. On the numerators are the 3 vectors and each 1 divided by its norm."},{"Start":"01:50.720 ","End":"01:53.990","Text":"The norm of this is a square root of 2 square plus 1 square"},{"Start":"01:53.990 ","End":"01:57.410","Text":"plus negative 4 squared and so on for the other 2."},{"Start":"01:57.410 ","End":"02:05.085","Text":"Now we just have to make 3 computations just to simplify a bit."},{"Start":"02:05.085 ","End":"02:10.580","Text":"This would be after normalization and"},{"Start":"02:10.580 ","End":"02:16.040","Text":"really shouldn\u0027t use the same letter S because it\u0027s been reserved."},{"Start":"02:16.040 ","End":"02:19.380","Text":"Let\u0027s call it S hat or something."},{"Start":"02:20.300 ","End":"02:23.595","Text":"This is not the same S as before."},{"Start":"02:23.595 ","End":"02:31.650","Text":"The last part C was to show that S is a basis."},{"Start":"02:32.290 ","End":"02:34.460","Text":"1 of the definitions,"},{"Start":"02:34.460 ","End":"02:40.235","Text":"the second definition said that if it\u0027s orthogonal and contains exactly,"},{"Start":"02:40.235 ","End":"02:44.100","Text":"in this case 3 members,"},{"Start":"02:44.100 ","End":"02:46.275","Text":"3 being the dimension of the space,"},{"Start":"02:46.275 
","End":"02:48.630","Text":"then the result follows."},{"Start":"02:48.630 ","End":"02:54.030","Text":"That was 1 of our definition so S is a basis."},{"Start":"02:55.610 ","End":"02:59.470","Text":"That concludes this exercise."}],"ID":10170},{"Watched":false,"Name":"Exercise 2","Duration":"4m 23s","ChapterTopicVideoID":9719,"CourseChapterTopicPlaylistID":7313,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.530","Text":"In this exercise, we\u0027re given a set of vectors,"},{"Start":"00:04.530 ","End":"00:06.855","Text":"3 of them in R^3."},{"Start":"00:06.855 ","End":"00:12.090","Text":"This being an inner product space with the usual inner product,"},{"Start":"00:12.090 ","End":"00:13.965","Text":"which is the dot product."},{"Start":"00:13.965 ","End":"00:15.510","Text":"If it doesn\u0027t say anything,"},{"Start":"00:15.510 ","End":"00:17.295","Text":"then that\u0027s what it means."},{"Start":"00:17.295 ","End":"00:24.945","Text":"Our task is to write this vector as a linear combination of the members of S,"},{"Start":"00:24.945 ","End":"00:27.030","Text":"but using inner products,"},{"Start":"00:27.030 ","End":"00:29.970","Text":"not using the techniques we learned in"},{"Start":"00:29.970 ","End":"00:34.780","Text":"vector spaces with row operations and echelon form and so on."},{"Start":"00:36.080 ","End":"00:39.680","Text":"In the previous exercise, if you look back,"},{"Start":"00:39.680 ","End":"00:46.460","Text":"we proved that S is orthogonal and this is really important."},{"Start":"00:46.460 ","End":"00:48.035","Text":"I should have said,"},{"Start":"00:48.035 ","End":"00:51.260","Text":"given a hint, see previous exercise."},{"Start":"00:51.260 ","End":"00:54.020","Text":"Let\u0027s say the coordinates are x, y,"},{"Start":"00:54.020 ","End":"00:58.905","Text":"z 
relative to this basis."},{"Start":"00:58.905 ","End":"01:00.960","Text":"We showed that S was a basis."},{"Start":"01:00.960 ","End":"01:06.065","Text":"Here\u0027s how it works and I\u0027ve used colors to help you to follow."},{"Start":"01:06.065 ","End":"01:08.360","Text":"If I want to find x,"},{"Start":"01:08.360 ","End":"01:13.550","Text":"I take both sides of the equation and take"},{"Start":"01:13.550 ","End":"01:19.610","Text":"the inner product or dot product of each side with this vector 2, 1, minus 4."},{"Start":"01:19.610 ","End":"01:22.220","Text":"See here I put dot 2, 1,"},{"Start":"01:22.220 ","End":"01:25.525","Text":"minus 4 and here, and here and here."},{"Start":"01:25.525 ","End":"01:31.010","Text":"The reason that this is good is that because this set is orthogonal,"},{"Start":"01:31.010 ","End":"01:35.735","Text":"any 2 different ones like this times this will be 0,"},{"Start":"01:35.735 ","End":"01:38.830","Text":"this times this will be 0."},{"Start":"01:38.830 ","End":"01:44.140","Text":"Here, I can\u0027t do that because it\u0027s not 2 different vectors, it\u0027s the same vector."},{"Start":"01:44.140 ","End":"01:47.030","Text":"But we can do the calculation if you figure"},{"Start":"01:47.030 ","End":"01:49.340","Text":"out the dot product of these, I won\u0027t go into the detail,"},{"Start":"01:49.340 ","End":"01:51.170","Text":"comes out minus 3."},{"Start":"01:51.170 ","End":"01:56.330","Text":"If you do the dot product of these 2, it comes out to be 21."},{"Start":"01:56.330 ","End":"01:58.430","Text":"I\u0027ll leave it like this for now."},{"Start":"01:58.430 ","End":"02:02.540","Text":"I\u0027ll extract x later though mentally we can see it\u0027s minus 3/21,"},{"Start":"02:02.540 ","End":"02:03.890","Text":"which is minus 1/7."},{"Start":"02:03.890 ","End":"02:07.550","Text":"Anyway, I\u0027d rather move on to the next one to find y,"},{"Start":"02:07.550 ","End":"02:12.065","Text":"we take the dot product of both sides with this vector 
here."},{"Start":"02:12.065 ","End":"02:15.185","Text":"Here is the dot product that we get."},{"Start":"02:15.185 ","End":"02:18.810","Text":"The color should help you to follow what\u0027s going on."},{"Start":"02:18.890 ","End":"02:23.375","Text":"Once again, we can cancel because of the orthogonality."},{"Start":"02:23.375 ","End":"02:24.820","Text":"Like these 2 are different,"},{"Start":"02:24.820 ","End":"02:26.570","Text":"the dot product is 0,"},{"Start":"02:26.570 ","End":"02:28.550","Text":"so this term disappears."},{"Start":"02:28.550 ","End":"02:30.140","Text":"These 2 are different,"},{"Start":"02:30.140 ","End":"02:31.925","Text":"so this term disappears."},{"Start":"02:31.925 ","End":"02:37.350","Text":"I just need to compute this term and this."},{"Start":"02:37.350 ","End":"02:43.715","Text":"This dot product, I\u0027ll leave you to do the calculation to check, it\u0027s 18."},{"Start":"02:43.715 ","End":"02:47.509","Text":"This dot product comes out 6,"},{"Start":"02:47.509 ","End":"02:49.820","Text":"1 squared plus 2 squared plus 1 squared,"},{"Start":"02:49.820 ","End":"02:51.815","Text":"1 plus 4 plus 1, 6."},{"Start":"02:51.815 ","End":"02:54.540","Text":"I\u0027ll extract y later."},{"Start":"02:54.590 ","End":"02:58.385","Text":"Now, the third one, we want to go for z."},{"Start":"02:58.385 ","End":"03:05.340","Text":"We\u0027re going to multiply both sides by this vector here,"},{"Start":"03:05.340 ","End":"03:06.990","Text":"the 3, minus 2, 1."},{"Start":"03:06.990 ","End":"03:10.510","Text":"There it is, everywhere."},{"Start":"03:11.060 ","End":"03:13.520","Text":"Once again we can cancel."},{"Start":"03:13.520 ","End":"03:18.120","Text":"These 2 are different and it\u0027s orthogonal so this is 0."},{"Start":"03:18.120 ","End":"03:22.930","Text":"This is not, but this with this is 0."},{"Start":"03:23.810 ","End":"03:28.550","Text":"This dot product computed comes out 48."},{"Start":"03:28.550 ","End":"03:33.835","Text":"This with this 
comes out 9 plus 4 plus 1, 14."},{"Start":"03:33.835 ","End":"03:36.780","Text":"We actually have found x,"},{"Start":"03:36.780 ","End":"03:38.980","Text":"y, and z."},{"Start":"03:39.020 ","End":"03:44.805","Text":"Look at these where we almost have x, y, and z."},{"Start":"03:44.805 ","End":"03:46.965","Text":"Now, let\u0027s extract them."},{"Start":"03:46.965 ","End":"03:50.930","Text":"x comes out minus 3 over 21, which is minus 1/7."},{"Start":"03:50.930 ","End":"03:52.910","Text":"y comes out 18 over 6,"},{"Start":"03:52.910 ","End":"03:55.040","Text":"which is 3, and z,"},{"Start":"03:55.040 ","End":"04:01.540","Text":"48 over 14, which we can cancel by 2, 24 over 7."},{"Start":"04:01.540 ","End":"04:11.865","Text":"We can use these 3 numbers to get this as a linear combination of these 3."},{"Start":"04:11.865 ","End":"04:13.095","Text":"Just to copy them,"},{"Start":"04:13.095 ","End":"04:15.140","Text":"minus 1/7 was with this one,"},{"Start":"04:15.140 ","End":"04:16.550","Text":"3 with this one,"},{"Start":"04:16.550 ","End":"04:19.830","Text":"and 24 over 7 with that one."},{"Start":"04:20.380 ","End":"04:23.580","Text":"That\u0027s it. 
We\u0027re done."}],"ID":10171},{"Watched":false,"Name":"Exercise 3","Duration":"4m 35s","ChapterTopicVideoID":10022,"CourseChapterTopicPlaylistID":7313,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:05.715","Text":"This exercise is actually a generalization of the previous exercise."},{"Start":"00:05.715 ","End":"00:08.850","Text":"If you look back, it\u0027s exactly the same,"},{"Start":"00:08.850 ","End":"00:13.720","Text":"we got this set S of vectors in R^3."},{"Start":"00:13.940 ","End":"00:21.190","Text":"We showed even earlier that it is an orthogonal basis of R^3."},{"Start":"00:21.190 ","End":"00:23.040","Text":"But in the previous exercise,"},{"Start":"00:23.040 ","End":"00:26.520","Text":"we had numbers here like 13 minus 1,"},{"Start":"00:26.520 ","End":"00:30.210","Text":"7 and this time we\u0027re doing it at the general a, b,"},{"Start":"00:30.210 ","End":"00:40.600","Text":"c. 
The same task to write this vector as a linear combination,"},{"Start":"00:41.480 ","End":"00:44.750","Text":"that\u0027s what we mean by coordinate vector."},{"Start":"00:44.750 ","End":"00:50.025","Text":"To write this linear combination of these."},{"Start":"00:50.025 ","End":"00:54.070","Text":"I\u0027m going to use inner products and orthogonality,"},{"Start":"00:54.070 ","End":"00:57.720","Text":"not all the techniques we learned in vector spaces."},{"Start":"00:58.030 ","End":"01:00.920","Text":"Let\u0027s get started."},{"Start":"01:00.920 ","End":"01:05.580","Text":"It might help if you review the previous exercise"},{"Start":"01:05.580 ","End":"01:10.774","Text":"first because this is similar but a little bit more abstract."},{"Start":"01:10.774 ","End":"01:13.550","Text":"There we had numbers 13 minus 1,"},{"Start":"01:13.550 ","End":"01:15.320","Text":"7, and here we got a, b,"},{"Start":"01:15.320 ","End":"01:20.930","Text":"c. So we start out by letting the coordinates be x,"},{"Start":"01:20.930 ","End":"01:25.310","Text":"y, z, and we write this,"},{"Start":"01:25.310 ","End":"01:27.740","Text":"this is the linear combination."},{"Start":"01:27.740 ","End":"01:31.530","Text":"Our task is to find x, y, and z."},{"Start":"01:31.690 ","End":"01:39.580","Text":"The way to find x is to take the dot product of both sides with this vector here,"},{"Start":"01:39.580 ","End":"01:42.420","Text":"and the coloring should help."},{"Start":"01:42.420 ","End":"01:49.680","Text":"The idea is that because we know that S is an orthogonal set,"},{"Start":"01:49.680 ","End":"01:51.520","Text":"it\u0027s a basis even."},{"Start":"01:51.520 ","End":"01:54.005","Text":"We know that this dot,"},{"Start":"01:54.005 ","End":"01:59.405","Text":"this is 0 and this dot this is 0."},{"Start":"01:59.405 ","End":"02:02.105","Text":"Not here because it\u0027s the same,"},{"Start":"02:02.105 ","End":"02:03.890","Text":"only every 2 different ones,"},{"Start":"02:03.890 
","End":"02:06.110","Text":"If you dot-product them, you get 0."},{"Start":"02:06.110 ","End":"02:11.865","Text":"But from here, we can just do the dot-product,"},{"Start":"02:11.865 ","End":"02:14.700","Text":"and we have an expression in a,"},{"Start":"02:14.700 ","End":"02:24.310","Text":"b, c here that\u0027s equal to this dot product, which comes out 21, 4 plus 1 plus 16."},{"Start":"02:24.350 ","End":"02:28.650","Text":"We have x in terms of a, b, and c,"},{"Start":"02:28.650 ","End":"02:32.180","Text":"just divide both sides by 21 and switch sides,"},{"Start":"02:32.180 ","End":"02:33.800","Text":"so this is x."},{"Start":"02:33.800 ","End":"02:38.340","Text":"Now let\u0027s go on to the next 1, let\u0027s do y."},{"Start":"02:38.360 ","End":"02:41.355","Text":"For that, we want to multiply,"},{"Start":"02:41.355 ","End":"02:42.440","Text":"well, not really multiply,"},{"Start":"02:42.440 ","End":"02:45.650","Text":"take the dot product of both sides with 1,"},{"Start":"02:45.650 ","End":"02:49.405","Text":"2, 1, and here we are."},{"Start":"02:49.405 ","End":"02:52.790","Text":"Again using the orthogonality,"},{"Start":"02:52.790 ","End":"02:55.475","Text":"the dot product of 2 different ones is 0."},{"Start":"02:55.475 ","End":"02:57.455","Text":"This comes out 0,"},{"Start":"02:57.455 ","End":"03:01.925","Text":"not so here, but here, it is also 0."},{"Start":"03:01.925 ","End":"03:05.520","Text":"Now let\u0027s expand a times 1,"},{"Start":"03:05.520 ","End":"03:08.645","Text":"b times 2, c times 1 add them,"},{"Start":"03:08.645 ","End":"03:14.840","Text":"comes out to be this dot with this is 1 plus 4 plus 1 is 6,"},{"Start":"03:14.840 ","End":"03:17.680","Text":"so we get y times 6."},{"Start":"03:17.680 ","End":"03:21.900","Text":"That gives us y and now we just need z,"},{"Start":"03:21.900 ","End":"03:29.070","Text":"so we\u0027re going to take this and dot product both sides with the 3, minus 2, 1,"},{"Start":"03:29.070 ","End":"03:31.600","Text":"let\u0027s scroll 
it off."},{"Start":"03:32.300 ","End":"03:35.145","Text":"But this is what we get,"},{"Start":"03:35.145 ","End":"03:37.620","Text":"and this time 0 here,"},{"Start":"03:37.620 ","End":"03:41.260","Text":"0 here just leaves this."},{"Start":"03:41.450 ","End":"03:43.620","Text":"This with this is 3a"},{"Start":"03:43.620 ","End":"03:52.870","Text":"minus 2b plus c. This dot product is 9 plus 4 plus 1 is 14 with the z."},{"Start":"03:52.910 ","End":"03:55.170","Text":"Now we\u0027ve got z,"},{"Start":"03:55.170 ","End":"03:56.670","Text":"also we have x, y,"},{"Start":"03:56.670 ","End":"04:02.215","Text":"and z and then we just have to plug it in to that linear combination."},{"Start":"04:02.215 ","End":"04:05.165","Text":"We see that the a, b, c is,"},{"Start":"04:05.165 ","End":"04:09.234","Text":"this is the x part times the first vector,"},{"Start":"04:09.234 ","End":"04:12.355","Text":"and then what we called y times the second,"},{"Start":"04:12.355 ","End":"04:14.640","Text":"and what we called z times the third."},{"Start":"04:14.640 ","End":"04:16.340","Text":"But it\u0027s all just in terms of a,"},{"Start":"04:16.340 ","End":"04:18.130","Text":"b, and c now."},{"Start":"04:18.130 ","End":"04:20.385","Text":"You give me any a, b, c,"},{"Start":"04:20.385 ","End":"04:23.175","Text":"I do the 3 computations and get these numbers,"},{"Start":"04:23.175 ","End":"04:28.610","Text":"and that gives me the coordinates of how to write it"},{"Start":"04:28.610 ","End":"04:35.080","Text":"as the linear combination of these 3 basis vectors. 
We\u0027re done."}],"ID":10172},{"Watched":false,"Name":"Exercise 4","Duration":"5m 15s","ChapterTopicVideoID":10023,"CourseChapterTopicPlaylistID":7313,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:03.780","Text":"This exercise is a little bit abstract."},{"Start":"00:03.780 ","End":"00:08.115","Text":"Suppose we have a basis,"},{"Start":"00:08.115 ","End":"00:10.650","Text":"an orthogonal basis, u_1,"},{"Start":"00:10.650 ","End":"00:16.680","Text":"u_2 up to u_n in some inner product space V,"},{"Start":"00:16.680 ","End":"00:20.025","Text":"we have to show that for every vector in V,"},{"Start":"00:20.025 ","End":"00:23.115","Text":"the following formula holds."},{"Start":"00:23.115 ","End":"00:27.944","Text":"What this says is how to break up V"},{"Start":"00:27.944 ","End":"00:36.100","Text":"into something times u_1 plus something times u_2 plus and so on up something times u_n."},{"Start":"00:37.420 ","End":"00:43.490","Text":"If you label each one of these coefficients,"},{"Start":"00:43.490 ","End":"00:45.380","Text":"call it say a_i,"},{"Start":"00:45.380 ","End":"00:50.385","Text":"which is v inner product with the u_i over u_i with you u_i,"},{"Start":"00:50.385 ","End":"00:52.840","Text":"I mean that\u0027s the pattern."},{"Start":"00:52.840 ","End":"00:58.355","Text":"Then these coefficients are called Fourier coefficients."},{"Start":"00:58.355 ","End":"01:04.680","Text":"They\u0027re also called the components of v relative to the basis."},{"Start":"01:06.620 ","End":"01:10.190","Text":"There is a short way to show this."},{"Start":"01:10.190 ","End":"01:15.860","Text":"I\u0027m going to take a slightly longer approach that might be more understandable."},{"Start":"01:15.860 ","End":"01:20.490","Text":"What I\u0027m going to do is show that the coefficient for u_1 is 
this,"},{"Start":"01:20.490 ","End":"01:28.630","Text":"for u_2 it\u0027s this and for u_n it\u0027s this, rather than doing a general term i."},{"Start":"01:28.970 ","End":"01:33.020","Text":"More work but I think it\u0027s easier to understand."},{"Start":"01:33.020 ","End":"01:35.420","Text":"Because B is a basis,"},{"Start":"01:35.420 ","End":"01:42.050","Text":"any vector v can be written as a linear combination of the members of the basis."},{"Start":"01:42.050 ","End":"01:44.570","Text":"We know that v is of this form,"},{"Start":"01:44.570 ","End":"01:46.880","Text":"we just don\u0027t know the coefficients a_1,"},{"Start":"01:46.880 ","End":"01:49.415","Text":"a_2 up to a_n."},{"Start":"01:49.415 ","End":"01:53.170","Text":"Now if we want to find a_1,"},{"Start":"01:53.170 ","End":"01:59.185","Text":"what we\u0027ll do is take the inner product of both sides with u_1."},{"Start":"01:59.185 ","End":"02:08.080","Text":"Here is v inner product u_1 and here\u0027s all this inner product with u_1."},{"Start":"02:08.080 ","End":"02:12.845","Text":"On the right-hand side, we can use linearity to break this inner product"},{"Start":"02:12.845 ","End":"02:18.750","Text":"into n pieces, like so."},{"Start":"02:19.630 ","End":"02:29.790","Text":"Again, using linearity or actually it\u0027s called homogeneity."},{"Start":"02:31.810 ","End":"02:35.405","Text":"Because of the orthogonality,"},{"Start":"02:35.405 ","End":"02:42.115","Text":"the inner product of any two different u\u0027s will be 0."},{"Start":"02:42.115 ","End":"02:45.065","Text":"Everything will be 0 except the first one,"},{"Start":"02:45.065 ","End":"02:47.045","Text":"because it\u0027s u_1 with u_1."},{"Start":"02:47.045 ","End":"02:53.055","Text":"But anything not u_1 with u_1 will be 0."},{"Start":"02:53.055 ","End":"02:57.330","Text":"We\u0027re just left with this equals this and then we can extract"},{"Start":"02:57.330 ","End":"03:01.800","Text":"a_1 by just dividing by 
this."},{"Start":"03:01.800 ","End":"03:09.625","Text":"That\u0027s what a_1 equals and next we\u0027ll do the same thing to find a_2."},{"Start":"03:09.625 ","End":"03:12.365","Text":"Make some space here."},{"Start":"03:12.365 ","End":"03:17.630","Text":"For that, we take the inner product of both sides with u_2."},{"Start":"03:17.630 ","End":"03:21.800","Text":"Again, we break this up with linearity,"},{"Start":"03:21.800 ","End":"03:27.480","Text":"and then the homogeneity to pull the constants out."},{"Start":"03:28.220 ","End":"03:33.150","Text":"Then this is 0 by the orthogonality,"},{"Start":"03:33.150 ","End":"03:37.290","Text":"and so is this because each of the u\u0027s is different,"},{"Start":"03:37.290 ","End":"03:41.895","Text":"except for this one which is not 0."},{"Start":"03:41.895 ","End":"03:47.520","Text":"Now we can extract a_2 and it comes out like this."},{"Start":"03:47.520 ","End":"03:50.840","Text":"We have a_1 and we have a_2 and so far if you check,"},{"Start":"03:50.840 ","End":"03:54.440","Text":"it\u0027s what we\u0027re supposed to get with what we have to prove."},{"Start":"03:54.440 ","End":"03:56.450","Text":"We\u0027ll do a third one,"},{"Start":"03:56.450 ","End":"03:59.020","Text":"which is the a_n,"},{"Start":"03:59.020 ","End":"04:03.320","Text":"so I jump back here to this v equals"},{"Start":"04:03.320 ","End":"04:11.030","Text":"the sum of coefficients with the members of the basis."},{"Start":"04:11.030 ","End":"04:17.430","Text":"This time we take the inner product of both sides with u_n."},{"Start":"04:17.500 ","End":"04:20.375","Text":"This is what we get."},{"Start":"04:20.375 ","End":"04:25.100","Text":"We\u0027re going to use again linearity and homogeneity to"},{"Start":"04:25.100 ","End":"04:29.450","Text":"get this and then this and then we note"},{"Start":"04:29.450 ","End":"04:39.030","Text":"that this is 0 and this is 0 because of the orthogonality then last one which isn\u0027t."},{"Start":"04:39.030 
","End":"04:44.790","Text":"Now we can extract a_n and this is what we get."},{"Start":"04:44.790 ","End":"04:49.384","Text":"Everything is just like what we were required to prove."},{"Start":"04:49.384 ","End":"04:54.440","Text":"I just wanted to remark, instead of doing 3 checks on a_1 and a_2,"},{"Start":"04:54.440 ","End":"04:59.520","Text":"and a_n, we could have just done it once for a general a_i."},{"Start":"04:59.620 ","End":"05:03.235","Text":"The same proof for a_n,"},{"Start":"05:03.235 ","End":"05:06.005","Text":"if you just replace n by i,"},{"Start":"05:06.005 ","End":"05:07.460","Text":"you\u0027ll get it for any i,"},{"Start":"05:07.460 ","End":"05:11.120","Text":"which could be 1-n or anything in between."},{"Start":"05:11.120 ","End":"05:15.570","Text":"Anyway, we have proven it and we are done."}],"ID":10173},{"Watched":false,"Name":"Exercise 5","Duration":"7m 17s","ChapterTopicVideoID":10024,"CourseChapterTopicPlaylistID":7313,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:07.440","Text":"In this exercise, our inner product space is the continuous functions on"},{"Start":"00:07.440 ","End":"00:16.380","Text":"the closed interval from 0 to Pi and the usual integral inner product."},{"Start":"00:16.380 ","End":"00:19.545","Text":"We consider the set S,"},{"Start":"00:19.545 ","End":"00:23.460","Text":"it\u0027s actually an infinite set, cosine x,"},{"Start":"00:23.460 ","End":"00:31.480","Text":"cosine 2x, cosine 3x, cosine of any positive whole number times x."},{"Start":"00:32.660 ","End":"00:39.910","Text":"One of the questions is, is S orthogonal?"},{"Start":"00:39.910 ","End":"00:44.185","Text":"If so, maybe it\u0027s orthonormal."},{"Start":"00:44.185 ","End":"00:47.105","Text":"But if it\u0027s orthogonal, not orthonormal,"},{"Start":"00:47.105 
","End":"00:50.670","Text":"then we have to normalize it."},{"Start":"00:52.760 ","End":"00:57.945","Text":"First we note that S doesn\u0027t contain zero,"},{"Start":"00:57.945 ","End":"01:03.520","Text":"which would be the zero function in this case."},{"Start":"01:03.520 ","End":"01:05.780","Text":"It\u0027s important not to forget this."},{"Start":"01:05.780 ","End":"01:07.340","Text":"If a set contains 0,"},{"Start":"01:07.340 ","End":"01:09.455","Text":"then it\u0027s not orthogonal."},{"Start":"01:09.455 ","End":"01:15.810","Text":"Next, we have to show that it\u0027s pairwise orthogonal."},{"Start":"01:15.820 ","End":"01:21.080","Text":"What that means is that if we take any 2 positive integers,"},{"Start":"01:21.080 ","End":"01:24.260","Text":"k and l, but they have to be different,"},{"Start":"01:24.260 ","End":"01:30.990","Text":"then cosine of kx is orthogonal to cosine lx,"},{"Start":"01:30.990 ","End":"01:34.095","Text":"meaning their inner product is zero."},{"Start":"01:34.095 ","End":"01:37.200","Text":"Remember the integral inner product,"},{"Start":"01:37.200 ","End":"01:39.130","Text":"if not, I\u0027ll remind you."},{"Start":"01:39.130 ","End":"01:44.155","Text":"Start with the left-hand side here and see if we can get it to equal 0."},{"Start":"01:44.155 ","End":"01:48.740","Text":"This here is the integral inner product for this particular interval 0"},{"Start":"01:48.740 ","End":"01:55.825","Text":"Pi that expresses itself with the numbers here from 0 to Pi."},{"Start":"01:55.825 ","End":"02:00.790","Text":"We\u0027ll have to use some trigonometrical formulas."},{"Start":"02:00.790 ","End":"02:03.000","Text":"Perhaps I should have written it,"},{"Start":"02:03.000 ","End":"02:12.645","Text":"cosine of Alpha times cosine of Beta is 1/2 of"},{"Start":"02:12.645 ","End":"02:16.575","Text":"cosine Alpha plus Beta"},{"Start":"02:16.575 ","End":"02:22.920","Text":"plus cosine of Alpha minus Beta."},{"Start":"02:22.920 ","End":"02:28.520","Text":"If you apply 
that with Alpha is kx and Beta is lx,"},{"Start":"02:28.520 ","End":"02:31.370","Text":"then this is what we get."},{"Start":"02:31.370 ","End":"02:34.610","Text":"We pull the half out in front of the integral."},{"Start":"02:34.610 ","End":"02:38.450","Text":"The integral of cosine generally is sine,"},{"Start":"02:38.450 ","End":"02:40.820","Text":"but here it\u0027s not x,"},{"Start":"02:40.820 ","End":"02:43.100","Text":"it\u0027s some constant times x,"},{"Start":"02:43.100 ","End":"02:46.205","Text":"so we have to divide by the internal derivative."},{"Start":"02:46.205 ","End":"02:50.555","Text":"Anyway, this is the indefinite integral and then"},{"Start":"02:50.555 ","End":"02:56.100","Text":"we have to make it a definite integral by plugging in the 2 limits and subtracting."},{"Start":"02:56.120 ","End":"03:00.560","Text":"Now, when we plug in the lower limit, x equals 0,"},{"Start":"03:00.560 ","End":"03:03.710","Text":"because sine 0 is 0,"},{"Start":"03:03.710 ","End":"03:05.690","Text":"this doesn\u0027t give us anything,"},{"Start":"03:05.690 ","End":"03:08.665","Text":"so we just need to plug in the Pi."},{"Start":"03:08.665 ","End":"03:14.430","Text":"Instead of x, I\u0027ll write Pi and this is what we get and this turns out 0."},{"Start":"03:14.430 ","End":"03:24.810","Text":"The reason is, is that the sine of any whole number times Pi is equal to 0."},{"Start":"03:24.810 ","End":"03:29.135","Text":"If I let n equal k plus l or k minus l, it doesn\u0027t matter."},{"Start":"03:29.135 ","End":"03:32.060","Text":"The sine of any multiple of Pi is 0."},{"Start":"03:32.060 ","End":"03:37.845","Text":"It says 0 minus 0, which is 0."},{"Start":"03:37.845 ","End":"03:42.570","Text":"Those 2 functions are orthogonal."},{"Start":"03:42.570 ","End":"03:48.715","Text":"We have pairwise orthogonality and so the set is orthogonal."},{"Start":"03:48.715 ","End":"03:54.215","Text":"Now, let\u0027s see if by any chance it\u0027s orthonormal."},{"Start":"03:54.215 
","End":"04:00.380","Text":"Orthonormal means that each member as a function,"},{"Start":"04:00.380 ","End":"04:02.915","Text":"each f in this set S,"},{"Start":"04:02.915 ","End":"04:06.120","Text":"has to have a norm of 1."},{"Start":"04:06.120 ","End":"04:09.770","Text":"Although it\u0027s easier to show that the norm squared is 1."},{"Start":"04:09.770 ","End":"04:14.665","Text":"The norm squared is just an easier formula, and it\u0027s the same thing."},{"Start":"04:14.665 ","End":"04:19.640","Text":"Let\u0027s start the computation of the norm of f squared."},{"Start":"04:19.640 ","End":"04:23.800","Text":"It\u0027s just the inner product of f with itself,"},{"Start":"04:23.800 ","End":"04:27.255","Text":"and f has to be of the form cosine kx,"},{"Start":"04:27.255 ","End":"04:30.015","Text":"that\u0027s all that we have in our set."},{"Start":"04:30.015 ","End":"04:32.909","Text":"Let\u0027s see what this is."},{"Start":"04:32.909 ","End":"04:35.620","Text":"Depends on k possibly,"},{"Start":"04:35.620 ","End":"04:41.600","Text":"apply the integral inner product so we get this integral."},{"Start":"04:41.600 ","End":"04:48.950","Text":"Then we borrow another formula from trigonometry for cosine squared Alpha."},{"Start":"04:48.950 ","End":"04:55.160","Text":"We\u0027ll use this here with Alpha equaling kx."},{"Start":"04:55.160 ","End":"05:01.380","Text":"We get this after bringing the half out in front of the integral."},{"Start":"05:01.430 ","End":"05:04.845","Text":"It\u0027s not a difficult integral."},{"Start":"05:04.845 ","End":"05:08.925","Text":"I\u0027m assuming you\u0027ve studied trigonometric integrals."},{"Start":"05:08.925 ","End":"05:18.689","Text":"The integral of 1 is x and the integral of cosine 2kx would be just sine 2kx."},{"Start":"05:18.689 ","End":"05:20.630","Text":"But because of the inner derivative,"},{"Start":"05:20.630 ","End":"05:23.050","Text":"we have to divide by 2k,"},{"Start":"05:23.050 ","End":"05:24.970","Text":"and because 
it\u0027s a definite integral,"},{"Start":"05:24.970 ","End":"05:28.820","Text":"we have to plug in 0 and plug in Pi and then subtract."},{"Start":"05:28.820 ","End":"05:32.075","Text":"Now, if we plug in 0,"},{"Start":"05:32.075 ","End":"05:37.550","Text":"x is 0 and sine of 0 is also 0,"},{"Start":"05:37.550 ","End":"05:41.625","Text":"and if x is Pi,"},{"Start":"05:41.625 ","End":"05:44.070","Text":"then here we get Pi."},{"Start":"05:44.070 ","End":"05:52.820","Text":"But once again, sine of Pi times anything is 0. We had that before."},{"Start":"05:52.820 ","End":"05:56.195","Text":"Sine of n Pi is always 0."},{"Start":"05:56.195 ","End":"05:59.570","Text":"We\u0027re just left with the Pi here together with the half."},{"Start":"05:59.570 ","End":"06:02.190","Text":"That\u0027s a 1/2 Pi."},{"Start":"06:02.750 ","End":"06:05.835","Text":"Just write it as a decimal,"},{"Start":"06:05.835 ","End":"06:08.670","Text":"a half is 0.5,"},{"Start":"06:08.670 ","End":"06:11.475","Text":"and that was the norm of f squared."},{"Start":"06:11.475 ","End":"06:15.320","Text":"The norm of f is the square root of this"},{"Start":"06:15.320 ","End":"06:20.285","Text":"and doesn\u0027t depend on k. 
We always get the same thing,"},{"Start":"06:20.285 ","End":"06:24.120","Text":"but it\u0027s not equal to 1."},{"Start":"06:25.130 ","End":"06:29.655","Text":"Even if we had one function not equal to 1,"},{"Start":"06:29.655 ","End":"06:32.890","Text":"it wouldn\u0027t be orthonormal, the set."},{"Start":"06:32.980 ","End":"06:38.240","Text":"Now in the question we were asked to normalize it,"},{"Start":"06:38.240 ","End":"06:39.710","Text":"that if it was orthogonal,"},{"Start":"06:39.710 ","End":"06:42.650","Text":"but not orthonormal to normalize the set."},{"Start":"06:42.650 ","End":"06:51.754","Text":"That means dividing each vector in this case function by its norm."},{"Start":"06:51.754 ","End":"06:58.895","Text":"We divide each of these by the square root of 0.5 Pi."},{"Start":"06:58.895 ","End":"07:00.980","Text":"This is orthonormal."},{"Start":"07:00.980 ","End":"07:02.780","Text":"I just gave it a name."},{"Start":"07:02.780 ","End":"07:04.250","Text":"I can\u0027t use S again,"},{"Start":"07:04.250 ","End":"07:06.740","Text":"so I called it S with a hat on."},{"Start":"07:06.740 ","End":"07:11.105","Text":"That just reminds me that unit vectors have a hat on,"},{"Start":"07:11.105 ","End":"07:13.740","Text":"or any other letter."},{"Start":"07:15.130 ","End":"07:18.840","Text":"That\u0027s it. 
We are done."}],"ID":10175},{"Watched":false,"Name":"Exercise 6","Duration":"11m 18s","ChapterTopicVideoID":10019,"CourseChapterTopicPlaylistID":7313,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:09.015","Text":"In this exercise, we have the space of continuous functions on the interval from 0-2 Pi."},{"Start":"00:09.015 ","End":"00:13.785","Text":"The inner product is the standard integral in a product."},{"Start":"00:13.785 ","End":"00:18.930","Text":"Now, we consider the following set of functions."},{"Start":"00:18.930 ","End":"00:23.700","Text":"Notice that there\u0027s a 1 and then there are cosines, cosine x,"},{"Start":"00:23.700 ","End":"00:25.845","Text":"cosine 2x, cosine 3x,"},{"Start":"00:25.845 ","End":"00:27.450","Text":"and there are sines, sine x,"},{"Start":"00:27.450 ","End":"00:29.685","Text":"sine 2x, sine 3x."},{"Start":"00:29.685 ","End":"00:35.340","Text":"Really, there\u0027s 3 kinds,"},{"Start":"00:35.340 ","End":"00:38.255","Text":"twice infinity plus 1 if you like."},{"Start":"00:38.255 ","End":"00:41.570","Text":"The question is, is S orthogonal?"},{"Start":"00:41.570 ","End":"00:44.530","Text":"If so, is it orthonormal?"},{"Start":"00:44.530 ","End":"00:47.615","Text":"If it\u0027s orthogonal but not orthonormal,"},{"Start":"00:47.615 ","End":"00:51.425","Text":"then we should normalize it."},{"Start":"00:51.425 ","End":"00:59.165","Text":"The first check for orthogonal is that the set should not contain 0 and the 0 function,"},{"Start":"00:59.165 ","End":"01:01.835","Text":"and indeed, it doesn\u0027t so that\u0027s okay."},{"Start":"01:01.835 ","End":"01:07.235","Text":"Next, we have to check pairwise orthogonality."},{"Start":"01:07.235 ","End":"01:10.610","Text":"But as many combinations of pairs,"},{"Start":"01:10.610 ","End":"01:12.980","Text":"I can 
take a cosine with a cosine,"},{"Start":"01:12.980 ","End":"01:14.270","Text":"a sine with a sine,"},{"Start":"01:14.270 ","End":"01:15.815","Text":"a sine with a cosine,"},{"Start":"01:15.815 ","End":"01:17.405","Text":"1 with a cosine,"},{"Start":"01:17.405 ","End":"01:22.340","Text":"1 with a sine, there are really 5 categories."},{"Start":"01:22.340 ","End":"01:24.530","Text":"I\u0027m going to list the categories,"},{"Start":"01:24.530 ","End":"01:27.150","Text":"and we\u0027ll take them 1 by 1."},{"Start":"01:27.410 ","End":"01:31.625","Text":"K and l will be positive integers."},{"Start":"01:31.625 ","End":"01:37.950","Text":"The first case is 1 of the cosines with the constant function 1."},{"Start":"01:39.490 ","End":"01:42.425","Text":"We have to check if it\u0027s 0."},{"Start":"01:42.425 ","End":"01:46.265","Text":"In other words, is this integral equal to 0?"},{"Start":"01:46.265 ","End":"01:47.800","Text":"Should put question marks,"},{"Start":"01:47.800 ","End":"01:50.710","Text":"really, because this is what we have to show."},{"Start":"01:50.710 ","End":"01:57.160","Text":"The next combination is 1 of the sine functions with the constant 1."},{"Start":"01:57.160 ","End":"02:04.495","Text":"Let\u0027s see, there\u0027s 3 more: an inner product of a cosine with a cosine,"},{"Start":"02:04.495 ","End":"02:06.040","Text":"but they have to be different."},{"Start":"02:06.040 ","End":"02:08.170","Text":"That\u0027s why we write here,"},{"Start":"02:08.170 ","End":"02:15.060","Text":"k not equal to l. This is the integral we have to show that it is 0."},{"Start":"02:15.060 ","End":"02:19.275","Text":"The 4th case is 1 of the sines with another 1 of the sines."},{"Start":"02:19.275 ","End":"02:22.440","Text":"Again, they have to be different and that means"},{"Start":"02:22.440 ","End":"02:26.490","Text":"that k not equal to l. 
Other than that,"},{"Start":"02:26.490 ","End":"02:33.070","Text":"k and l are positive integers here, everywhere."},{"Start":"02:33.110 ","End":"02:37.400","Text":"The last case, 1 of the cosines with 1 of the sines."},{"Start":"02:37.400 ","End":"02:38.930","Text":"K could equal l here,"},{"Start":"02:38.930 ","End":"02:40.975","Text":"no problem with that."},{"Start":"02:40.975 ","End":"02:43.080","Text":"That\u0027s quite a bit of work."},{"Start":"02:43.080 ","End":"02:47.525","Text":"We have to solve 5 integrals and hope that they\u0027re all 0."},{"Start":"02:47.525 ","End":"02:49.685","Text":"Let\u0027s start."},{"Start":"02:49.685 ","End":"02:52.460","Text":"We\u0027ll go through these integrals quickly because"},{"Start":"02:52.460 ","End":"02:55.250","Text":"our topic is not trigonometric integrals,"},{"Start":"02:55.250 ","End":"03:00.940","Text":"we\u0027re in inner product spaces, orthogonality, orthonormality."},{"Start":"03:00.940 ","End":"03:05.430","Text":"Let\u0027s quickly see the first case was a cosine kx,"},{"Start":"03:05.430 ","End":"03:07.860","Text":"the integral is 1/k sine kx."},{"Start":"03:07.860 ","End":"03:11.805","Text":"If you plug in 2 Pi and 0 and subtract, we get 0."},{"Start":"03:11.805 ","End":"03:16.140","Text":"Then we have the case with the sine kx times 1."},{"Start":"03:16.140 ","End":"03:19.745","Text":"The integral of sine is minus cosine and divide by the k,"},{"Start":"03:19.745 ","End":"03:23.740","Text":"plug in, substitute, get 0."},{"Start":"03:23.740 ","End":"03:25.970","Text":"For the next 3 integrals,"},{"Start":"03:25.970 ","End":"03:28.840","Text":"we\u0027ll need some trigonometric identities."},{"Start":"03:28.840 ","End":"03:30.995","Text":"Here, we have a cosine, cosine."},{"Start":"03:30.995 ","End":"03:32.870","Text":"There\u0027s also a formula for sine,"},{"Start":"03:32.870 ","End":"03:35.575","Text":"sine, and for sine, cosine."},{"Start":"03:35.575 ","End":"03:38.960","Text":"Here are some formulas 
that will help us."},{"Start":"03:38.960 ","End":"03:41.825","Text":"In this case, we\u0027re using the cosine, cosine."},{"Start":"03:41.825 ","End":"03:47.540","Text":"We look at this and we get this."},{"Start":"03:47.540 ","End":"03:50.670","Text":"I\u0027m not going to go into detail."},{"Start":"03:52.160 ","End":"03:56.510","Text":"This part is after the trigonometric identity,"},{"Start":"03:56.510 ","End":"04:00.200","Text":"and this part is the integral."},{"Start":"04:00.200 ","End":"04:04.280","Text":"Here, we substitute the upper limit and the lower limit,"},{"Start":"04:04.280 ","End":"04:09.350","Text":"and we subtract, though it\u0027s a minus so it comes out plus."},{"Start":"04:09.350 ","End":"04:14.000","Text":"I need to rephrase that. When you put in x equals 0, you get nothing."},{"Start":"04:14.000 ","End":"04:18.730","Text":"This is what you get after you plug in the 2 Pi."},{"Start":"04:18.730 ","End":"04:25.640","Text":"This also comes out 0 because the sine of any multiple of Pi is 0."},{"Start":"04:25.640 ","End":"04:29.219","Text":"Now, we have 2 more to go."},{"Start":"04:30.790 ","End":"04:35.435","Text":"Here, we need the formula for sine times sine,"},{"Start":"04:35.435 ","End":"04:43.350","Text":"but k is going to be different from l. If k were equal to l,"},{"Start":"04:43.350 ","End":"04:46.050","Text":"this thing here would become,"},{"Start":"04:46.050 ","End":"04:49.485","Text":"this would be 0 and cosine 0 is 1,"},{"Start":"04:49.485 ","End":"04:51.750","Text":"so it wouldn\u0027t be a cosine."},{"Start":"04:51.750 ","End":"04:57.890","Text":"It\u0027s important that k not equal l. 
After we use the identities,"},{"Start":"04:57.890 ","End":"05:00.870","Text":"I\u0027m just going to have to scroll them off-screen,"},{"Start":"05:00.870 ","End":"05:03.430","Text":"and you have them somewhere."},{"Start":"05:03.980 ","End":"05:09.695","Text":"This is after the trigonometric identity,"},{"Start":"05:09.695 ","End":"05:14.525","Text":"this is after the integration, the indefinite integral."},{"Start":"05:14.525 ","End":"05:18.085","Text":"Next, you have to plug in upper and lower limits."},{"Start":"05:18.085 ","End":"05:20.570","Text":"As before, we plug in 0,"},{"Start":"05:20.570 ","End":"05:22.250","Text":"sine 0 is 0,"},{"Start":"05:22.250 ","End":"05:25.370","Text":"so we just have to plug in 2 Pi."},{"Start":"05:25.370 ","End":"05:29.900","Text":"But just like before, this is 0 minus 0 because the sine of"},{"Start":"05:29.900 ","End":"05:34.884","Text":"any whole multiple of Pi is 0."},{"Start":"05:34.884 ","End":"05:37.530","Text":"If I were doing okay, we got 4 zeros."},{"Start":"05:37.530 ","End":"05:40.825","Text":"Let\u0027s try the very last 1 now."},{"Start":"05:40.825 ","End":"05:44.630","Text":"Applying the formula for cosine times sine,"},{"Start":"05:44.630 ","End":"05:48.320","Text":"the identity, I mean, we get this."},{"Start":"05:48.320 ","End":"05:52.850","Text":"Now, here, it\u0027s a bit more delicate, the substitution."},{"Start":"05:52.850 ","End":"05:56.545","Text":"Let\u0027s look at each of the 2 terms separately."},{"Start":"05:56.545 ","End":"05:59.655","Text":"If I put x equals 0,"},{"Start":"05:59.655 ","End":"06:03.215","Text":"here, I have cosine 0, which is 1."},{"Start":"06:03.215 ","End":"06:05.120","Text":"If I put x equals 2 Pi,"},{"Start":"06:05.120 ","End":"06:10.360","Text":"I have cosine times a multiple of 2 Pi, that\u0027s also 1."},{"Start":"06:10.360 ","End":"06:13.140","Text":"When I do the subtraction, I\u0027ll get 0,"},{"Start":"06:13.140 ","End":"06:15.450","Text":"I\u0027ll get minus 1/k plus 
l,"},{"Start":"06:15.450 ","End":"06:17.970","Text":"and I\u0027ll get minus 1/k plus l again,"},{"Start":"06:17.970 ","End":"06:20.640","Text":"and I subtract, it\u0027ll be 0."},{"Start":"06:20.640 ","End":"06:23.085","Text":"Similarly here, plugin 0,"},{"Start":"06:23.085 ","End":"06:25.350","Text":"cosine 0 is 1,"},{"Start":"06:25.350 ","End":"06:27.380","Text":"or plugin 2 Pi,"},{"Start":"06:27.380 ","End":"06:30.155","Text":"cosine of a multiple of 2 Pi is also 1."},{"Start":"06:30.155 ","End":"06:33.215","Text":"Again, when I subtract the upper minus lower, I\u0027ll get 0."},{"Start":"06:33.215 ","End":"06:38.120","Text":"Altogether, we get a 0 in the last 1 also,"},{"Start":"06:38.120 ","End":"06:41.840","Text":"and we\u0027re done with proving the orthogonality."},{"Start":"06:41.840 ","End":"06:44.930","Text":"Next, you want to check orthonormality."},{"Start":"06:44.930 ","End":"06:53.720","Text":"We\u0027re going to check if each member in the set is a unit vector,"},{"Start":"06:53.720 ","End":"06:57.580","Text":"which means that its norm is equal to 1."},{"Start":"06:57.580 ","End":"07:03.245","Text":"But checking that norm is 1 is the same as checking that the norm squared is 1,"},{"Start":"07:03.245 ","End":"07:06.730","Text":"and this is easier to compute."},{"Start":"07:06.730 ","End":"07:11.510","Text":"Then there are going to be 3 ks, there was that special k\u0027s of the function 1,"},{"Start":"07:11.510 ","End":"07:15.655","Text":"the constant function, then there will be the sines and then the cosines."},{"Start":"07:15.655 ","End":"07:17.335","Text":"If f is 1,"},{"Start":"07:17.335 ","End":"07:22.715","Text":"the norm of f squared is the inner product of the function 1 with itself,"},{"Start":"07:22.715 ","End":"07:27.860","Text":"which is the integral from 0-2 Pi of 1 times 1,"},{"Start":"07:27.860 ","End":"07:29.950","Text":"it\u0027s 1 squared, but it\u0027s 1,"},{"Start":"07:29.950 ","End":"07:32.760","Text":"which is x from 0-2 
Pi,"},{"Start":"07:32.760 ","End":"07:35.025","Text":"which is 2 Pi."},{"Start":"07:35.025 ","End":"07:39.410","Text":"Remember, we\u0027re finding here the norm squared so later,"},{"Start":"07:39.410 ","End":"07:42.565","Text":"we\u0027ll have to take the square root if we want just the norm."},{"Start":"07:42.565 ","End":"07:45.140","Text":"Next, we have the cosine series."},{"Start":"07:45.140 ","End":"07:46.340","Text":"Cosine x, cosine 2x,"},{"Start":"07:46.340 ","End":"07:48.140","Text":"cosine 3x, and so on."},{"Start":"07:48.140 ","End":"07:53.675","Text":"The norm of such a function squared in a product of it with itself,"},{"Start":"07:53.675 ","End":"07:57.375","Text":"which is the inner product of cosine kx, cosine kx."},{"Start":"07:57.375 ","End":"08:02.665","Text":"The inner product is the integral from 0-2Pi of the product of the functions."},{"Start":"08:02.665 ","End":"08:05.235","Text":"This is cosine squared kx."},{"Start":"08:05.235 ","End":"08:07.564","Text":"I guess there\u0027s another trigonometric formula."},{"Start":"08:07.564 ","End":"08:11.930","Text":"I will just write it. 
Cosine squared of Alpha is"},{"Start":"08:11.930 ","End":"08:18.540","Text":"1 plus cosine of 2 Alpha over 2."},{"Start":"08:18.540 ","End":"08:21.470","Text":"This over 2 is the half we pull out in front of the integral,"},{"Start":"08:21.470 ","End":"08:25.900","Text":"this is 1 plus cosine 2 Alpha, Alpha is kx."},{"Start":"08:25.900 ","End":"08:29.525","Text":"Here, we do the indefinite integral,"},{"Start":"08:29.525 ","End":"08:31.400","Text":"1 comes out x,"},{"Start":"08:31.400 ","End":"08:35.165","Text":"and cosine comes out some form of sine."},{"Start":"08:35.165 ","End":"08:40.025","Text":"Now, I claim that this comes out to be just 2 Pi."},{"Start":"08:40.025 ","End":"08:44.625","Text":"First of all, the 0 gives 0 in both of these, so we can ignore that."},{"Start":"08:44.625 ","End":"08:46.765","Text":"We just have to plug in 2 Pi."},{"Start":"08:46.765 ","End":"08:50.070","Text":"From here, we get 2 Pi, but from here,"},{"Start":"08:50.070 ","End":"08:55.045","Text":"we get a whole number times 2 Pi and the sine of that is 0."},{"Start":"08:55.045 ","End":"08:59.370","Text":"We\u0027re just left with the 2 Pi from this first term."},{"Start":"08:59.370 ","End":"09:03.675","Text":"Then 1/2 times 2 Pi is Pi."},{"Start":"09:03.675 ","End":"09:06.080","Text":"Again, I\u0027m reminding you, this is the norm squared."},{"Start":"09:06.080 ","End":"09:07.220","Text":"If we want the norm of f,"},{"Start":"09:07.220 ","End":"09:11.120","Text":"we need to take the square root of Pi. 
That\u0027s 2 down."},{"Start":"09:11.120 ","End":"09:15.140","Text":"Let\u0027s do the 3rd 1 which will be the sine family."},{"Start":"09:15.140 ","End":"09:17.680","Text":"Let me clear some space."},{"Start":"09:17.680 ","End":"09:20.400","Text":"If f is from the sine series,"},{"Start":"09:20.400 ","End":"09:22.680","Text":"sine x, sine 2x, sine 3x, and so on,"},{"Start":"09:22.680 ","End":"09:26.470","Text":"then the norm squared which is the inner product with itself,"},{"Start":"09:26.470 ","End":"09:29.925","Text":"sine kx times sine kx,"},{"Start":"09:29.925 ","End":"09:34.370","Text":"the inner product is just the integral."},{"Start":"09:35.000 ","End":"09:38.350","Text":"From here to here, a trigonometric identity,"},{"Start":"09:38.350 ","End":"09:40.000","Text":"this is sine squared."},{"Start":"09:40.000 ","End":"09:49.410","Text":"There is a formula that sine squared Alpha is 1 minus cosine 2 Alpha over 2."},{"Start":"09:49.410 ","End":"09:52.635","Text":"The 1/2 comes out in front, there\u0027s 2 here."},{"Start":"09:52.635 ","End":"09:54.795","Text":"This is what we get."},{"Start":"09:54.795 ","End":"09:57.585","Text":"Then we do the indefinite integral."},{"Start":"09:57.585 ","End":"10:02.005","Text":"Now, we just have to substitute upper limit and lower limit and subtract."},{"Start":"10:02.005 ","End":"10:04.905","Text":"It\u0027s the same as in the previous 1."},{"Start":"10:04.905 ","End":"10:11.685","Text":"This thing is going to be 0 when x is 0 or 2 Pi,"},{"Start":"10:11.685 ","End":"10:14.550","Text":"and this is going to be 0 when x is 0."},{"Start":"10:14.550 ","End":"10:21.105","Text":"The only thing substantial is when this x is equal to 2 Pi."},{"Start":"10:21.105 ","End":"10:24.240","Text":"We have 1/2 of 2 Pi, which is Pi."},{"Start":"10:24.240 ","End":"10:28.250","Text":"Again, that was the same as what we got with the cosines."},{"Start":"10:28.250 ","End":"10:33.500","Text":"Again, I want to remind you that the norm of f is 
square root of Pi."},{"Start":"10:33.500 ","End":"10:36.515","Text":"Now, this is not orthonormal."},{"Start":"10:36.515 ","End":"10:40.270","Text":"In fact, the norm of none of them is 1."},{"Start":"10:40.270 ","End":"10:46.510","Text":"Let\u0027s normalize it by dividing each function by its norm."},{"Start":"10:46.510 ","End":"10:49.050","Text":"Remember, we had 2 Pi at first,"},{"Start":"10:49.050 ","End":"10:50.610","Text":"then Pi, then Pi,"},{"Start":"10:50.610 ","End":"10:53.295","Text":"but we need the square roots of those."},{"Start":"10:53.295 ","End":"10:55.380","Text":"In short, the 1,"},{"Start":"10:55.380 ","End":"10:57.670","Text":"we divide by square root of 2 Pi,"},{"Start":"10:57.670 ","End":"11:02.099","Text":"but all the rest get divided by square root of Pi."},{"Start":"11:03.710 ","End":"11:08.510","Text":"It\u0027s not the same S so I just put a hat on it."},{"Start":"11:08.510 ","End":"11:10.880","Text":"Also the brackets are wrong, sorry,"},{"Start":"11:10.880 ","End":"11:15.665","Text":"it should be curly braces for a set."},{"Start":"11:15.665 ","End":"11:18.390","Text":"Other than that, we\u0027re done."}],"ID":10174},{"Watched":false,"Name":"Exercise 7","Duration":"3m 51s","ChapterTopicVideoID":9720,"CourseChapterTopicPlaylistID":7313,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:07.455","Text":"In this exercise, we\u0027re given a set of 3 vectors in R^3,"},{"Start":"00:07.455 ","End":"00:14.835","Text":"and we take R^3 with the usual inner product, the dot product."},{"Start":"00:14.835 ","End":"00:19.350","Text":"The question is, is S an orthogonal set?"},{"Start":"00:19.350 ","End":"00:24.630","Text":"If so, then check further, is it orthonormal?"},{"Start":"00:24.630 ","End":"00:27.570","Text":"Is it a basis of R^3?"},{"Start":"00:27.570 
","End":"00:30.690","Text":"In which case it would be an orthogonal basis?"},{"Start":"00:30.690 ","End":"00:32.490","Text":"If it\u0027s orthogonal,"},{"Start":"00:32.490 ","End":"00:36.225","Text":"but not orthonormal, then normalize it."},{"Start":"00:36.225 ","End":"00:43.745","Text":"The first condition on the set is that it should not contain the 0 vector and it doesn\u0027t."},{"Start":"00:43.745 ","End":"00:48.740","Text":"Next, we\u0027re going to check that each pair is orthogonal."},{"Start":"00:48.740 ","End":"00:52.700","Text":"Every pair of different vectors is only 3 combinations."},{"Start":"00:52.700 ","End":"00:53.930","Text":"The first with the second,"},{"Start":"00:53.930 ","End":"00:56.190","Text":"the first with the third, and the second with the third."},{"Start":"00:56.190 ","End":"00:58.115","Text":"We have to see if they\u0027re orthogonal,"},{"Start":"00:58.115 ","End":"01:01.720","Text":"that their inner product which means dot product is 0."},{"Start":"01:01.720 ","End":"01:04.290","Text":"First, we\u0027ll take this with this."},{"Start":"01:04.290 ","End":"01:06.390","Text":"2 times 4 is 8,"},{"Start":"01:06.390 ","End":"01:11.040","Text":"minus 4, minus 4 is 0, that one\u0027s okay."},{"Start":"01:11.040 ","End":"01:16.260","Text":"I\u0027ll leave you to check that the other 2 also give a 0."},{"Start":"01:16.260 ","End":"01:18.810","Text":"That\u0027s the first part, it is orthogonal."},{"Start":"01:18.810 ","End":"01:22.390","Text":"Now, we have to say whether it\u0027s a basis,"},{"Start":"01:22.390 ","End":"01:25.390","Text":"and the answer is that it is a basis because"},{"Start":"01:25.390 ","End":"01:28.954","Text":"an orthogonal set just has to have the right number of elements."},{"Start":"01:28.954 ","End":"01:33.110","Text":"It has to have exactly the same as the dimension of the space."},{"Start":"01:33.110 ","End":"01:36.705","Text":"The dimension of R^3 is 3,"},{"Start":"01:36.705 ","End":"01:40.440","Text":"and we 
have exactly 3 vectors in S,"},{"Start":"01:40.440 ","End":"01:45.910","Text":"and so it\u0027s a basis and hence an orthogonal basis."},{"Start":"01:45.910 ","End":"01:48.880","Text":"Now, the check orthonormality,"},{"Start":"01:48.880 ","End":"01:56.990","Text":"we just have to"},{"Start":"01:56.990 ","End":"02:00.350","Text":"check that the norm of each vector is 1,"},{"Start":"02:00.350 ","End":"02:03.890","Text":"but it\u0027s easier to work with the norm squared is the same thing."},{"Start":"02:03.890 ","End":"02:05.030","Text":"If the norm squared is 1,"},{"Start":"02:05.030 ","End":"02:07.225","Text":"the norm is 1 and vice versa."},{"Start":"02:07.225 ","End":"02:09.000","Text":"Here\u0027s the first 1,"},{"Start":"02:09.000 ","End":"02:10.440","Text":"2, 4, 4,"},{"Start":"02:10.440 ","End":"02:15.935","Text":"norm squared is the dot product of this with itself."},{"Start":"02:15.935 ","End":"02:21.715","Text":"2 times 2 plus 4 times 4 plus 4 times 4 comes out to be 36."},{"Start":"02:21.715 ","End":"02:23.850","Text":"36 is not 1,"},{"Start":"02:23.850 ","End":"02:26.130","Text":"so already we know we\u0027re not orthonormal."},{"Start":"02:26.130 ","End":"02:32.765","Text":"But let\u0027s continue to compute the norm squared of each of the members of S. 
Well,"},{"Start":"02:32.765 ","End":"02:34.720","Text":"here they are the other 2,"},{"Start":"02:34.720 ","End":"02:37.975","Text":"I\u0027ll leave you to check the calculations."},{"Start":"02:37.975 ","End":"02:41.070","Text":"Just simple dot product."},{"Start":"02:41.070 ","End":"02:43.790","Text":"Now, as we said, S is not orthonormal,"},{"Start":"02:43.790 ","End":"02:47.165","Text":"but what we can do is normalize it."},{"Start":"02:47.165 ","End":"02:51.550","Text":"To normalize it, we divide each vector by its norm."},{"Start":"02:51.550 ","End":"02:53.370","Text":"The first 1 was 2, 4,"},{"Start":"02:53.370 ","End":"02:56.250","Text":"4, and we divide it by root 36."},{"Start":"02:56.250 ","End":"02:59.250","Text":"Yeah, I know this is equal to 6 exactly, it doesn\u0027t matter."},{"Start":"02:59.250 ","End":"03:04.260","Text":"This 1, we divide by square root of 8 because these are all the norm squares,"},{"Start":"03:04.260 ","End":"03:05.970","Text":"you can take a square root of them all."},{"Start":"03:05.970 ","End":"03:10.120","Text":"The last 1, we divide by square root of 18."},{"Start":"03:10.120 ","End":"03:12.920","Text":"I gave it a different name than S,"},{"Start":"03:12.920 ","End":"03:14.690","Text":"let\u0027s call it S with a hat on,"},{"Start":"03:14.690 ","End":"03:20.415","Text":"that\u0027s the normalized S. 
It\u0027s still a basis,"},{"Start":"03:20.415 ","End":"03:24.620","Text":"should have mentioned if we have a basis and we multiply or divide"},{"Start":"03:24.620 ","End":"03:30.140","Text":"the elements by a nonzero scalar could be a different one for each,"},{"Start":"03:30.140 ","End":"03:33.400","Text":"doesn\u0027t change the fact of it being a basis."},{"Start":"03:33.400 ","End":"03:35.360","Text":"And, as a matter of fact,"},{"Start":"03:35.360 ","End":"03:38.670","Text":"it also doesn\u0027t change their being orthogonal."},{"Start":"03:39.700 ","End":"03:42.350","Text":"I\u0027m just putting this into words."},{"Start":"03:42.350 ","End":"03:45.965","Text":"If you have an orthogonal basis and you normalize it,"},{"Start":"03:45.965 ","End":"03:48.590","Text":"you get an orthonormal basis."},{"Start":"03:48.590 ","End":"03:51.150","Text":"We are done."}],"ID":10176},{"Watched":false,"Name":"Exercise 8","Duration":"1m 30s","ChapterTopicVideoID":9721,"CourseChapterTopicPlaylistID":7313,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:08.805","Text":"In this exercise, we\u0027re now in the space of polynomials of degree 3 or less,"},{"Start":"00:08.805 ","End":"00:12.090","Text":"and it\u0027s an inner product space if we define"},{"Start":"00:12.090 ","End":"00:15.880","Text":"the inner product to be the integral inner product,"},{"Start":"00:15.880 ","End":"00:22.230","Text":"just like we did in the continuous functions on 0, 1."},{"Start":"00:22.570 ","End":"00:25.655","Text":"So we borrow the inner product from there."},{"Start":"00:25.655 ","End":"00:27.080","Text":"Now the question is,"},{"Start":"00:27.080 ","End":"00:29.455","Text":"is S an orthogonal set?"},{"Start":"00:29.455 ","End":"00:33.705","Text":"If so, is it orthonormal,"},{"Start":"00:33.705 ","End":"00:36.505","Text":"and 
is it a basis?"},{"Start":"00:36.505 ","End":"00:39.770","Text":"If it\u0027s orthogonal but not orthonormal,"},{"Start":"00:39.770 ","End":"00:43.190","Text":"then we have to normalize it."},{"Start":"00:43.190 ","End":"00:48.205","Text":"I\u0027m going to show you in a moment that the set is not orthogonal,"},{"Start":"00:48.205 ","End":"00:50.195","Text":"and once it\u0027s not orthogonal,"},{"Start":"00:50.195 ","End":"00:51.860","Text":"there\u0027s no point in continuing,"},{"Start":"00:51.860 ","End":"00:53.510","Text":"there\u0027s nothing more we can do,"},{"Start":"00:53.510 ","End":"00:54.785","Text":"and we stop right here."},{"Start":"00:54.785 ","End":"00:58.400","Text":"I just want to explain why I wrote that it\u0027s not orthogonal."},{"Start":"00:58.400 ","End":"01:01.500","Text":"I\u0027m going to show that 1 and x are not orthogonal,"},{"Start":"01:01.500 ","End":"01:04.980","Text":"and all I need is 1 pair to violate it."},{"Start":"01:04.980 ","End":"01:07.825","Text":"Now I still have to show you why 1,"},{"Start":"01:07.825 ","End":"01:11.405","Text":"x inner product is not 0."},{"Start":"01:11.405 ","End":"01:14.650","Text":"The inner product is the integral inner product on 0, 1,"},{"Start":"01:14.650 ","End":"01:19.320","Text":"so we take the integral from 0 to 1 of 1 times x."},{"Start":"01:19.320 ","End":"01:24.705","Text":"1 times x is just x dx, and x squared over 2 from 0 to 1"},{"Start":"01:24.705 ","End":"01:30.670","Text":"comes out to a half, which is not 0, and so we just stop here, that\u0027s it."}],"ID":10177},{"Watched":false,"Name":"Exercise 9","Duration":"8m 46s","ChapterTopicVideoID":10020,"CourseChapterTopicPlaylistID":7313,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.020 ","End":"00:04.830","Text":"This exercise takes place in P_2 over R,"},{"Start":"00:04.830 ","End":"00:08.700","Text":"the space of polynomials of degree 2 or"},{"Start":"00:08.700 ","End":"00:12.840","Text":"less over the reals and it\u0027s an inner product space."},{"Start":"00:12.840 ","End":"00:14.700","Text":"We take the inner product,"},{"Start":"00:14.700 ","End":"00:18.315","Text":"the integral inner product,"},{"Start":"00:18.315 ","End":"00:23.070","Text":"the integral of the product of the functions from 0 to"},{"Start":"00:23.070 ","End":"00:31.380","Text":"1 and we have a set S with 3 polynomials in it."},{"Start":"00:31.380 ","End":"00:35.339","Text":"They\u0027re all of degree 2 or less: a constant,"},{"Start":"00:35.339 ","End":"00:37.065","Text":"a linear and a quadratic."},{"Start":"00:37.065 ","End":"00:41.835","Text":"The question is, is S an orthogonal set?"},{"Start":"00:41.835 ","End":"00:45.540","Text":"If it is, we check 2 things,"},{"Start":"00:45.540 ","End":"00:49.930","Text":"is it orthonormal and is it a basis?"},{"Start":"00:49.970 ","End":"00:54.065","Text":"If it\u0027s orthogonal but not orthonormal,"},{"Start":"00:54.065 ","End":"00:58.610","Text":"then we are asked to normalize it."},{"Start":"00:58.610 ","End":"01:02.045","Text":"I\u0027m going to show you that it is orthogonal."},{"Start":"01:02.045 ","End":"01:08.210","Text":"Remember there\u0027s also a condition that 0 doesn\u0027t belong to the set,"},{"Start":"01:08.210 ","End":"01:11.195","Text":"0 being the 0 polynomial."},{"Start":"01:11.195 ","End":"01:12.935","Text":"But it\u0027s not in there,"},{"Start":"01:12.935 ","End":"01:18.350","Text":"not 1 of the 3 and then we need to show pairwise orthogonality."},{"Start":"01:18.350 ","End":"01:20.630","Text":"The 3 possible pairs."},{"Start":"01:20.630 ","End":"01:22.640","Text":"First with the second, first with the third,"},{"Start":"01:22.640 ","End":"01:23.675","Text":"second with the third."},{"Start":"01:23.675 ","End":"01:26.480","Text":"Because of the symmetry, there\u0027s only 3 and we\u0027re going 
to check."},{"Start":"01:26.480 ","End":"01:28.070","Text":"Each pair is orthogonal,"},{"Start":"01:28.070 ","End":"01:31.055","Text":"meaning the inner product of the 2 is 0."},{"Start":"01:31.055 ","End":"01:34.250","Text":"We\u0027ll take the inner product of 1 with 2x minus 1"},{"Start":"01:34.250 ","End":"01:37.160","Text":"first and this is the integral inner product."},{"Start":"01:37.160 ","End":"01:39.170","Text":"We take this times this."},{"Start":"01:39.170 ","End":"01:42.500","Text":"We don\u0027t see the 1 because 1 times 2x minus 1 is"},{"Start":"01:42.500 ","End":"01:45.620","Text":"just itself and we take the integral from 0 to 1."},{"Start":"01:45.620 ","End":"01:47.840","Text":"Simple integration gives us x squared minus x,"},{"Start":"01:47.840 ","End":"01:51.660","Text":"substitute 0 and 1 and we do in fact get 0."},{"Start":"01:51.660 ","End":"01:53.475","Text":"1 squared minus 1 is 0,"},{"Start":"01:53.475 ","End":"01:56.670","Text":"so good that\u0027s these 2."},{"Start":"01:56.670 ","End":"01:59.175","Text":"Next, the first and the last."},{"Start":"01:59.175 ","End":"02:04.805","Text":"Again, the integral from 0 to 1 of 1 times this just looks like this."},{"Start":"02:04.805 ","End":"02:08.055","Text":"1 gets swallowed."},{"Start":"02:08.055 ","End":"02:13.785","Text":"Okay, we get 2x cubed minus 3x squared and then plus x,"},{"Start":"02:13.785 ","End":"02:17.035","Text":"and you plug in 0, 0."},{"Start":"02:17.035 ","End":"02:23.195","Text":"If you plug in 1, you get 2 minus 3 plus 1 is 0 also, so that\u0027s fine."},{"Start":"02:23.195 ","End":"02:26.215","Text":"Now the third one,"},{"Start":"02:26.215 ","End":"02:29.465","Text":"which is this inner product with this,"},{"Start":"02:29.465 ","End":"02:31.475","Text":"so integral from 0 to 1,"},{"Start":"02:31.475 ","End":"02:36.245","Text":"the product of the 2 functions dx."},{"Start":"02:36.245 ","End":"02:40.610","Text":"Then just basic algebra we multiply 
out."},{"Start":"02:40.610 ","End":"02:42.020","Text":"I\u0027ll just show you, for example,"},{"Start":"02:42.020 ","End":"02:49.860","Text":"where we get the minus 18 from and we get 2x times minus"},{"Start":"02:49.860 ","End":"02:53.970","Text":"6x is minus 12x squared and this with this is"},{"Start":"02:53.970 ","End":"02:59.965","Text":"minus 6x squared, and minus 12 minus 6 is minus 18, and similarly for the others."},{"Start":"02:59.965 ","End":"03:02.839","Text":"Now we integrate this polynomial."},{"Start":"03:02.839 ","End":"03:03.980","Text":"You know how to do that."},{"Start":"03:03.980 ","End":"03:08.915","Text":"The next thing to do is just to substitute the limits of integration."},{"Start":"03:08.915 ","End":"03:10.760","Text":"We plug in 0, we get nothing."},{"Start":"03:10.760 ","End":"03:15.685","Text":"If we plug in 1, we get 3 minus 6 plus 4 minus 1."},{"Start":"03:15.685 ","End":"03:18.120","Text":"The pluses are 3 and 4, which is 7,"},{"Start":"03:18.120 ","End":"03:21.690","Text":"the minuses are 6 and 1, also 7, so it is 0."},{"Start":"03:21.690 ","End":"03:24.870","Text":"We\u0027ve got 0 for each of them, so good."},{"Start":"03:24.870 ","End":"03:27.555","Text":"It\u0027s orthogonal. It\u0027s also"},{"Start":"03:27.555 ","End":"03:31.715","Text":"an orthogonal basis because it has the right number of vectors."},{"Start":"03:31.715 ","End":"03:33.650","Text":"It contains 3 vectors,"},{"Start":"03:33.650 ","End":"03:36.080","Text":"which is the dimension of the space, since"},{"Start":"03:36.080 ","End":"03:40.655","Text":"the dimension of polynomials of degree n or less is n plus 1,"},{"Start":"03:40.655 ","End":"03:43.715","Text":"2 plus 1 is 3."},{"Start":"03:43.715 ","End":"03:48.005","Text":"Now we were asked to do an orthonormal check,"},{"Start":"03:48.005 ","End":"03:50.165","Text":"so for each f,"},{"Start":"03:50.165 ","End":"03:52.820","Text":"we have to check that it has a norm of 1,"},{"Start":"03:52.820 ","End":"03:56.860","Text":"but it\u0027s easier to check that the norm squared is 1, same thing."},{"Start":"03:56.860 ","End":"03:59.575","Text":"Let\u0027s do some computations."},{"Start":"03:59.575 ","End":"04:02.540","Text":"We want to compute the norm squared of each of the 3."},{"Start":"04:02.540 ","End":"04:05.110","Text":"The first one was 1,"},{"Start":"04:05.110 ","End":"04:09.050","Text":"so the norm of 1 squared is the inner product of 1 with itself."},{"Start":"04:09.050 ","End":"04:10.340","Text":"1 is not a number,"},{"Start":"04:10.340 ","End":"04:12.920","Text":"it\u0027s the polynomial 1, or function."},{"Start":"04:12.920 ","End":"04:17.120","Text":"Anyway, the integral from 0 to 1 of 1 times 1 is 1,"},{"Start":"04:17.120 ","End":"04:20.415","Text":"so it just comes out to be 1."},{"Start":"04:20.415 ","End":"04:22.050","Text":"So far so good,"},{"Start":"04:22.050 ","End":"04:26.720","Text":"so this one does have the right norm, so let\u0027s continue."},{"Start":"04:26.720 ","End":"04:31.490","Text":"Okay, the second one comes out to be not 1,"},{"Start":"04:31.490 ","End":"04:33.845","Text":"it comes out to be a third; let me show you a few details."},{"Start":"04:33.845 ","End":"04:38.285","Text":"It was 2x minus 1 and the norm squared is the inner product with itself."},{"Start":"04:38.285 ","End":"04:42.440","Text":"That\u0027s the integral from 0 to 1 of this times this,"},{"Start":"04:42.440 ","End":"04:45.695","Text":"which is this squared, dx."},{"Start":"04:45.695 ","End":"04:49.160","Text":"We don\u0027t have to expand brackets."},{"Start":"04:49.160 ","End":"04:54.050","Text":"We can use the property that the integral of x squared is x cubed over 3."},{"Start":"04:54.050 ","End":"04:56.240","Text":"Because it\u0027s not x, it\u0027s 2x minus 1,"},{"Start":"04:56.240 ","End":"04:58.235","Text":"we divide by the inner derivative."},{"Start":"04:58.235 ","End":"05:06.300","Text":"Then we plug in 1 and we plug in 0 and subtract."},{"Start":"05:06.300 ","End":"05:09.784","Text":"If we plug in 1,"},{"Start":"05:09.784 ","End":"05:14.405","Text":"we get 1 cubed over 3 times a half is 1/6."},{"Start":"05:14.405 ","End":"05:16.730","Text":"Plug in 0, you get minus a 1/6,"},{"Start":"05:16.730 ","End":"05:18.770","Text":"so altogether 1/6 minus,"},{"Start":"05:18.770 ","End":"05:23.370","Text":"minus a 1/6, which is a third and it\u0027s not 1."},{"Start":"05:23.370 ","End":"05:27.185","Text":"Already it\u0027s not an orthonormal set."},{"Start":"05:27.185 ","End":"05:29.630","Text":"Let\u0027s just compute the norm of the last one,"},{"Start":"05:29.630 ","End":"05:31.410","Text":"we\u0027ll need it later."},{"Start":"05:31.410 ","End":"05:33.909","Text":"The last one is this quadratic,"},{"Start":"05:33.909 ","End":"05:36.429","Text":"so it\u0027ll be a bit more work."},{"Start":"05:36.429 ","End":"05:39.220","Text":"The norm squared is the inner product of this with"},{"Start":"05:39.220 ","End":"05:42.730","Text":"itself and the integral inner product means you multiply this by this,"},{"Start":"05:42.730 ","End":"05:45.160","Text":"i.e., this,"},{"Start":"05:45.160 ","End":"05:48.390","Text":"and take the integral from 0 to 1."},{"Start":"05:48.390 
","End":"05:51.065","Text":"But this times this is squared."},{"Start":"05:51.065 ","End":"05:53.650","Text":"Now I won\u0027t use the fact that it\u0027s this squared."},{"Start":"05:53.650 ","End":"06:00.545","Text":"We\u0027ll just go ahead and multiply out and the result comes out to be this."},{"Start":"06:00.545 ","End":"06:03.490","Text":"I won\u0027t go into the details."},{"Start":"06:03.510 ","End":"06:09.160","Text":"We do the indefinite integral of this function first and now we"},{"Start":"06:09.160 ","End":"06:11.230","Text":"make it a definite integral by plugging in"},{"Start":"06:11.230 ","End":"06:14.740","Text":"the limits of integration, doing a subtraction,"},{"Start":"06:14.740 ","End":"06:17.380","Text":"0 doesn\u0027t contribute anything,"},{"Start":"06:17.380 ","End":"06:21.625","Text":"so we just plug in the 1 and"},{"Start":"06:21.625 ","End":"06:27.705","Text":"we get 36/5 is 7 and 1/5,"},{"Start":"06:27.705 ","End":"06:30.760","Text":"so I can write that as 1/5 plus 7."},{"Start":"06:30.760 ","End":"06:38.420","Text":"Then just the coefficient minus 18 plus 16 minus 6 plus 1 and if we do all this,"},{"Start":"06:39.410 ","End":"06:43.630","Text":"these will cancel each other out."},{"Start":"06:44.120 ","End":"06:50.070","Text":"The positives are 1 and 7 and 16, which is 24."},{"Start":"06:50.070 ","End":"06:53.790","Text":"The negatives are 18 and 6 also 24 cancels out,"},{"Start":"06:53.790 ","End":"06:55.575","Text":"just leaving the 1/5,"},{"Start":"06:55.575 ","End":"06:58.020","Text":"which is not one."},{"Start":"06:58.020 ","End":"07:01.930","Text":"Only 1 of the 3 had norm of 1."},{"Start":"07:01.940 ","End":"07:06.625","Text":"Our S is, it\u0027s orthogonal,"},{"Start":"07:06.625 ","End":"07:11.110","Text":"but it\u0027s not orthonormal and we would like an orthonormal basis,"},{"Start":"07:11.110 ","End":"07:14.560","Text":"so what we do in such cases is we normalize it."},{"Start":"07:14.560 
","End":"07:23.695","Text":"Normalizing means dividing each vector by its norm, our vectors here being polynomials."},{"Start":"07:23.695 ","End":"07:26.230","Text":"Remember the results for the norm squared."},{"Start":"07:26.230 ","End":"07:30.760","Text":"We had 1 and then we had 1/5."},{"Start":"07:30.760 ","End":"07:34.490","Text":"No, that was the last one."},{"Start":"07:36.230 ","End":"07:38.505","Text":"The middle one was 1/3,"},{"Start":"07:38.505 ","End":"07:40.140","Text":"the last one was 1/5,"},{"Start":"07:40.140 ","End":"07:42.120","Text":"so the first one,"},{"Start":"07:42.120 ","End":"07:45.814","Text":"1 we divide by 1,"},{"Start":"07:45.814 ","End":"07:48.315","Text":"the square root of 1 if you like."},{"Start":"07:48.315 ","End":"07:51.710","Text":"The norms are the square roots of these."},{"Start":"07:51.710 ","End":"07:54.590","Text":"The second one, 2x minus 1,"},{"Start":"07:54.590 ","End":"07:57.980","Text":"we divide by the square root of"},{"Start":"07:57.980 ","End":"08:01.805","Text":"this and the last one we divide by the square root of this."},{"Start":"08:01.805 ","End":"08:04.790","Text":"We can simplify a bit because the denominator of"},{"Start":"08:04.790 ","End":"08:08.510","Text":"the denominator is the numerator. 
There we are."},{"Start":"08:08.510 ","End":"08:13.250","Text":"This tidies it up a bit and now we"},{"Start":"08:13.250 ","End":"08:18.350","Text":"do have an orthonormal basis because S is an orthogonal basis."},{"Start":"08:18.350 ","End":"08:19.895","Text":"I want to remind you,"},{"Start":"08:19.895 ","End":"08:25.730","Text":"if we have a basis and we divide each of the vectors by a nonzero scalar,"},{"Start":"08:25.730 ","End":"08:27.365","Text":"it still stays a basis,"},{"Start":"08:27.365 ","End":"08:33.180","Text":"so this process of normalization doesn\u0027t spoil the basisness or whatever."},{"Start":"08:33.180 ","End":"08:39.890","Text":"Also I gave it a different name because technically I can\u0027t use the same label twice,"},{"Start":"08:39.890 ","End":"08:46.770","Text":"so I called it S with a hat on. We are done."}],"ID":10178},{"Watched":false,"Name":"Exercise 10","Duration":"7m 53s","ChapterTopicVideoID":10021,"CourseChapterTopicPlaylistID":7313,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:05.580","Text":"This exercise takes place in the space of 3 by"},{"Start":"00:05.580 ","End":"00:12.260","Text":"3 matrices over the real numbers with a standard inner product for matrices."},{"Start":"00:12.260 ","End":"00:17.190","Text":"We have a set containing these 3 matrices."},{"Start":"00:17.190 ","End":"00:18.825","Text":"This means a subset of."},{"Start":"00:18.825 ","End":"00:23.865","Text":"Each of these is in this because each of them is a 3 by 3 real matrix."},{"Start":"00:23.865 ","End":"00:28.095","Text":"Question is, is S orthogonal set?"},{"Start":"00:28.095 ","End":"00:29.580","Text":"Then the usual questions,"},{"Start":"00:29.580 ","End":"00:31.755","Text":"if it is, is it orthonormal?"},{"Start":"00:31.755 ","End":"00:33.780","Text":"Is it a 
basis?"},{"Start":"00:33.780 ","End":"00:35.779","Text":"If it\u0027s orthogonal but not orthonormal,"},{"Start":"00:35.779 ","End":"00:37.955","Text":"then we should normalize it."},{"Start":"00:37.955 ","End":"00:41.150","Text":"The answer to the first question is yes,"},{"Start":"00:41.150 ","End":"00:43.310","Text":"it is an orthogonal set."},{"Start":"00:43.310 ","End":"00:47.360","Text":"Remember, we have to check that the 0 vector, in this case,"},{"Start":"00:47.360 ","End":"00:49.815","Text":"the 0 matrix with all 0,"},{"Start":"00:49.815 ","End":"00:52.590","Text":"is not in S. Clearly, it isn\u0027t."},{"Start":"00:52.590 ","End":"00:56.285","Text":"Then we need to show pairwise orthogonality."},{"Start":"00:56.285 ","End":"00:59.420","Text":"Each pair of these has to be orthogonal,"},{"Start":"00:59.420 ","End":"01:02.810","Text":"meaning that the inner product of them is 0."},{"Start":"01:02.810 ","End":"01:05.240","Text":"Let\u0027s take this pair,"},{"Start":"01:05.240 ","End":"01:06.965","Text":"the first and the second."},{"Start":"01:06.965 ","End":"01:09.800","Text":"Now, the inner product is defined as"},{"Start":"01:09.800 ","End":"01:15.295","Text":"the trace of the second transposed times the first matrix."},{"Start":"01:15.295 ","End":"01:20.660","Text":"The second 1 transposed happens to be the same as the second 1,"},{"Start":"01:20.660 ","End":"01:22.535","Text":"it just happens to be symmetric,"},{"Start":"01:22.535 ","End":"01:26.825","Text":"but this is actually the transpose of this anyway."},{"Start":"01:26.825 ","End":"01:30.215","Text":"This just goes as is here."},{"Start":"01:30.215 ","End":"01:32.480","Text":"Then we have to multiply and take the trace."},{"Start":"01:32.480 ","End":"01:34.880","Text":"But we don\u0027t really have to multiply the matrices,"},{"Start":"01:34.880 ","End":"01:38.645","Text":"we just have to find the diagonal because we\u0027re going to take the trace."},{"Start":"01:38.645 
","End":"01:41.560","Text":"Better clear some space here."},{"Start":"01:41.560 ","End":"01:44.330","Text":"The way we do this in practice is you take"},{"Start":"01:44.330 ","End":"01:48.200","Text":"the first row with the first column and you get the first entry,"},{"Start":"01:48.200 ","End":"01:50.315","Text":"1 times 2, 0 times 0, and so on."},{"Start":"01:50.315 ","End":"01:53.465","Text":"Then the second row with the second column,"},{"Start":"01:53.465 ","End":"01:56.605","Text":"0 times 4, minus 1 times 2."},{"Start":"01:56.605 ","End":"02:01.155","Text":"Then the third row with the third column."},{"Start":"02:01.155 ","End":"02:05.190","Text":"It\u0027s all 0, that\u0027s going to be 0. Then we just add them up."},{"Start":"02:05.190 ","End":"02:11.070","Text":"Of course, 2 minus 2 plus 0 is 0. the first 1, we\u0027ve got 0."},{"Start":"02:11.070 ","End":"02:14.925","Text":"Good. Let\u0027s go and do the other 2."},{"Start":"02:14.925 ","End":"02:18.015","Text":"We need the inner product of this 1 with this 1."},{"Start":"02:18.015 ","End":"02:20.925","Text":"We take the second 1, transpose."},{"Start":"02:20.925 ","End":"02:23.300","Text":"Here you can see there\u0027s a difference here."},{"Start":"02:23.300 ","End":"02:25.475","Text":"It happens to be symmetric here."},{"Start":"02:25.475 ","End":"02:29.145","Text":"Transpose. 
Look, the minus 1 here is the minus 1 here."},{"Start":"02:29.145 ","End":"02:35.100","Text":"The first 1 as is in the second place."},{"Start":"02:35.100 ","End":"02:42.705","Text":"Now, we need to compute the trace of the product."},{"Start":"02:42.705 ","End":"02:44.630","Text":"This is what it comes out."},{"Start":"02:44.630 ","End":"02:49.085","Text":"Remember the method I showed you?"},{"Start":"02:49.085 ","End":"02:51.530","Text":"Take this first row with the first column,"},{"Start":"02:51.530 ","End":"02:52.855","Text":"that gives us the 2."},{"Start":"02:52.855 ","End":"02:56.720","Text":"The second row with the second column gives 1 times 4,"},{"Start":"02:56.720 ","End":"02:58.265","Text":"1 times 2, is 6."},{"Start":"02:58.265 ","End":"03:01.730","Text":"Then the third row with the third column gives us minus 6,"},{"Start":"03:01.730 ","End":"03:05.135","Text":"minus 4, plus 2, that\u0027s minus 8."},{"Start":"03:05.135 ","End":"03:06.515","Text":"If we add these up,"},{"Start":"03:06.515 ","End":"03:08.355","Text":"we do indeed get 0."},{"Start":"03:08.355 ","End":"03:11.160","Text":"That\u0027s twice we\u0027ve gotten 0."},{"Start":"03:11.160 ","End":"03:15.820","Text":"I just hope that the third 1 will also give us a 0."},{"Start":"03:16.490 ","End":"03:18.885","Text":"I exposed it all."},{"Start":"03:18.885 ","End":"03:21.810","Text":"I won\u0027t go into it in detail."},{"Start":"03:21.810 ","End":"03:23.580","Text":"On the third 1 to is enough,"},{"Start":"03:23.580 ","End":"03:26.770","Text":"you can check these calculations."},{"Start":"03:27.740 ","End":"03:30.810","Text":"We\u0027ve got 0 3 times."},{"Start":"03:30.810 ","End":"03:35.575","Text":"Yes, our set S is indeed orthogonal."},{"Start":"03:35.575 ","End":"03:40.035","Text":"However, it is not a basis."},{"Start":"03:40.035 ","End":"03:42.170","Text":"There\u0027s a very simple reason."},{"Start":"03:42.170 ","End":"03:45.470","Text":"It doesn\u0027t have the right number of 
elements."},{"Start":"03:45.470 ","End":"03:53.540","Text":"The set is too small because we need as many members as the dimension of the space."},{"Start":"03:53.540 ","End":"04:00.485","Text":"The dimension of this space of 3 by 3 matrices is 3 squared, which is 9."},{"Start":"04:00.485 ","End":"04:04.325","Text":"We only have 3, not even close."},{"Start":"04:04.325 ","End":"04:08.720","Text":"But even if it was 8, it still wouldn\u0027t be enough to make it a basis."},{"Start":"04:08.720 ","End":"04:11.870","Text":"It has to be exactly 9 elements,"},{"Start":"04:11.870 ","End":"04:14.455","Text":"so not a basis."},{"Start":"04:14.455 ","End":"04:18.595","Text":"Next, let\u0027s check if it\u0027s orthonormal."},{"Start":"04:18.595 ","End":"04:23.615","Text":"Orthonormality means that the norm of each of the vectors,"},{"Start":"04:23.615 ","End":"04:26.600","Text":"in this case, matrices, is 1."},{"Start":"04:26.600 ","End":"04:28.520","Text":"But we won\u0027t compute the norm,"},{"Start":"04:28.520 ","End":"04:32.460","Text":"we\u0027ll compute the norm squared, it\u0027s easier."},{"Start":"04:32.480 ","End":"04:38.435","Text":"The norm of something squared is the inner product of it with itself."},{"Start":"04:38.435 ","End":"04:41.350","Text":"I\u0027m going to use, of course,"},{"Start":"04:41.350 ","End":"04:43.525","Text":"the inner product of matrices,"},{"Start":"04:43.525 ","End":"04:46.790","Text":"the second 1 transpose times the first."},{"Start":"04:46.790 ","End":"04:49.650","Text":"Here is the transpose of this."},{"Start":"04:49.650 ","End":"04:51.210","Text":"This 1 is put here."},{"Start":"04:51.210 ","End":"04:54.720","Text":"There\u0027s no need to even draw a matrix,"},{"Start":"04:54.720 ","End":"05:00.220","Text":"we just have to add the 3 entries on the diagonal."},{"Start":"05:00.220 ","End":"05:04.360","Text":"The first 1 is this row with this column that gives us the 4."},{"Start":"05:04.360 ","End":"05:10.285","Text":"This row with 
this column gives us 4 times 4 plus 2 times 2 is 20."},{"Start":"05:10.285 ","End":"05:12.220","Text":"The last row with the last column,"},{"Start":"05:12.220 ","End":"05:15.805","Text":"36 plus 16 plus 4 is 56."},{"Start":"05:15.805 ","End":"05:17.875","Text":"That comes out, 80."},{"Start":"05:17.875 ","End":"05:19.770","Text":"It\u0027s not equal to 1,"},{"Start":"05:19.770 ","End":"05:22.820","Text":"so already we\u0027re not orthonormal."},{"Start":"05:22.820 ","End":"05:24.230","Text":"If you want the norm of this,"},{"Start":"05:24.230 ","End":"05:25.895","Text":"we just take the square root of 80."},{"Start":"05:25.895 ","End":"05:27.630","Text":"We\u0027ll do that later."},{"Start":"05:27.630 ","End":"05:30.585","Text":"Going a bit quicker on the second 1."},{"Start":"05:30.585 ","End":"05:34.740","Text":"The same thing, inner product of it with itself."},{"Start":"05:34.740 ","End":"05:38.805","Text":"The norm of this times this."},{"Start":"05:38.805 ","End":"05:40.815","Text":"It\u0027s a symmetric matrix."},{"Start":"05:40.815 ","End":"05:44.220","Text":"This looks exactly the same as this."},{"Start":"05:44.220 ","End":"05:47.120","Text":"Then, first row with first column gives 1,"},{"Start":"05:47.120 ","End":"05:48.785","Text":"second row with second column,"},{"Start":"05:48.785 ","End":"05:51.190","Text":"also minus 1 times minus 1 is 1,"},{"Start":"05:51.190 ","End":"05:54.420","Text":"then 0, so that\u0027s 2."},{"Start":"05:54.420 ","End":"05:56.280","Text":"That\u0027s 2 out of the 3,"},{"Start":"05:56.280 ","End":"05:58.065","Text":"we have 1 more to go."},{"Start":"05:58.065 ","End":"06:00.480","Text":"Again, we\u0027ll do it quickly."},{"Start":"06:00.480 ","End":"06:02.190","Text":"Norm squared."},{"Start":"06:02.190 ","End":"06:04.204","Text":"Inner product of it with itself."},{"Start":"06:04.204 ","End":"06:07.510","Text":"The transpose of this times this."},{"Start":"06:07.510 ","End":"06:10.605","Text":"It\u0027s not symmetric, so 
the transpose is this,"},{"Start":"06:10.605 ","End":"06:11.700","Text":"this 1 is here."},{"Start":"06:11.700 ","End":"06:14.675","Text":"Then, first row with first column is 1,"},{"Start":"06:14.675 ","End":"06:17.870","Text":"second row with second column is 1 plus 1 is 2."},{"Start":"06:17.870 ","End":"06:23.315","Text":"This with this comes out to be minus 1 times minus 1, 1,"},{"Start":"06:23.315 ","End":"06:26.690","Text":"another 1, and another 1 is 3,"},{"Start":"06:26.690 ","End":"06:29.910","Text":"and 1 and 2 and 3 is 6."},{"Start":"06:30.140 ","End":"06:33.105","Text":"Now we have all the norms squared."},{"Start":"06:33.105 ","End":"06:35.800","Text":"We know it\u0027s not orthonormal."},{"Start":"06:36.770 ","End":"06:43.635","Text":"When we have something orthogonal"},{"Start":"06:43.635 ","End":"06:52.160","Text":"but not orthonormal then we normalize it to make it orthonormal."},{"Start":"06:52.160 ","End":"06:56.410","Text":"Normalizing means dividing each 1 by its norm."},{"Start":"06:56.410 ","End":"07:00.120","Text":"The norms we got were 80,"},{"Start":"07:00.120 ","End":"07:02.610","Text":"2, and 6."},{"Start":"07:02.610 ","End":"07:04.660","Text":"I still see the 6."},{"Start":"07:06.830 ","End":"07:10.485","Text":"These are not the norms, these are the squares of the norms."},{"Start":"07:10.485 ","End":"07:12.635","Text":"We have to take the square root to get the norm,"},{"Start":"07:12.635 ","End":"07:14.540","Text":"when we divide each 1 by its norm,"},{"Start":"07:14.540 ","End":"07:16.160","Text":"it\u0027s 1 over the square root of 80,"},{"Start":"07:16.160 ","End":"07:17.540","Text":"1 over the square root of 2,"},{"Start":"07:17.540 ","End":"07:21.120","Text":"and 1 over the square root of 6, times this."},{"Start":"07:21.410 ","End":"07:25.350","Text":"This is not S, it\u0027s a normalized S. 
I\u0027ll"},{"Start":"07:25.350 ","End":"07:30.425","Text":"just put a little hat on it to show it\u0027s not S, it\u0027s something else."},{"Start":"07:30.425 ","End":"07:32.840","Text":"But this is now normalized."},{"Start":"07:32.840 ","End":"07:35.210","Text":"It\u0027s still not a basis though."},{"Start":"07:35.210 ","End":"07:37.840","Text":"It\u0027s orthonormal but it\u0027s not a basis."},{"Start":"07:37.840 ","End":"07:39.860","Text":"Yes, it\u0027s orthonormal, I mean,"},{"Start":"07:39.860 ","End":"07:45.560","Text":"once you take a vector and divide by its norm then it is a unit vector,"},{"Start":"07:45.560 ","End":"07:49.495","Text":"it has norm 1 and so everything has norm 1 now,"},{"Start":"07:49.495 ","End":"07:51.225","Text":"so yeah, it\u0027s orthonormal."},{"Start":"07:51.225 ","End":"07:53.860","Text":"Okay, we are done."}],"ID":10179}],"Thumbnail":null,"ID":7313},{"Name":"Gram Schmidt Process","TopicPlaylistFirstVideoID":0,"Duration":null,"Videos":[{"Watched":false,"Name":"Gram Schmidt Process","Duration":"4m 55s","ChapterTopicVideoID":26205,"CourseChapterTopicPlaylistID":253224,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.230","Text":"In this clip, we\u0027ll talk about the Gram-Schmidt process."},{"Start":"00:04.230 ","End":"00:08.940","Text":"It\u0027s a method, a technique developed by 2 mathematicians,"},{"Start":"00:08.940 ","End":"00:11.760","Text":"Gram and Schmidt, although, apparently,"},{"Start":"00:11.760 ","End":"00:13.530","Text":"Laplace knew about it,"},{"Start":"00:13.530 ","End":"00:15.000","Text":"maybe even before them."},{"Start":"00:15.000 ","End":"00:19.335","Text":"Anyway, it\u0027s a method for orthogonalizing,"},{"Start":"00:19.335 ","End":"00:20.910","Text":"I don\u0027t know if there is such a word,"},{"Start":"00:20.910 
","End":"00:22.080","Text":"but you\u0027ll see what I mean,"},{"Start":"00:22.080 ","End":"00:28.035","Text":"a set of vectors in an inner product space V of finite dimension n. Typically,"},{"Start":"00:28.035 ","End":"00:31.410","Text":"that will be our R^n, Euclidean space."},{"Start":"00:31.410 ","End":"00:33.390","Text":"Now what this process does,"},{"Start":"00:33.390 ","End":"00:37.290","Text":"it starts off with a linearly independent set of vectors,"},{"Start":"00:37.290 ","End":"00:41.835","Text":"call it S, consisting of v_1 up to v_k."},{"Start":"00:41.835 ","End":"00:46.205","Text":"k has to be less than or equal to n of course."},{"Start":"00:46.205 ","End":"00:50.510","Text":"It uses it to churn out another set,"},{"Start":"00:50.510 ","End":"00:53.570","Text":"this time orthogonal, call it S prime,"},{"Start":"00:53.570 ","End":"00:56.735","Text":"and that consists of the same number of vectors,"},{"Start":"00:56.735 ","End":"00:58.985","Text":"w_1 to w_k,"},{"Start":"00:58.985 ","End":"01:03.080","Text":"but the important thing is that these new ones"},{"Start":"01:03.080 ","End":"01:08.360","Text":"span the same k-dimensional subspace of V as S does."},{"Start":"01:08.360 ","End":"01:11.270","Text":"So we have a linearly independent set and we"},{"Start":"01:11.270 ","End":"01:16.020","Text":"convert it to a linearly independent orthogonal set."},{"Start":"01:16.020 ","End":"01:17.989","Text":"An orthogonal set is always linearly independent."},{"Start":"01:17.989 ","End":"01:21.905","Text":"It spans the same subspace as the original vectors."},{"Start":"01:21.905 ","End":"01:24.470","Text":"In particular, if S is a basis,"},{"Start":"01:24.470 ","End":"01:26.575","Text":"meaning that k equals n,"},{"Start":"01:26.575 ","End":"01:30.004","Text":"so we have n linearly independent vectors,"},{"Start":"01:30.004 ","End":"01:34.145","Text":"then S prime will be an orthogonal basis."},{"Start":"01:34.145 ","End":"01:39.250","Text":"Now let\u0027s describe this process, algorithm, method, recipe."},{"Start":"01:39.250 ","End":"01:43.580","Text":"Let V be an inner product space of dimension n and v_1,"},{"Start":"01:43.580 ","End":"01:48.230","Text":"v_2 up to v_k is a non-empty linearly independent set in V."},{"Start":"01:48.230 ","End":"01:56.229","Text":"Here\u0027s how we get the w_1 to w_k that we described above."},{"Start":"01:56.300 ","End":"02:01.920","Text":"We start off by letting w_1 equal v_1."},{"Start":"02:01.920 ","End":"02:03.420","Text":"So far, no change."},{"Start":"02:03.420 ","End":"02:11.660","Text":"Then w_2 is obtained by using this formula: v_2 minus, this is the dot product,"},{"Start":"02:11.660 ","End":"02:14.450","Text":"the inner product of v_2 with w_1,"},{"Start":"02:14.450 ","End":"02:20.000","Text":"w_1 we got from here, over the norm squared of w_1, times w_1."},{"Start":"02:20.000 ","End":"02:24.470","Text":"This denominator will not be 0 because in a linearly independent set,"},{"Start":"02:24.470 ","End":"02:28.520","Text":"nothing can be 0. 
Then w_3."},{"Start":"02:28.520 ","End":"02:33.225","Text":"Well, I\u0027ll let you pause and look at this."},{"Start":"02:33.225 ","End":"02:39.365","Text":"Then, to make sure you have the pattern, w_4 is equal to this."},{"Start":"02:39.365 ","End":"02:43.950","Text":"Each time we use the ones that we previously generated, w_1,"},{"Start":"02:43.950 ","End":"02:49.205","Text":"w_2, and w_3 feature here in generating w_4."},{"Start":"02:49.205 ","End":"02:51.805","Text":"If we generalize this,"},{"Start":"02:51.805 ","End":"02:55.485","Text":"this is the formula for how to get w_k"},{"Start":"02:55.485 ","End":"03:01.810","Text":"from v_k and all the previously generated ones, w_1 up to w_k minus 1."},{"Start":"03:01.810 ","End":"03:04.460","Text":"That is the description of the process."},{"Start":"03:04.460 ","End":"03:08.750","Text":"We just keep going until we reach the k that we want."},{"Start":"03:08.750 ","End":"03:13.295","Text":"There will be several examples after this tutorial."},{"Start":"03:13.295 ","End":"03:19.025","Text":"I want to emphasize again that the important thing is that they span the same subspace,"},{"Start":"03:19.025 ","End":"03:23.990","Text":"span of w_1 to w_k is the same as the old span of v_1 to v_k."},{"Start":"03:23.990 ","End":"03:27.425","Text":"It works at all levels like the span of w_1, w_2,"},{"Start":"03:27.425 ","End":"03:30.410","Text":"w_3 is the same as the span of v_1,"},{"Start":"03:30.410 ","End":"03:33.300","Text":"v_2, v_3, same for 4,"},{"Start":"03:33.300 ","End":"03:41.165","Text":"so on up to k. 
A remark now: with manual computations of these divisions,"},{"Start":"03:41.165 ","End":"03:47.400","Text":"you often get fractions and you can get rid of fractions by multiplying w_i,"},{"Start":"03:47.400 ","End":"03:50.495","Text":"a typical one of these, by some scalar,"},{"Start":"03:50.495 ","End":"03:54.245","Text":"like a common denominator of all the fractions that appear."},{"Start":"03:54.245 ","End":"03:58.640","Text":"If you multiply a vector by a non-0 scalar,"},{"Start":"03:58.640 ","End":"04:03.590","Text":"it doesn\u0027t affect the orthogonality of the set of vectors."},{"Start":"04:03.590 ","End":"04:09.005","Text":"We can multiply them by a constant to get rid of fractions if we want."},{"Start":"04:09.005 ","End":"04:10.935","Text":"Now an important note,"},{"Start":"04:10.935 ","End":"04:13.565","Text":"sometimes we want an orthonormal set,"},{"Start":"04:13.565 ","End":"04:16.250","Text":"not merely an orthogonal one."},{"Start":"04:16.250 ","End":"04:21.830","Text":"You remember, each one has a norm of 1 and they are mutually perpendicular,"},{"Start":"04:21.830 ","End":"04:25.490","Text":"meaning that the scalar product of any 2 different ones is 0."},{"Start":"04:25.490 ","End":"04:29.150","Text":"How do we get an orthonormal basis from an orthogonal one?"},{"Start":"04:29.150 ","End":"04:31.370","Text":"We just normalize each vector."},{"Start":"04:31.370 ","End":"04:35.165","Text":"Normalizing means that you divide it by its norm,"},{"Start":"04:35.165 ","End":"04:39.085","Text":"and then the norm of u_i will be 1."},{"Start":"04:39.085 ","End":"04:42.070","Text":"If you divide each w_i by its norm,"},{"Start":"04:42.070 ","End":"04:45.395","Text":"you get a set u_i which is orthonormal."},{"Start":"04:45.395 ","End":"04:47.030","Text":"Of course, it still spans"},{"Start":"04:47.030 ","End":"04:52.535","Text":"the same subspace because dividing by a constant doesn\u0027t affect that."},{"Start":"04:52.535 ","End":"04:55.650","Text":"That\u0027s 
it for this introduction."}],"ID":27109},{"Watched":false,"Name":"Exercise 1","Duration":"3m 55s","ChapterTopicVideoID":26201,"CourseChapterTopicPlaylistID":253224,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:07.620","Text":"In this exercise, U is a subspace of R^3 that\u0027s spanned by these three-row vectors."},{"Start":"00:07.620 ","End":"00:12.615","Text":"Our task is to find an orthonormal basis for this subspace U."},{"Start":"00:12.615 ","End":"00:15.960","Text":"First thing we\u0027re going to do is just find a basis."},{"Start":"00:15.960 ","End":"00:20.160","Text":"We don\u0027t know that these three are linearly independent and in fact, they\u0027re not."},{"Start":"00:20.160 ","End":"00:23.490","Text":"One way of doing that is writing these three as the rows of"},{"Start":"00:23.490 ","End":"00:28.350","Text":"a three-by-three matrix and we\u0027ll bring it to row-echelon form."},{"Start":"00:28.350 ","End":"00:33.000","Text":"We\u0027ll start with this matrix and we\u0027re going to make it"},{"Start":"00:33.000 ","End":"00:35.820","Text":"0 below the 1 here by subtracting"},{"Start":"00:35.820 ","End":"00:40.065","Text":"4 times this row from this row and 7 times this row from this row."},{"Start":"00:40.065 ","End":"00:43.550","Text":"We get this, you can check the computations."},{"Start":"00:43.550 ","End":"00:47.060","Text":"Then we\u0027re going to make a 0 here,"},{"Start":"00:47.060 ","End":"00:52.945","Text":"we can subtract twice the second row from the third row,"},{"Start":"00:52.945 ","End":"00:55.320","Text":"and that gives us this."},{"Start":"00:55.320 ","End":"00:59.790","Text":"Already we have 0 row so there\u0027s only 2 vectors in a basis."},{"Start":"00:59.790 ","End":"01:03.440","Text":"You can also divide this one by minus 3,"},{"Start":"01:03.440 
","End":"01:05.135","Text":"it\u0027s just nicer that way."},{"Start":"01:05.135 ","End":"01:08.155","Text":"This is actually reduced row-echelon form."},{"Start":"01:08.155 ","End":"01:16.695","Text":"We\u0027ll take these 2 vectors as a basis for U and we\u0027ll call them v_1 and v_2."},{"Start":"01:16.695 ","End":"01:22.235","Text":"Now we can apply the Gram-Schmidt process to get an orthogonal basis for U"},{"Start":"01:22.235 ","End":"01:25.085","Text":"now that we know that these are linearly independent."},{"Start":"01:25.085 ","End":"01:29.730","Text":"Here\u0027s a reminder of the Gram-Schmidt process for three vectors,"},{"Start":"01:29.730 ","End":"01:33.975","Text":"we actually don\u0027t even need three, we just need 2 anyway."},{"Start":"01:33.975 ","End":"01:39.310","Text":"w_1 equals v_1, so that\u0027s this 1, 2, 3."},{"Start":"01:41.060 ","End":"01:43.180","Text":"Next, w_2 is what\u0027s written here."},{"Start":"01:43.180 ","End":"01:45.275","Text":"We have to do some computations."},{"Start":"01:45.275 ","End":"01:49.970","Text":"We have to figure out what is v_2 dot-product with w_1,"},{"Start":"01:49.970 ","End":"01:53.880","Text":"and we\u0027ll also need the norm squared of w_1,"},{"Start":"01:53.880 ","End":"01:57.055","Text":"which is also w_1 dot w_1."},{"Start":"01:57.055 ","End":"01:58.795","Text":"We can do it in our heads,"},{"Start":"01:58.795 ","End":"02:03.730","Text":"0 times 1 plus 1 times 2 plus 2 times 3 is 8."},{"Start":"02:03.730 ","End":"02:09.090","Text":"The norm squared we get by adding 1 squared plus 2 squared plus 3 squared."},{"Start":"02:09.090 ","End":"02:12.560","Text":"This comes out to be 14 in the denominator,"},{"Start":"02:12.560 ","End":"02:15.710","Text":"we can divide top and bottom by 2 and we get 0,"},{"Start":"02:15.710 ","End":"02:18.620","Text":"1, 2 minus 4/7 of 1, 2, 3."},{"Start":"02:18.620 ","End":"02:22.535","Text":"You might remember I said something about getting rid of denominators."},{"Start":"02:22.535 
","End":"02:25.610","Text":"If this is the second orthogonal vector,"},{"Start":"02:25.610 ","End":"02:27.560","Text":"we can also multiply it by a constant."},{"Start":"02:27.560 ","End":"02:30.665","Text":"We can just multiply it by 7 and get rid of the fraction."},{"Start":"02:30.665 ","End":"02:36.585","Text":"We\u0027ll take another w_2 with a caret on top,"},{"Start":"02:36.585 ","End":"02:40.050","Text":"and that will give us 7 times this minus 4 times this,"},{"Start":"02:40.050 ","End":"02:42.210","Text":"which comes out to be this."},{"Start":"02:42.210 ","End":"02:43.980","Text":"Now we have this one,"},{"Start":"02:43.980 ","End":"02:47.975","Text":"and this one will be an orthogonal basis,"},{"Start":"02:47.975 ","End":"02:51.615","Text":"1, 2, 3 and minus 4, minus 1, 2."},{"Start":"02:51.615 ","End":"02:55.300","Text":"Let\u0027s just check that these two are orthogonal to each other."},{"Start":"02:55.300 ","End":"02:58.535","Text":"Take the dot product or the inner product,"},{"Start":"02:58.535 ","End":"03:03.445","Text":"1 times minus 4 plus 2 times minus 1."},{"Start":"03:03.445 ","End":"03:09.160","Text":"That\u0027s already minus 6 plus 3 times 2 is 0. 
We\u0027re not done yet."},{"Start":"03:09.160 ","End":"03:11.260","Text":"This is an orthogonal basis,"},{"Start":"03:11.260 ","End":"03:13.990","Text":"we were asked for an orthonormal basis."},{"Start":"03:13.990 ","End":"03:17.890","Text":"What we\u0027re going to do is normalize these two basis vectors."},{"Start":"03:17.890 ","End":"03:25.250","Text":"We do that by taking 1, 2, 3 and dividing it by the norm of 1, 2, 3."},{"Start":"03:25.250 ","End":"03:29.630","Text":"Similarly, minus 4 minus 1, 2 divided by the norm of this."},{"Start":"03:29.630 ","End":"03:33.575","Text":"The norm of this one is 1 square plus 2 squared plus 3 squared,"},{"Start":"03:33.575 ","End":"03:35.820","Text":"take the square root."},{"Start":"03:36.200 ","End":"03:40.745","Text":"This came out to be 14, so it\u0027s square root of 14,"},{"Start":"03:40.745 ","End":"03:44.120","Text":"and here 16 plus 1 plus 4 is 21."},{"Start":"03:44.120 ","End":"03:49.190","Text":"Here we have square root of 21, and these are the two"},{"Start":"03:49.190 ","End":"03:52.880","Text":"vectors that form an orthonormal basis for U."},{"Start":"03:52.880 ","End":"03:55.770","Text":"We are done."}],"ID":27105},{"Watched":false,"Name":"Exercise 2","Duration":"5m 27s","ChapterTopicVideoID":26202,"CourseChapterTopicPlaylistID":253224,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:05.475","Text":"In this exercise, we have a subspace U of R^4,"},{"Start":"00:05.475 ","End":"00:09.780","Text":"which is spanned by these 3, 4-dimensional vectors."},{"Start":"00:09.780 ","End":"00:15.255","Text":"We have to find an orthonormal basis for the subspace U."},{"Start":"00:15.255 ","End":"00:19.055","Text":"The first thing we\u0027re going to do is just find a basis."},{"Start":"00:19.055 ","End":"00:22.490","Text":"If we knew that these 3 
were linearly independent, we\u0027d have a basis"},{"Start":"00:22.490 ","End":"00:23.825","Text":"but we don\u0027t know that."},{"Start":"00:23.825 ","End":"00:25.505","Text":"Let\u0027s check."},{"Start":"00:25.505 ","End":"00:31.730","Text":"We can put these 3 as rows of a matrix and then bring it to echelon form."},{"Start":"00:31.730 ","End":"00:37.010","Text":"From here, we can divide first of all by 2 to put a 1 here."},{"Start":"00:37.010 ","End":"00:40.310","Text":"Then we can make a 0 here and here by subtracting"},{"Start":"00:40.310 ","End":"00:43.760","Text":"the first row from the second and also from the third."},{"Start":"00:43.760 ","End":"00:46.220","Text":"That gives us this."},{"Start":"00:46.220 ","End":"00:51.780","Text":"The thing to do here is just to switch the second and third rows."},{"Start":"00:51.780 ","End":"00:55.230","Text":"In that way, we have the 1 here and also a 0 here."},{"Start":"00:55.230 ","End":"00:59.900","Text":"Really, this already is in row echelon form."},{"Start":"00:59.900 ","End":"01:03.560","Text":"In fact, it\u0027s even in reduced row-echelon form because"},{"Start":"01:03.560 ","End":"01:09.780","Text":"the leading coefficient is 1 in each case."},{"Start":"01:09.780 ","End":"01:15.945","Text":"These 3 row vectors, 1, 1, 1, 1; 0, 1, minus 5, minus 4;"},{"Start":"01:15.945 ","End":"01:20.930","Text":"0, 0, 1, 3, they form the 3 vectors for a basis of U."},{"Start":"01:20.930 ","End":"01:22.550","Text":"Now that we have a basis,"},{"Start":"01:22.550 ","End":"01:24.845","Text":"we want to find an orthogonal basis."},{"Start":"01:24.845 ","End":"01:27.830","Text":"By the way, I\u0027ve just kept the 3 original ones now"},{"Start":"01:27.830 ","End":"01:30.945","Text":"that we know that the dimension is 3,"},{"Start":"01:30.945 ","End":"01:33.285","Text":"anyway, we\u0027ll work with these."},{"Start":"01:33.285 ","End":"01:38.825","Text":"We\u0027ll apply the Gram-Schmidt process to get the 
orthogonal basis."},{"Start":"01:38.825 ","End":"01:44.495","Text":"The first row in the Gram-Schmidt process is w_1 equals v_1,"},{"Start":"01:44.495 ","End":"01:47.605","Text":"so that\u0027s 1, 1, 1, 1."},{"Start":"01:47.605 ","End":"01:53.070","Text":"Next 1 is that w_2 is what\u0027s written here."},{"Start":"01:53.070 ","End":"01:57.265","Text":"We\u0027ll substitute w_1, we already have from here."},{"Start":"01:57.265 ","End":"01:58.700","Text":"We put that here, here, and here."},{"Start":"01:58.700 ","End":"02:02.680","Text":"V_2 we have from here."},{"Start":"02:02.680 ","End":"02:05.620","Text":"This is the expression we get."},{"Start":"02:05.620 ","End":"02:10.970","Text":"We need to do a dot-product here and a norm squared."},{"Start":"02:10.970 ","End":"02:15.035","Text":"Well, this norm squared is 1 square plus 1 square plus 1 square plus 1 squared is 4."},{"Start":"02:15.035 ","End":"02:21.600","Text":"Here we have 0, 1 times 1 is 1 minus 5 minus 4."},{"Start":"02:21.600 ","End":"02:25.025","Text":"We have 1 minus 5 minus 4, minus 8."},{"Start":"02:25.025 ","End":"02:27.260","Text":"This norm squared is 4,"},{"Start":"02:27.260 ","End":"02:32.870","Text":"and what we get is a minus with a minus cancel is an 8 over 4 is 2."},{"Start":"02:32.870 ","End":"02:40.050","Text":"We get this plus 2, 2, 2, 2, and that is equal to this vector."},{"Start":"02:40.050 ","End":"02:41.970","Text":"That\u0027s our w_2."},{"Start":"02:41.970 ","End":"02:44.130","Text":"Now we need the third 1."},{"Start":"02:44.130 ","End":"02:47.630","Text":"We\u0027ll use this formula for the third vector."},{"Start":"02:47.630 ","End":"02:52.780","Text":"We already have w_1 and w_2 from here and here."},{"Start":"02:52.780 ","End":"02:58.114","Text":"We plug them in. 
This is the expression that we get."},{"Start":"02:58.114 ","End":"03:01.265","Text":"We need to perform some computations."},{"Start":"03:01.265 ","End":"03:07.055","Text":"Now, this inner product is 0 plus 0 plus 1 plus 3,"},{"Start":"03:07.055 ","End":"03:11.440","Text":"so this comes out 4, so this is minus 4 over 4 times this."},{"Start":"03:11.440 ","End":"03:16.170","Text":"Here, this denominator is 2 squared plus 3 squared plus 3 squared plus 2 squared,"},{"Start":"03:16.170 ","End":"03:19.590","Text":"which is 4 and 9 and 9 and 4 is 26."},{"Start":"03:19.590 ","End":"03:28.140","Text":"Here, we have 1 times minus 3 and 3 times minus 2 is minus 9 times this."},{"Start":"03:28.140 ","End":"03:29.760","Text":"4 with the 4 cancels,"},{"Start":"03:29.760 ","End":"03:35.840","Text":"so it\u0027s this vector minus this vector plus 9 over 26 times this."},{"Start":"03:35.840 ","End":"03:41.200","Text":"But we can multiply by 26 to get rid of fractions,"},{"Start":"03:41.200 ","End":"03:45.590","Text":"and the computation here is 0, 0, 1, 3 minus 1, 1, 1, 1."},{"Start":"03:45.590 ","End":"03:46.880","Text":"It gives us this vector."},{"Start":"03:46.880 ","End":"03:50.620","Text":"26 times this plus 9 times this,"},{"Start":"03:50.620 ","End":"03:53.435","Text":"and that comes out to be the following."},{"Start":"03:53.435 ","End":"03:58.680","Text":"Now we have w_1, w_2, and w_3."},{"Start":"03:58.680 ","End":"04:02.090","Text":"Well, w_3 is multiplied by a scalar."},{"Start":"04:02.090 ","End":"04:05.120","Text":"We can take it as an orthogonal basis,"},{"Start":"04:05.120 ","End":"04:10.200","Text":"following 3 vectors, we had the 1,1,1,1 from here,"},{"Start":"04:10.200 ","End":"04:15.225","Text":"and then we had the 2, 3 minus 3, minus 2 from here,"},{"Start":"04:15.225 ","End":"04:18.585","Text":"and then minus 8, 1, minus 27, 34 from here."},{"Start":"04:18.585 ","End":"04:22.475","Text":"Let\u0027s check that these 3 vectors are orthogonal to each 
other."},{"Start":"04:22.475 ","End":"04:23.990","Text":"We\u0027ll do a 3 dot products."},{"Start":"04:23.990 ","End":"04:27.200","Text":"The first, with the second, the first with the third,"},{"Start":"04:27.200 ","End":"04:28.625","Text":"the second with the third."},{"Start":"04:28.625 ","End":"04:30.590","Text":"I\u0027ll leave you to do the computations."},{"Start":"04:30.590 ","End":"04:31.805","Text":"Let\u0027s just do 1 of them."},{"Start":"04:31.805 ","End":"04:36.360","Text":"The first 1, this is just 1 times 2 plus 1 times 3,"},{"Start":"04:36.360 ","End":"04:41.990","Text":"so we get 2 plus 3 plus minus 3 plus minus 2 is 0,"},{"Start":"04:41.990 ","End":"04:43.535","Text":"and check the other 2."},{"Start":"04:43.535 ","End":"04:46.010","Text":"That\u0027s our orthogonal basis."},{"Start":"04:46.010 ","End":"04:48.530","Text":"For the orthonormal basis,"},{"Start":"04:48.530 ","End":"04:51.535","Text":"we need to divide each 1 by its norm."},{"Start":"04:51.535 ","End":"04:54.995","Text":"Divide the first 1 by this square root"},{"Start":"04:54.995 ","End":"04:58.200","Text":"once because it\u0027s 1 squared and here 2 squared"},{"Start":"04:58.200 ","End":"05:01.805","Text":"plus 3 squared and so on and here 8 squared plus 1 squared."},{"Start":"05:01.805 ","End":"05:04.475","Text":"The computations are a bit boring."},{"Start":"05:04.475 ","End":"05:07.790","Text":"Anyway, this 1 comes out to be, we\u0027ve done this before."},{"Start":"05:07.790 ","End":"05:09.740","Text":"Square root of 4 is 2."},{"Start":"05:09.740 ","End":"05:15.960","Text":"Here, square root of 26 and the calculator is for this purpose."},{"Start":"05:15.960 ","End":"05:18.220","Text":"This is what we get."},{"Start":"05:18.220 ","End":"05:20.450","Text":"A bit suspicious of the large number,"},{"Start":"05:20.450 ","End":"05:25.550","Text":"but maybe okay and I will declare that this is our answer."},{"Start":"05:25.550 ","End":"05:28.320","Text":"We are 
done."}],"ID":27106},{"Watched":false,"Name":"Exercise 3","Duration":"6m 18s","ChapterTopicVideoID":26203,"CourseChapterTopicPlaylistID":253224,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.380","Text":"This exercise involves the space P_3 of x,"},{"Start":"00:04.380 ","End":"00:11.580","Text":"which is the vector space of real polynomials of degree less than or equal to 3."},{"Start":"00:11.580 ","End":"00:17.445","Text":"By the way, it\u0027s sometimes denoted with square brackets and sometimes just as P_3."},{"Start":"00:17.445 ","End":"00:21.410","Text":"Note that we have a basis of 1, x,"},{"Start":"00:21.410 ","End":"00:22.895","Text":"x squared and x cubed,"},{"Start":"00:22.895 ","End":"00:25.220","Text":"so that it has dimension 4 in fact,"},{"Start":"00:25.220 ","End":"00:27.055","Text":"even though it\u0027s P_3."},{"Start":"00:27.055 ","End":"00:31.040","Text":"Our task is to find an orthonormal basis for P_3 of"},{"Start":"00:31.040 ","End":"00:37.135","Text":"x with the integral inner product on the interval minus 1, 1."},{"Start":"00:37.135 ","End":"00:40.840","Text":"I\u0027ll remind you what the integral inner product is."},{"Start":"00:40.840 ","End":"00:43.880","Text":"In general, on an interval a, b,"},{"Start":"00:43.880 ","End":"00:50.210","Text":"it\u0027s defined by f inner product with g is the integral of f of x,"},{"Start":"00:50.210 ","End":"00:53.110","Text":"g of x dx on the interval."},{"Start":"00:53.110 ","End":"00:54.860","Text":"Just a note;"},{"Start":"00:54.860 ","End":"01:00.235","Text":"be careful to distinguish between the number 1 and the polynomial function 1."},{"Start":"01:00.235 ","End":"01:03.675","Text":"I\u0027m going to use the Gram-Schmidt process."},{"Start":"01:03.675 ","End":"01:07.490","Text":"We\u0027ll start with our basis that was 
given above: 1,"},{"Start":"01:07.490 ","End":"01:09.020","Text":"x, x squared, x cubed,"},{"Start":"01:09.020 ","End":"01:10.160","Text":"and we\u0027ll label them v_1,"},{"Start":"01:10.160 ","End":"01:12.085","Text":"v_2, v_3, v_4."},{"Start":"01:12.085 ","End":"01:15.870","Text":"Then we use Gram-Schmidt to get an orthogonal basis,"},{"Start":"01:15.870 ","End":"01:18.475","Text":"w_1, w_2, w_3, w_4."},{"Start":"01:18.475 ","End":"01:22.415","Text":"Later we will convert the orthogonal to an orthonormal basis."},{"Start":"01:22.415 ","End":"01:27.330","Text":"The formula that we had is as follows,"},{"Start":"01:27.330 ","End":"01:28.590","Text":"that\u0027s how we get w_1,"},{"Start":"01:28.590 ","End":"01:31.305","Text":"w_2, w_3, w_4."},{"Start":"01:31.305 ","End":"01:35.720","Text":"The inner product is the integral in general,"},{"Start":"01:35.720 ","End":"01:39.320","Text":"for an interval is from a to b here our interval is minus 1 to 1."},{"Start":"01:39.320 ","End":"01:42.095","Text":"This is the dot product or inner product."},{"Start":"01:42.095 ","End":"01:44.160","Text":"Now do the computation,"},{"Start":"01:44.160 ","End":"01:46.500","Text":"w_1 is v_1 is 1,"},{"Start":"01:46.500 ","End":"01:50.310","Text":"that\u0027s the v_1, w_2 is this."},{"Start":"01:50.310 ","End":"01:53.745","Text":"We have w_1 already from here."},{"Start":"01:53.745 ","End":"01:57.060","Text":"This is equal to just substituting everything,"},{"Start":"01:57.060 ","End":"02:02.255","Text":"v_2 is x and w_1 is 1, v_2 is x."},{"Start":"02:02.255 ","End":"02:08.555","Text":"We get this expression and we need the inner product of x with 1."},{"Start":"02:08.555 ","End":"02:12.140","Text":"I\u0027ll leave you to check this integral comes out to be 0."},{"Start":"02:12.140 ","End":"02:14.555","Text":"It doesn\u0027t matter what the denominator is,"},{"Start":"02:14.555 ","End":"02:17.419","Text":"and this thing just comes out to be x."},{"Start":"02:17.419 
","End":"02:19.465","Text":"What about w_3?"},{"Start":"02:19.465 ","End":"02:21.215","Text":"This is equal to this."},{"Start":"02:21.215 ","End":"02:23.900","Text":"Again, we\u0027ll need some computations."},{"Start":"02:23.900 ","End":"02:28.820","Text":"This is what we get when we plug in v_3 equals x squared,"},{"Start":"02:28.820 ","End":"02:31.065","Text":"and then we have w_1 and w_2."},{"Start":"02:31.065 ","End":"02:38.440","Text":"We need to compute some inner products and some norms. New page."},{"Start":"02:38.440 ","End":"02:43.505","Text":"I claim that this is equal to 2/3, this is equal to 2."},{"Start":"02:43.505 ","End":"02:47.300","Text":"I\u0027ll do the computations at the side here,"},{"Start":"02:47.300 ","End":"02:52.150","Text":"x squared dot-product with 1 is this integral."},{"Start":"02:52.150 ","End":"02:54.980","Text":"I\u0027m going to leave you to check the integrals on your own,"},{"Start":"02:54.980 ","End":"02:56.870","Text":"not a course in integrals."},{"Start":"02:56.870 ","End":"03:00.275","Text":"This comes out to be 2/3, like I said."},{"Start":"03:00.275 ","End":"03:05.395","Text":"The norm of 1 squared is the dot-product of 1 with 1."},{"Start":"03:05.395 ","End":"03:06.900","Text":"That comes out to 2,"},{"Start":"03:06.900 ","End":"03:11.165","Text":"so we have this and the dot-product of x squared with x,"},{"Start":"03:11.165 ","End":"03:13.370","Text":"the inner product comes out to be 0."},{"Start":"03:13.370 ","End":"03:16.880","Text":"We don\u0027t need to compute this denominator, at least not yet."},{"Start":"03:16.880 ","End":"03:21.295","Text":"We get this, which is x squared minus 1/3."},{"Start":"03:21.295 ","End":"03:24.290","Text":"I would like to get rid of the fractions,"},{"Start":"03:24.290 ","End":"03:27.170","Text":"so we can multiply this by 3."},{"Start":"03:27.170 ","End":"03:28.730","Text":"It\u0027s not exactly w_3,"},{"Start":"03:28.730 ","End":"03:32.350","Text":"we\u0027ll call it w_3 with a hat on it, 
caret."},{"Start":"03:32.350 ","End":"03:35.490","Text":"Going to be 3x squared minus 1."},{"Start":"03:35.490 ","End":"03:38.190","Text":"That\u0027ll be our w_3."},{"Start":"03:38.190 ","End":"03:40.920","Text":"Next, w_4."},{"Start":"03:40.920 ","End":"03:43.165","Text":"This is the expression."},{"Start":"03:43.165 ","End":"03:49.040","Text":"This is what happens when we plug in v_4 and w_1,"},{"Start":"03:49.040 ","End":"03:50.945","Text":"w_2, w_3, etc."},{"Start":"03:50.945 ","End":"03:53.060","Text":"Do the computations over here."},{"Start":"03:53.060 ","End":"03:57.255","Text":"X cubed with 1 is 0."},{"Start":"03:57.255 ","End":"03:59.230","Text":"That takes care of this one."},{"Start":"03:59.230 ","End":"04:04.280","Text":"Then x cubed with x integral of x to the 4th,"},{"Start":"04:04.280 ","End":"04:06.860","Text":"and so on is 2/5."},{"Start":"04:06.860 ","End":"04:11.015","Text":"We also need this expression."},{"Start":"04:11.015 ","End":"04:14.360","Text":"This comes out to be the integral."},{"Start":"04:14.360 ","End":"04:17.555","Text":"Well, comes out to be 2/3."},{"Start":"04:17.555 ","End":"04:20.390","Text":"This dot-product we still have to compute is equal to 0,"},{"Start":"04:20.390 ","End":"04:22.220","Text":"so we don\u0027t need this."},{"Start":"04:22.220 ","End":"04:24.940","Text":"I\u0027ll show you that this is 0."},{"Start":"04:24.940 ","End":"04:27.735","Text":"Well, like I said, the integrals,"},{"Start":"04:27.735 ","End":"04:29.210","Text":"you can follow on your own,"},{"Start":"04:29.210 ","End":"04:31.215","Text":"I\u0027ll just show you the computation."},{"Start":"04:31.215 ","End":"04:34.270","Text":"Basically, the integral of an odd function,"},{"Start":"04:34.270 ","End":"04:37.730","Text":"if it\u0027s just odd powers from minus 1 to 1,"},{"Start":"04:37.730 ","End":"04:40.745","Text":"or any symmetric interval will be 0."},{"Start":"04:40.745 ","End":"04:45.510","Text":"That\u0027s w_4. 
Yeah,"},{"Start":"04:45.510 ","End":"04:49.100","Text":"this over this is 3/5 and we get rid of fractions."},{"Start":"04:49.100 ","End":"04:51.695","Text":"To clear the denominator, multiply by 5,"},{"Start":"04:51.695 ","End":"04:57.900","Text":"and we\u0027ll take w_4 adjusted to be 5x cubed minus 3x."},{"Start":"04:57.900 ","End":"05:00.980","Text":"Collecting all together, w_1,"},{"Start":"05:00.980 ","End":"05:02.720","Text":"w_2, w_3, w_4,"},{"Start":"05:02.720 ","End":"05:04.970","Text":"this is our orthogonal basis."},{"Start":"05:04.970 ","End":"05:09.200","Text":"We\u0027re still not done because we were asked for an orthonormal basis."},{"Start":"05:09.200 ","End":"05:13.640","Text":"What we do is just divide each one of these;"},{"Start":"05:13.640 ","End":"05:15.350","Text":"1, x, 3x squared minus 1,"},{"Start":"05:15.350 ","End":"05:18.214","Text":"5x cubed minus 3x, by its norm."},{"Start":"05:18.214 ","End":"05:20.835","Text":"Now earlier on we computed this,"},{"Start":"05:20.835 ","End":"05:22.470","Text":"came out to be 2."},{"Start":"05:22.470 ","End":"05:24.390","Text":"Let\u0027s go back and see."},{"Start":"05:24.390 ","End":"05:27.645","Text":"Yeah, this is equal to 2,"},{"Start":"05:27.645 ","End":"05:32.890","Text":"and this is equal to 2/3."},{"Start":"05:32.890 ","End":"05:35.270","Text":"I stand corrected."},{"Start":"05:35.270 ","End":"05:38.330","Text":"This is the square root of 2 because we computed the norm"},{"Start":"05:38.330 ","End":"05:42.655","Text":"squared and this will be the square root of 2/3."},{"Start":"05:42.655 ","End":"05:44.850","Text":"Now we need this."},{"Start":"05:44.850 ","End":"05:51.350","Text":"It\u0027s computed similarly; you can study the computations on your own. It comes out to be 8/5 here,"},{"Start":"05:51.350 ","End":"05:56.055","Text":"and the other one comes out to be 8/7."},{"Start":"05:56.055 ","End":"05:59.435","Text":"Now plug all those in and don\u0027t forget the square root"},{"Start":"05:59.435 
","End":"06:03.260","Text":"because the norm is the square root of the dot-product with itself."},{"Start":"06:03.260 ","End":"06:05.100","Text":"We have 1 over, like I said,"},{"Start":"06:05.100 ","End":"06:06.990","Text":"square root of 2, then square root of 2/3,"},{"Start":"06:06.990 ","End":"06:11.550","Text":"here square root of 8/5 and here square root of 8/7."},{"Start":"06:11.550 ","End":"06:13.455","Text":"That\u0027s the answer."},{"Start":"06:13.455 ","End":"06:19.150","Text":"Maybe just highlight it and we are done."}],"ID":27107},{"Watched":false,"Name":"Exercise 4","Duration":"9m 4s","ChapterTopicVideoID":26204,"CourseChapterTopicPlaylistID":253224,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:10.650","Text":"In this exercise, U is a subspace of M_2 of R. This is the space of 2 by 2 real matrices."},{"Start":"00:10.650 ","End":"00:19.350","Text":"U is defined as the span of these 3 matrices considered as vectors in the vector space."},{"Start":"00:19.350 ","End":"00:24.120","Text":"Our task is to find an orthonormal basis of U with respect to"},{"Start":"00:24.120 ","End":"00:28.530","Text":"the usual inner product on this space. To remind"},{"Start":"00:28.530 ","End":"00:38.055","Text":"you, the inner product is defined as A.B equals the trace of B transpose A."},{"Start":"00:38.055 ","End":"00:39.740","Text":"To make it easier,"},{"Start":"00:39.740 ","End":"00:43.685","Text":"you can assume that these 3 vectors,"},{"Start":"00:43.685 ","End":"00:47.630","Text":"matrices, form a linearly independent set."},{"Start":"00:47.630 ","End":"00:50.165","Text":"But if you don\u0027t want to take my word for it,"},{"Start":"00:50.165 ","End":"00:53.255","Text":"I\u0027ll show you why at the end of the clip."},{"Start":"00:53.255 ","End":"00:55.915","Text":"Let\u0027s get 
started."},{"Start":"00:55.915 ","End":"01:00.375","Text":"We have this basis B of U,"},{"Start":"01:00.375 ","End":"01:02.130","Text":"call it v_1, v_2,"},{"Start":"01:02.130 ","End":"01:09.095","Text":"v_3 and we\u0027ll use Gram-Schmidt process to get an orthogonal basis."},{"Start":"01:09.095 ","End":"01:11.030","Text":"This is the formula,"},{"Start":"01:11.030 ","End":"01:13.190","Text":"3 formulas that we\u0027re going to use."},{"Start":"01:13.190 ","End":"01:17.395","Text":"We\u0027ll start with w_1 on a new page."},{"Start":"01:17.395 ","End":"01:19.875","Text":"W_1 is v_1,"},{"Start":"01:19.875 ","End":"01:21.750","Text":"so we have the first 1,"},{"Start":"01:21.750 ","End":"01:23.035","Text":"that\u0027s the easy 1."},{"Start":"01:23.035 ","End":"01:28.570","Text":"Then w_2 using this formula and we have w_1 from here."},{"Start":"01:28.570 ","End":"01:31.640","Text":"This is the computation that we need."},{"Start":"01:31.640 ","End":"01:35.975","Text":"We need this inner product and this norm,"},{"Start":"01:35.975 ","End":"01:38.210","Text":"do the computations on the other side."},{"Start":"01:38.210 ","End":"01:43.115","Text":"I want to show that this is 2 and this is 30."},{"Start":"01:43.115 ","End":"01:48.960","Text":"V_2.w_1, the trace of w_1 transpose, v_2."},{"Start":"01:48.960 ","End":"01:51.360","Text":"W_1 transpose is this,"},{"Start":"01:51.360 ","End":"02:01.055","Text":"we just switched the 3 and the 2 around and v_2 is this."},{"Start":"02:01.055 ","End":"02:03.440","Text":"Then multiply the 2 matrices,"},{"Start":"02:03.440 ","End":"02:04.580","Text":"but we don\u0027t need everything."},{"Start":"02:04.580 ","End":"02:08.555","Text":"These 2 are don\u0027t care entry because we just want the diagonal."},{"Start":"02:08.555 ","End":"02:11.140","Text":"Don\u0027t need the diagonal for the trace."},{"Start":"02:11.140 ","End":"02:14.270","Text":"1, 3 times 1 minus 1,"},{"Start":"02:14.270 ","End":"02:17.180","Text":"1 minus 3 minus 2 
and 2,"},{"Start":"02:17.180 ","End":"02:20.170","Text":"4 with 2, 0 gives 4."},{"Start":"02:20.170 ","End":"02:23.805","Text":"Then minus 2 plus 4 is 2."},{"Start":"02:23.805 ","End":"02:25.495","Text":"That\u0027s the 2 here."},{"Start":"02:25.495 ","End":"02:27.180","Text":"Now, the 30,"},{"Start":"02:27.180 ","End":"02:33.120","Text":"norm of w_1 squared is w_1.w_1, the inner"},{"Start":"02:33.120 ","End":"02:37.050","Text":"product of w_1 with itself, the trace of w_1 transpose"},{"Start":"02:37.050 ","End":"02:41.135","Text":"w_1. This is w_1 and this is its transpose,"},{"Start":"02:41.135 ","End":"02:43.865","Text":"switching the 3 and the 2 around."},{"Start":"02:43.865 ","End":"02:47.190","Text":"Then here, we have 1, 3 with 1,"},{"Start":"02:47.190 ","End":"02:48.855","Text":"3 gives us 10,"},{"Start":"02:48.855 ","End":"02:50.190","Text":"and 2, 4 with 2,"},{"Start":"02:50.190 ","End":"02:52.350","Text":"4 gives us 20,"},{"Start":"02:52.350 ","End":"02:54.390","Text":"2 squared plus 4 squared,"},{"Start":"02:54.390 ","End":"02:56.240","Text":"4 plus 16 altogether,"},{"Start":"02:56.240 ","End":"03:00.830","Text":"we have 30, and that\u0027s the 30 here, so that\u0027s w_2."},{"Start":"03:00.830 ","End":"03:02.209","Text":"Now, we don\u0027t want fractions,"},{"Start":"03:02.209 ","End":"03:04.400","Text":"so it won\u0027t be normalized,"},{"Start":"03:04.400 ","End":"03:09.650","Text":"just adjusted: call it w_2 bar, multiplied by 15."},{"Start":"03:09.650 ","End":"03:14.765","Text":"Then we have 15 times this minus 1 times this"},{"Start":"03:14.765 ","End":"03:21.465","Text":"and this comes out to be 15 times 1 minus 1 is 14, etc."},{"Start":"03:21.465 ","End":"03:24.965","Text":"We can do some more adjusting because these are all even numbers."},{"Start":"03:24.965 ","End":"03:31.725","Text":"We can divide it by 2 and get another w_2, this time with a hat on."},{"Start":"03:31.725 ","End":"03:34.010","Text":"That is equal to half of this,"},{"Start":"03:34.010 
","End":"03:35.945","Text":"half of this, half of this, half of this."},{"Start":"03:35.945 ","End":"03:40.460","Text":"Next, we want w_3, using this formula."},{"Start":"03:40.460 ","End":"03:44.730","Text":"We\u0027ll use not the original"},{"Start":"03:44.730 ","End":"03:48.510","Text":"w_2, but the w_2 with the hat on."},{"Start":"03:48.510 ","End":"03:51.060","Text":"Now, we know v_3,"},{"Start":"03:51.060 ","End":"03:52.350","Text":"we know w_1,"},{"Start":"03:52.350 ","End":"03:54.090","Text":"we know w_2,"},{"Start":"03:54.090 ","End":"03:56.255","Text":"just put those in."},{"Start":"03:56.255 ","End":"04:01.015","Text":"Let\u0027s compute the inner product of v_3 with w_1."},{"Start":"04:01.015 ","End":"04:06.190","Text":"By definition, it\u0027s the trace of w_1 transpose v_3."},{"Start":"04:06.190 ","End":"04:08.670","Text":"This is what it comes out as."},{"Start":"04:08.670 ","End":"04:11.525","Text":"This is the w_1 transpose."},{"Start":"04:11.525 ","End":"04:13.820","Text":"We just want the diagonal of the product."},{"Start":"04:13.820 ","End":"04:15.440","Text":"1, 3 with 0,"},{"Start":"04:15.440 ","End":"04:18.480","Text":"1 gives us 3, and 2,"},{"Start":"04:18.480 ","End":"04:20.490","Text":"4 with 2, 1,"},{"Start":"04:20.490 ","End":"04:23.550","Text":"gives us 4 plus 4 is 8."},{"Start":"04:23.550 ","End":"04:25.860","Text":"The trace of this is 11."},{"Start":"04:25.860 ","End":"04:30.520","Text":"We also need v_3 with w_2,"},{"Start":"04:30.660 ","End":"04:36.880","Text":"which is equal to the trace of w_2 transpose v_3."},{"Start":"04:36.880 ","End":"04:42.900","Text":"This is the transpose of w_2 with the hat."},{"Start":"04:42.900 ","End":"04:53.020","Text":"Here, we just flip the minus 9 and the 14; then with v_3, multiply out just the diagonal entries we need."},{"Start":"04:53.020 ","End":"04:59.400","Text":"We get minus 9 here and 26 here and then the sum of the diagonal is 17."},{"Start":"04:59.400 ","End":"05:03.055","Text":"Now 
that we have our 11 and 17,"},{"Start":"05:03.055 ","End":"05:05.725","Text":"we can put them here and here."},{"Start":"05:05.725 ","End":"05:11.465","Text":"This we already know is 30, from here."},{"Start":"05:11.465 ","End":"05:13.545","Text":"We just need this."},{"Start":"05:13.545 ","End":"05:15.510","Text":"I\u0027ll tell you now, it\u0027s 330,"},{"Start":"05:15.510 ","End":"05:17.359","Text":"we\u0027ll do the computation in a moment,"},{"Start":"05:17.359 ","End":"05:23.720","Text":"just so as not to break the flow. We have 17 over 330, and that"},{"Start":"05:23.720 ","End":"05:31.390","Text":"comes out like this if you multiply w_3 by 330 to get rid of the denominators."},{"Start":"05:31.390 ","End":"05:37.785","Text":"This is what we get, because 330 over 30 is 11, and 11 times 11 is 121."},{"Start":"05:37.785 ","End":"05:42.620","Text":"Here, we have 17 and we do this computation."},{"Start":"05:42.620 ","End":"05:44.690","Text":"Well, I\u0027ll just show you what it is."},{"Start":"05:44.690 ","End":"05:48.030","Text":"It\u0027s just boring arithmetic."},{"Start":"05:48.040 ","End":"05:54.530","Text":"We get this and then we see that we can simplify it further."},{"Start":"05:54.530 ","End":"05:57.650","Text":"We can take 60 out and we have minus 4, 3,"},{"Start":"05:57.650 ","End":"06:02.895","Text":"2 minus 2 and we\u0027ll let this be the new w_3."},{"Start":"06:02.895 ","End":"06:06.840","Text":"I will call this matrix w_3 with a hat."},{"Start":"06:06.840 ","End":"06:12.240","Text":"I still need to show the computation for the 330."},{"Start":"06:13.880 ","End":"06:19.575","Text":"It\u0027s the inner product of w_2 with itself, from here."},{"Start":"06:19.575 ","End":"06:23.780","Text":"This is the w_2 and this is its transpose,"},{"Start":"06:23.780 ","End":"06:26.225","Text":"flipping along the diagonal, multiply out,"},{"Start":"06:26.225 ","End":"06:28.715","Text":"just compute what\u0027s on the diagonal,"},{"Start":"06:28.715 ","End":"06:33.115","Text":"and then add up the diagonal: 330, like 
we said."},{"Start":"06:33.115 ","End":"06:40.140","Text":"Now, we have an orthogonal basis and here it is: w_1,"},{"Start":"06:40.140 ","End":"06:44.250","Text":"w_2, and the adjusted w_3."},{"Start":"06:44.250 ","End":"06:50.874","Text":"It\u0027s orthogonal; we\u0027ll make it orthonormal by dividing each 1 by its norm."},{"Start":"06:50.874 ","End":"06:58.625","Text":"Now, we already know the norm squared of w_1 is 30 and for w_2,"},{"Start":"06:58.625 ","End":"07:02.425","Text":"it\u0027s 330, we\u0027re missing the last 1."},{"Start":"07:02.425 ","End":"07:08.069","Text":"It\u0027s going to be the trace of w_3 transpose with w_3"},{"Start":"07:08.069 ","End":"07:14.505","Text":"and it comes out to be 33."},{"Start":"07:14.505 ","End":"07:23.270","Text":"We have the norm squared of w_1 and w_2 and w_3."},{"Start":"07:23.270 ","End":"07:28.415","Text":"We need to take the square roots of these for the denominators here."},{"Start":"07:28.415 ","End":"07:32.810","Text":"Our orthonormal basis is like the orthogonal basis,"},{"Start":"07:32.810 ","End":"07:36.000","Text":"but we divide by root 30 here,"},{"Start":"07:36.000 ","End":"07:39.090","Text":"root 330, and root 33."},{"Start":"07:39.090 ","End":"07:44.240","Text":"We\u0027re basically done except that I have a debt to you to show you that"},{"Start":"07:44.240 ","End":"07:50.780","Text":"the original 3 matrices in that set are linearly independent."},{"Start":"07:50.780 ","End":"07:52.505","Text":"You just took my word for it, but yeah,"},{"Start":"07:52.505 ","End":"08:01.330","Text":"now I\u0027ll show you using matrices and row operations to bring it to row echelon form."},{"Start":"08:01.330 ","End":"08:03.920","Text":"If we flatten this out,"},{"Start":"08:03.920 ","End":"08:06.020","Text":"so this is 1, 2, 3, 4, 1,"},{"Start":"08:06.020 ","End":"08:07.880","Text":"2 minus 1, 0, 0,"},{"Start":"08:07.880 ","End":"08:09.155","Text":"2, 1, 1,"},{"Start":"08:09.155 ","End":"08:11.310","Text":"and those 
are here."},{"Start":"08:11.570 ","End":"08:18.250","Text":"Then we subtract this row from this row to get a 0 here."},{"Start":"08:18.250 ","End":"08:23.555","Text":"Then we can subtract this row from this row to get a 0 here."},{"Start":"08:23.555 ","End":"08:28.820","Text":"We can divide this by minus 4 to get a 1 and a 1."},{"Start":"08:28.820 ","End":"08:36.155","Text":"Then we can add twice this row to the last row,"},{"Start":"08:36.155 ","End":"08:38.390","Text":"and we get a minus 1 here,"},{"Start":"08:38.390 ","End":"08:39.770","Text":"then multiply by minus 1."},{"Start":"08:39.770 ","End":"08:41.375","Text":"You can make it plus 1."},{"Start":"08:41.375 ","End":"08:45.095","Text":"This is, in fact, the reduced row echelon form"},{"Start":"08:45.095 ","End":"08:47.480","Text":"where the leading coefficients are actually 1,"},{"Start":"08:47.480 ","End":"08:51.305","Text":"but we have 3 rows and we didn\u0027t get a row of zeros."},{"Start":"08:51.305 ","End":"08:55.565","Text":"These are linearly independent so the rank really is 3,"},{"Start":"08:55.565 ","End":"09:01.410","Text":"which means that these 3, or these 3, were linearly independent."},{"Start":"09:01.410 ","End":"09:05.580","Text":"With that, we\u0027ve concluded this exercise."}],"ID":27108}],"Thumbnail":null,"ID":253224},{"Name":"Orthogonal Matrices","TopicPlaylistFirstVideoID":0,"Duration":null,"Videos":[{"Watched":false,"Name":"Introduction and Definition","Duration":"4m 54s","ChapterTopicVideoID":26210,"CourseChapterTopicPlaylistID":253225,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:05.340","Text":"In this clip, we\u0027ll introduce a new concept, orthogonal matrices."},{"Start":"00:05.340 ","End":"00:07.695","Text":"We\u0027ll start with the definition."},{"Start":"00:07.695 
","End":"00:15.045","Text":"Suppose we have an n by n matrix and mostly we\u0027ll just be considering real matrices."},{"Start":"00:15.045 ","End":"00:18.210","Text":"We\u0027ll call A an orthogonal matrix,"},{"Start":"00:18.210 ","End":"00:23.820","Text":"if and only if its columns form an orthonormal set in R^n."},{"Start":"00:23.820 ","End":"00:25.670","Text":"Notice that here it says orthogonal,"},{"Start":"00:25.670 ","End":"00:27.605","Text":"but here it says orthonormal."},{"Start":"00:27.605 ","End":"00:30.145","Text":"Note there are n columns."},{"Start":"00:30.145 ","End":"00:35.740","Text":"If we have n vectors which are an orthonormal set,"},{"Start":"00:35.740 ","End":"00:39.680","Text":"then they are, in fact, an orthonormal basis because"},{"Start":"00:39.680 ","End":"00:45.320","Text":"n linearly independent vectors in an n-dimensional vector space is a basis."},{"Start":"00:45.320 ","End":"00:48.425","Text":"I could have even said basis in the definition."},{"Start":"00:48.425 ","End":"00:54.035","Text":"Another note and we\u0027ll prove it in the exercises,"},{"Start":"00:54.035 ","End":"00:57.755","Text":"is that you might think why columns, why not rows?"},{"Start":"00:57.755 ","End":"01:02.580","Text":"Well, it would work equally well if you said rows and it\u0027s if and only if."},{"Start":"01:02.580 ","End":"01:06.405","Text":"A is orthogonal if and only if its rows are orthonormal."},{"Start":"01:06.405 ","End":"01:09.700","Text":"Now, an example of an orthogonal matrix."},{"Start":"01:09.700 ","End":"01:15.670","Text":"Let\u0027s take 1 that\u0027s 2 by 2, 1, 0, 0, minus 1, it\u0027s orthogonal."},{"Start":"01:15.670 ","End":"01:24.880","Text":"This column vector is orthogonal to this column vector because the dot product is 0,"},{"Start":"01:24.880 ","End":"01:27.700","Text":"1 times 0 plus 0 times minus 1 is 0."},{"Start":"01:27.700 ","End":"01:30.440","Text":"Also, each vector in itself has norm 1."},{"Start":"01:30.440 
","End":"01:33.625","Text":"Square root of 1 squared plus 0 squared is 1."},{"Start":"01:33.625 ","End":"01:38.495","Text":"Similarly, 0 squared, plus minus 1 squared is also 1."},{"Start":"01:38.495 ","End":"01:43.450","Text":"The columns are orthonormal, so the matrix is orthogonal."},{"Start":"01:43.450 ","End":"01:46.300","Text":"Here, we just wrote it out."},{"Start":"01:46.300 ","End":"01:48.830","Text":"Let\u0027s take another example."},{"Start":"01:48.830 ","End":"01:52.840","Text":"The matrix B, this time a 3 by 3 example."},{"Start":"01:52.840 ","End":"01:55.640","Text":"This 1 is orthogonal."},{"Start":"01:55.710 ","End":"02:01.615","Text":"We can note that the dot product of any 2 different ones is 0,"},{"Start":"02:01.615 ","End":"02:05.520","Text":"because here I have 1, 0, 0, and here I have 0s."},{"Start":"02:05.520 ","End":"02:10.480","Text":"This is orthogonal to this and this meaning the dot product is 0,"},{"Start":"02:10.480 ","End":"02:14.790","Text":"also between these 2, we take this times this,"},{"Start":"02:14.790 ","End":"02:17.590","Text":"plus this times this we\u0027ll get 0 because it\u0027s"},{"Start":"02:17.590 ","End":"02:21.985","Text":"the same product here and here just has a minus."},{"Start":"02:21.985 ","End":"02:24.999","Text":"Now, what about the norm of each column?"},{"Start":"02:24.999 ","End":"02:27.520","Text":"This has norm 1, and this,"},{"Start":"02:27.520 ","End":"02:30.235","Text":"we take squares and add them."},{"Start":"02:30.235 ","End":"02:38.415","Text":"0 squared is 0 plus 0.75 plus 0.25 is 1 and similarly here."},{"Start":"02:38.415 ","End":"02:40.984","Text":"This is an orthogonal matrix."},{"Start":"02:40.984 ","End":"02:45.830","Text":"The columns are orthonormal and here, I just wrote that out."},{"Start":"02:45.830 ","End":"02:51.155","Text":"Next theorem, a square matrix is orthogonal"},{"Start":"02:51.155 ","End":"02:57.335","Text":"if and only if a transpose times A is the identity 
matrix."},{"Start":"02:57.335 ","End":"03:00.785","Text":"This will be proven in the exercises."},{"Start":"03:00.785 ","End":"03:06.920","Text":"But I\u0027ll remark that sometimes this is taken as a definition of an orthogonal matrix,"},{"Start":"03:06.920 ","End":"03:12.905","Text":"meaning A is orthogonal if and only if A transpose times A is the identity."},{"Start":"03:12.905 ","End":"03:17.700","Text":"Then, the fact that its columns are orthonormal is proven."},{"Start":"03:17.700 ","End":"03:18.870","Text":"Anyway, they\u0027re equivalent,"},{"Start":"03:18.870 ","End":"03:20.810","Text":"so you can take either 1 as the definition."},{"Start":"03:20.810 ","End":"03:23.315","Text":"Either say that the columns are orthonormal or that"},{"Start":"03:23.315 ","End":"03:26.870","Text":"A transpose times A is the identity, both are good."},{"Start":"03:26.870 ","End":"03:31.710","Text":"Another example, is this matrix orthogonal?"},{"Start":"03:31.710 ","End":"03:34.700","Text":"This is the same example that we had up here,"},{"Start":"03:34.700 ","End":"03:38.965","Text":"but this time, we\u0027re going to do it using the theorem."},{"Start":"03:38.965 ","End":"03:41.700","Text":"Instead of checking if the columns are orthonormal,"},{"Start":"03:41.700 ","End":"03:44.760","Text":"we\u0027ll check if A transpose A equals I."},{"Start":"03:44.760 ","End":"03:46.590","Text":"Well, we know the answer is going to be yes,"},{"Start":"03:46.590 ","End":"03:48.375","Text":"but let\u0027s show it."},{"Start":"03:48.375 ","End":"03:54.225","Text":"Now, this is B. 
A transpose A is B transpose B."},{"Start":"03:54.225 ","End":"03:57.050","Text":"The transpose here, it\u0027s almost the same,"},{"Start":"03:57.050 ","End":"03:59.915","Text":"just the minus here, goes over here,"},{"Start":"03:59.915 ","End":"04:02.720","Text":"everything else is symmetric along the diagonal."},{"Start":"04:02.720 ","End":"04:05.245","Text":"B transpose times B."},{"Start":"04:05.245 ","End":"04:08.305","Text":"The computation, which you can verify,"},{"Start":"04:08.305 ","End":"04:12.655","Text":"does indeed come out to be the identity matrix."},{"Start":"04:12.655 ","End":"04:17.180","Text":"The answer is yes, B is orthogonal."},{"Start":"04:17.180 ","End":"04:21.575","Text":"The final thing is going to be a corollary of this theorem."},{"Start":"04:21.575 ","End":"04:26.275","Text":"This corollary is yet another way of characterizing an orthogonal matrix."},{"Start":"04:26.275 ","End":"04:32.490","Text":"A square matrix A is orthogonal if and only if it\u0027s invertible and furthermore,"},{"Start":"04:32.490 ","End":"04:36.240","Text":"that A transpose equals A inverse."},{"Start":"04:36.240 ","End":"04:37.950","Text":"We\u0027ll prove it in the exercises,"},{"Start":"04:37.950 ","End":"04:42.255","Text":"but the idea is to say that if A transpose times A is I,"},{"Start":"04:42.255 ","End":"04:44.900","Text":"then A has to be invertible because something times it is"},{"Start":"04:44.900 ","End":"04:48.200","Text":"the identity and A transpose is the inverse of A."},{"Start":"04:48.200 ","End":"04:51.275","Text":"But we\u0027ll do it in more detail in the exercises."},{"Start":"04:51.275 ","End":"04:54.180","Text":"That concludes this clip."}],"ID":27114},{"Watched":false,"Name":"Rotation and Reflection Matrices (2D)","Duration":"8m 
49s","ChapterTopicVideoID":26211,"CourseChapterTopicPlaylistID":253225,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:06.540","Text":"In this clip, we\u0027ll talk about rotation and reflection matrices in 2 dimensions."},{"Start":"00:06.540 ","End":"00:10.605","Text":"I want to remind you we\u0027re still under orthogonal matrices."},{"Start":"00:10.605 ","End":"00:16.110","Text":"However, most of you will be doing the section on orthogonal transformations."},{"Start":"00:16.110 ","End":"00:20.145","Text":"If so, you should skip this clip."},{"Start":"00:20.145 ","End":"00:23.460","Text":"There\u0027s a lot more depth on rotation and reflection"},{"Start":"00:23.460 ","End":"00:27.300","Text":"matrices in orthogonal transformations."},{"Start":"00:27.300 ","End":"00:30.330","Text":"Having said that, let\u0027s continue."},{"Start":"00:30.330 ","End":"00:35.580","Text":"Rotation matrices in the plane: we\u0027ll do rotations first and then reflections,"},{"Start":"00:35.580 ","End":"00:37.800","Text":"and we\u0027ll just give you the result;"},{"Start":"00:37.800 ","End":"00:42.265","Text":"in this clip we\u0027re not going to prove it. If Theta is some angle,"},{"Start":"00:42.265 ","End":"00:49.560","Text":"then the rotation matrix by angle Theta is cosine Theta minus sine Theta,"},{"Start":"00:49.560 ","End":"00:52.690","Text":"sine Theta cosine Theta."},{"Start":"00:52.720 ","End":"00:57.110","Text":"If you apply this to a vector in the plane,"},{"Start":"00:57.110 ","End":"01:03.050","Text":"it rotates that vector by an angle of Theta counterclockwise, of course,"},{"Start":"01:03.050 ","End":"01:04.550","Text":"usually it goes without saying."},{"Start":"01:04.550 ","End":"01:08.640","Text":"In mathematics, positive angle is counterclockwise."},{"Start":"01:08.830 
","End":"01:11.015","Text":"Here\u0027s an example."},{"Start":"01:11.015 ","End":"01:18.470","Text":"The rotation matrix for 45 degrees is just using this, cosine 45 minus sine 45,"},{"Start":"01:18.470 ","End":"01:20.960","Text":"sine 45 cosine 45."},{"Start":"01:20.960 ","End":"01:24.980","Text":"Remark, in this case, we can use degrees or radians."},{"Start":"01:24.980 ","End":"01:29.005","Text":"It doesn\u0027t matter because we\u0027re not going to use the angle itself, only the cosine and sine,"},{"Start":"01:29.005 ","End":"01:33.810","Text":"so whatever is convenient, radians or degrees."},{"Start":"01:33.810 ","End":"01:40.700","Text":"This comes out to be cosine 45 is root 2 over 2 and so is sine 45 degrees,"},{"Start":"01:40.700 ","End":"01:43.025","Text":"so we get this matrix."},{"Start":"01:43.025 ","End":"01:49.295","Text":"You might ask, what does it mean to rotate a vector by 45 degrees?"},{"Start":"01:49.295 ","End":"01:54.800","Text":"It means that if you multiply the matrix times the vector,"},{"Start":"01:54.800 ","End":"01:58.850","Text":"you get a new vector and you can get this new vector by"},{"Start":"01:58.850 ","End":"02:03.810","Text":"rotating the old vector by 45 degrees counterclockwise."},{"Start":"02:03.810 ","End":"02:10.705","Text":"For example, if I take this matrix and apply it to the vector 2,"},{"Start":"02:10.705 ","End":"02:15.160","Text":"1, then an easy computation shows that we get this."},{"Start":"02:15.160 ","End":"02:20.290","Text":"Here\u0027s a diagram, we have the vector 2, 1 here,"},{"Start":"02:20.290 ","End":"02:22.870","Text":"and you rotate this by 45 degrees,"},{"Start":"02:22.870 ","End":"02:28.845","Text":"we get the vector that\u0027s listed here, 1/2 root 2, 3/2 root 2."},{"Start":"02:28.845 ","End":"02:30.325","Text":"In case you\u0027re wondering,"},{"Start":"02:30.325 ","End":"02:33.550","Text":"what do orthogonal matrices have to do with this?"},{"Start":"02:33.550 ","End":"02:37.030","Text":"Turns out that 
both the rotation and the reflection that we\u0027ll"},{"Start":"02:37.030 ","End":"02:40.960","Text":"see are both examples of orthogonal matrices."},{"Start":"02:40.960 ","End":"02:42.865","Text":"Let\u0027s show you why."},{"Start":"02:42.865 ","End":"02:46.685","Text":"The matrix is this and the columns of it,"},{"Start":"02:46.685 ","End":"02:49.320","Text":"are cosine Theta sine Theta,"},{"Start":"02:49.320 ","End":"02:53.100","Text":"lets call that u_1 and minus sine Theta cosine Theta,"},{"Start":"02:53.100 ","End":"02:54.585","Text":"we\u0027ll call that u_2."},{"Start":"02:54.585 ","End":"02:58.295","Text":"I claim that u_1, u_2 are an orthonormal set."},{"Start":"02:58.295 ","End":"03:01.025","Text":"That\u0027s the definition of orthogonal matrix,"},{"Start":"03:01.025 ","End":"03:04.895","Text":"that its columns form an orthonormal set. Let\u0027s check."},{"Start":"03:04.895 ","End":"03:09.815","Text":"First of all, each 1 has to have a norm of 1 or dot product with itself is 1,"},{"Start":"03:09.815 ","End":"03:15.668","Text":"so u_1.u_1 is cosine Theta cosine Theta plus sine Theta sine Theta."},{"Start":"03:15.668 ","End":"03:18.150","Text":"That gives us 1, the other 1,"},{"Start":"03:18.150 ","End":"03:22.340","Text":"u_2 dot product with itself gives us this squared plus this squared,"},{"Start":"03:22.340 ","End":"03:24.185","Text":"which is also 1,"},{"Start":"03:24.185 ","End":"03:28.730","Text":"and then they have to be mutually orthogonal."},{"Start":"03:28.730 ","End":"03:32.480","Text":"We have to check that the dot product of this with this is 0,"},{"Start":"03:32.480 ","End":"03:35.330","Text":"the dot-product is cosine Theta times minus sine"},{"Start":"03:35.330 ","End":"03:39.050","Text":"Theta plus sine Theta times cosine Theta,"},{"Start":"03:39.050 ","End":"03:40.730","Text":"and these 2 are opposites of each other,"},{"Start":"03:40.730 ","End":"03:43.775","Text":"gives us 0, so that\u0027s okay."},{"Start":"03:43.775 
","End":"03:49.055","Text":"That was rotations and now reflections through a line,"},{"Start":"03:49.055 ","End":"03:52.505","Text":"but not any line: the line has to pass through the origin."},{"Start":"03:52.505 ","End":"03:56.990","Text":"A reflection matrix is a matrix of the form cosine Theta sine Theta,"},{"Start":"03:56.990 ","End":"03:59.230","Text":"sine Theta minus cosine Theta."},{"Start":"03:59.230 ","End":"04:03.120","Text":"Here Theta is an angle, doesn\u0027t matter degrees or radians,"},{"Start":"04:03.120 ","End":"04:06.260","Text":"we\u0027re just taking the cosine or sine."},{"Start":"04:06.260 ","End":"04:10.450","Text":"The mirror line is through the origin."},{"Start":"04:10.450 ","End":"04:14.465","Text":"Let\u0027s say it makes an angle of Theta over 2."},{"Start":"04:14.465 ","End":"04:17.030","Text":"The over 2 is important,"},{"Start":"04:17.030 ","End":"04:20.419","Text":"it just makes the formula come out easier."},{"Start":"04:20.419 ","End":"04:24.290","Text":"It makes an angle of Theta over 2 with the positive x-axis."},{"Start":"04:24.290 ","End":"04:26.700","Text":"If we took Theta here,"},{"Start":"04:26.700 ","End":"04:29.360","Text":"then we\u0027d have to put 2 Theta here instead of Theta."},{"Start":"04:29.360 ","End":"04:32.885","Text":"The point is this angle here is double the angle here."},{"Start":"04:32.885 ","End":"04:37.580","Text":"The line\u0027s equation is y equals mx, where m is the slope,"},{"Start":"04:37.580 ","End":"04:40.250","Text":"and the slope is the tangent of the angle."},{"Start":"04:40.250 ","End":"04:44.060","Text":"There\u0027s an exception: if the angle is 90 degrees, the line doesn\u0027t have a slope,"},{"Start":"04:44.060 ","End":"04:48.155","Text":"it\u0027s a vertical line, and then the equation of the mirror,"},{"Start":"04:48.155 ","End":"04:51.520","Text":"it\u0027s just the y-axis or x equals 0."},{"Start":"04:51.520 ","End":"04:56.645","Text":"If you apply this matrix to a vector in the plane,"},{"Start":"04:56.645 
","End":"05:03.845","Text":"then it reflects the vector in the mirror line that\u0027s given by this formula or this."},{"Start":"05:03.845 ","End":"05:09.470","Text":"As an example, let\u0027s take the mirror line as y equals x,"},{"Start":"05:09.470 ","End":"05:11.480","Text":"which certainly passes through the origin."},{"Start":"05:11.480 ","End":"05:15.005","Text":"In fact, the angle is 45 degrees,"},{"Start":"05:15.005 ","End":"05:18.845","Text":"so if we want it in the form like this,"},{"Start":"05:18.845 ","End":"05:22.605","Text":"you need tangent Theta over 2 equals 1."},{"Start":"05:22.605 ","End":"05:25.520","Text":"I guess just to say we always take the Theta over"},{"Start":"05:25.520 ","End":"05:30.020","Text":"2 to be an angle between 0 and 180 degrees,"},{"Start":"05:30.020 ","End":"05:34.510","Text":"this is acute or obtuse but doesn\u0027t go over 180 degrees."},{"Start":"05:34.510 ","End":"05:41.220","Text":"An angle between 0 and 180 that gives us a tangent of 1 has to be 45 degrees,"},{"Start":"05:41.220 ","End":"05:44.130","Text":"and that makes Theta to be 90 degrees,"},{"Start":"05:44.130 ","End":"05:46.830","Text":"so I can put Theta equals 90 in here,"},{"Start":"05:46.830 ","End":"05:51.935","Text":"and then we get the reflection matrix as cosine 90 sine 90,"},{"Start":"05:51.935 ","End":"05:54.500","Text":"sine 90 minus cosine 90."},{"Start":"05:54.500 ","End":"05:58.150","Text":"This comes out to be 0, 1, 1, 0."},{"Start":"05:58.150 ","End":"06:04.715","Text":"What do we mean that this matrix is a reflection in the line y equals x?"},{"Start":"06:04.715 ","End":"06:09.080","Text":"What it means is if we take the matrix times the vector,"},{"Start":"06:09.080 ","End":"06:11.945","Text":"that gives us another vector, a new vector,"},{"Start":"06:11.945 ","End":"06:15.469","Text":"and we get the new vector from the original vector"},{"Start":"06:15.469 ","End":"06:20.515","Text":"by reflecting it in the line y equals 
x."},{"Start":"06:20.515 ","End":"06:25.295","Text":"For example, if you apply the matrix to the vector 2,1,"},{"Start":"06:25.295 ","End":"06:30.765","Text":"we get the vector 1,2 and diagram will help here,"},{"Start":"06:30.765 ","End":"06:37.875","Text":"so y equals x is the mirror line, our vector is 2,1."},{"Start":"06:37.875 ","End":"06:43.115","Text":"If we reflect it, the mirror image of it in this mirror line,"},{"Start":"06:43.115 ","End":"06:44.960","Text":"it comes out to be 1,2."},{"Start":"06:44.960 ","End":"06:48.790","Text":"You can see in general, if you reverse x and y,"},{"Start":"06:48.790 ","End":"06:51.739","Text":"that\u0027s a reflection in this line."},{"Start":"06:51.739 ","End":"06:56.345","Text":"Another way of describing the reflection is to say that"},{"Start":"06:56.345 ","End":"07:03.035","Text":"this segment here is equal to this segment here,"},{"Start":"07:03.035 ","End":"07:05.720","Text":"and this is 90 degrees,"},{"Start":"07:05.720 ","End":"07:11.660","Text":"meaning that the mirror is the perpendicular bisector of this segment,"},{"Start":"07:11.660 ","End":"07:15.350","Text":"and you could also say geometrically that this is the angle bisector,"},{"Start":"07:15.350 ","End":"07:19.960","Text":"that this angle is equal to this angle,"},{"Start":"07:19.960 ","End":"07:22.805","Text":"and that\u0027s the property of mirror image."},{"Start":"07:22.805 ","End":"07:28.915","Text":"Now, what\u0027s the connection between reflections and orthogonal matrices?"},{"Start":"07:28.915 ","End":"07:32.300","Text":"The answer is that every reflection matrix happens"},{"Start":"07:32.300 ","End":"07:35.330","Text":"to be an example of an orthogonal matrix."},{"Start":"07:35.330 ","End":"07:42.510","Text":"Let\u0027s check that the columns of this matrix are cosine Theta sine Theta,"},{"Start":"07:42.510 ","End":"07:46.800","Text":"we\u0027ll call that u_1 and sine Theta minus cosine Theta,"},{"Start":"07:46.800 
","End":"07:50.870","Text":"we\u0027ll call that u_2, and these form an orthonormal set."},{"Start":"07:50.870 ","End":"07:56.125","Text":"That\u0027s what it means for a matrix to be orthogonal, that its columns are orthonormal."},{"Start":"07:56.125 ","End":"08:01.760","Text":"Let\u0027s check that the dot product of u_1 with itself is the norm squared."},{"Start":"08:01.760 ","End":"08:05.090","Text":"That has to be 1, and it does come out to be 1."},{"Start":"08:05.090 ","End":"08:10.790","Text":"Similarly for u_2: sine squared plus minus cosine squared,"},{"Start":"08:10.790 ","End":"08:15.860","Text":"which is sine squared plus cosine squared, is 1, and the dot product of these 2 is 0."},{"Start":"08:15.860 ","End":"08:21.770","Text":"The last thing I want to say in this clip is to show you this theorem that"},{"Start":"08:21.770 ","End":"08:30.140","Text":"every orthogonal matrix of order 2 is either a reflection matrix or a rotation matrix."},{"Start":"08:30.140 ","End":"08:33.860","Text":"Not guaranteeing that this is true in dimensions higher than 2,"},{"Start":"08:33.860 ","End":"08:36.560","Text":"but it\u0027s true in dimension 2."},{"Start":"08:36.560 ","End":"08:38.870","Text":"Those are the only orthogonal matrices,"},{"Start":"08:38.870 ","End":"08:42.785","Text":"rotation of angle Theta about the origin"},{"Start":"08:42.785 ","End":"08:47.044","Text":"or a reflection in a line that passes through the origin."},{"Start":"08:47.044 ","End":"08:49.680","Text":"That\u0027s it for this clip."}],"ID":27115},{"Watched":false,"Name":"Exercise 1","Duration":"2m 26s","ChapterTopicVideoID":26212,"CourseChapterTopicPlaylistID":253225,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:01.550 ","End":"00:06.715","Text":"In this exercise, we\u0027re given 3 square matrices and for each 1,"},{"Start":"00:06.715 
","End":"00:09.630","Text":"we have to decide if it\u0027s orthogonal or not."},{"Start":"00:09.630 ","End":"00:14.115","Text":"If it is orthogonal, then we have to find its inverse."},{"Start":"00:14.115 ","End":"00:20.625","Text":"The plan is to check whether or not A transpose times A is the identity matrix."},{"Start":"00:20.625 ","End":"00:22.440","Text":"If so, then it\u0027s orthogonal."},{"Start":"00:22.440 ","End":"00:24.000","Text":"If not, it\u0027s not orthogonal."},{"Start":"00:24.000 ","End":"00:25.650","Text":"If we do get equality here,"},{"Start":"00:25.650 ","End":"00:31.250","Text":"then we can find the inverse, which is precisely the transpose."},{"Start":"00:31.250 ","End":"00:37.040","Text":"We start with the 1st 1, A: what is A transpose times A?"},{"Start":"00:37.040 ","End":"00:39.545","Text":"First of all, compute the transpose."},{"Start":"00:39.545 ","End":"00:45.050","Text":"Well, the transpose is just the matrix itself because it\u0027s symmetric about the diagonal."},{"Start":"00:45.050 ","End":"00:47.660","Text":"Now, we need to do the multiplication."},{"Start":"00:47.660 ","End":"00:52.670","Text":"This row with this column gives us cosine squared plus sine squared."},{"Start":"00:52.670 ","End":"00:57.665","Text":"These 2 are 0 and this also gives us cosine squared plus sine squared theta."},{"Start":"00:57.665 ","End":"01:01.145","Text":"We all know from trigonometry that this is equal to 1."},{"Start":"01:01.145 ","End":"01:05.335","Text":"We have 1, 0, 0, 1 which is the identity matrix."},{"Start":"01:05.335 ","End":"01:09.920","Text":"The matrix is orthogonal, and because it\u0027s orthogonal,"},{"Start":"01:09.920 ","End":"01:12.630","Text":"the inverse is the transpose, which is this."},{"Start":"01:12.630 ","End":"01:16.150","Text":"It just happens to be the same as the matrix A itself."},{"Start":"01:16.150 ","End":"01:19.780","Text":"The next 1, B transpose times B."},{"Start":"01:19.780 
","End":"01:23.155","Text":"This is the matrix B, so we need to transpose it."},{"Start":"01:23.155 ","End":"01:25.200","Text":"This is not the same as this,"},{"Start":"01:25.200 ","End":"01:29.960","Text":"if you notice, there\u0027s a minus here which has disappeared here and appears here."},{"Start":"01:29.960 ","End":"01:33.665","Text":"We\u0027ve taken the mirror image along the diagonal."},{"Start":"01:33.665 ","End":"01:35.685","Text":"Now, do the product."},{"Start":"01:35.685 ","End":"01:37.550","Text":"Do all the boring details."},{"Start":"01:37.550 ","End":"01:41.870","Text":"You can check that it comes out to be all 0s except for 1"},{"Start":"01:41.870 ","End":"01:46.775","Text":"here and cosine squared plus sine squared Alpha here."},{"Start":"01:46.775 ","End":"01:49.820","Text":"Since this is equal to 1 from trigonometry,"},{"Start":"01:49.820 ","End":"01:55.370","Text":"we have that this is the identity matrix of order 3."},{"Start":"01:55.370 ","End":"02:02.340","Text":"B is orthogonal and its inverse is the same as its transpose which is this,"},{"Start":"02:02.340 ","End":"02:04.965","Text":"which we got from here."},{"Start":"02:04.965 ","End":"02:07.060","Text":"Let\u0027s go on to the 3rd."},{"Start":"02:07.060 ","End":"02:11.030","Text":"C transpose times C. C is symmetric,"},{"Start":"02:11.030 ","End":"02:15.755","Text":"so its transpose is itself. 
Do the multiplication."},{"Start":"02:15.755 ","End":"02:20.060","Text":"Well, whatever we get is not the identity matrix, it\u0027s this thing."},{"Start":"02:20.060 ","End":"02:24.245","Text":"C is not orthogonal because it fails this test."},{"Start":"02:24.245 ","End":"02:27.210","Text":"That concludes this exercise."}],"ID":27116},{"Watched":false,"Name":"Exercise 2","Duration":"3m 53s","ChapterTopicVideoID":26213,"CourseChapterTopicPlaylistID":253225,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:03.340","Text":"In this exercise, we\u0027re going to prove the theorem that"},{"Start":"00:03.340 ","End":"00:07.000","Text":"if A is a square matrix of order n,"},{"Start":"00:07.000 ","End":"00:15.440","Text":"then A is orthogonal if and only if A transpose times A is equal to the identity matrix."},{"Start":"00:15.440 ","End":"00:21.055","Text":"Then we\u0027ll also show that if A is an orthogonal matrix of order n,"},{"Start":"00:21.055 ","End":"00:27.415","Text":"then A is invertible and its inverse is the same as its transpose."},{"Start":"00:27.415 ","End":"00:30.010","Text":"So start with part a."},{"Start":"00:30.010 ","End":"00:34.380","Text":"Now if we write A in column form, let\u0027s call the columns U_1,"},{"Start":"00:34.380 ","End":"00:36.225","Text":"U_2 up to U_n."},{"Start":"00:36.225 ","End":"00:39.005","Text":"These are column vectors."},{"Start":"00:39.005 ","End":"00:42.615","Text":"Now, if we take the transpose of this,"},{"Start":"00:42.615 ","End":"00:50.000","Text":"then a column becomes a row and we get the matrix which has rows"},{"Start":"00:50.000 ","End":"00:53.480","Text":"U_1 transpose, U_2 transpose, and U_n transpose."},{"Start":"00:53.480 ","End":"00:56.230","Text":"These are row vectors."},{"Start":"00:56.230 ","End":"00:59.950","Text":"We\u0027re 
going to multiply A transpose times A."},{"Start":"00:59.950 ","End":"01:03.050","Text":"That will give us the rows U_1,"},{"Start":"01:03.050 ","End":"01:05.645","Text":"U_2 up to U_n times the columns."},{"Start":"01:05.645 ","End":"01:08.870","Text":"What we have is that the product in the i,"},{"Start":"01:08.870 ","End":"01:15.210","Text":"j position will be the matrix product of row U_i"},{"Start":"01:15.210 ","End":"01:23.200","Text":"transpose times column u_j is the same as the dot-product, and I\u0027ll explain this."},{"Start":"01:23.200 ","End":"01:27.815","Text":"This is a row vector, a 1 by n matrix."},{"Start":"01:27.815 ","End":"01:34.325","Text":"Let\u0027s say we have a_1 up to a_n in general times the column b_1 to b_n."},{"Start":"01:34.325 ","End":"01:40.905","Text":"Then what we get is a_1 times b_1 plus a_2 times b_2 and so on up to a_n times b_n."},{"Start":"01:40.905 ","End":"01:44.810","Text":"This is exactly the dot product or the scalar product."},{"Start":"01:44.810 ","End":"01:47.755","Text":"You just multiply component wise and add."},{"Start":"01:47.755 ","End":"01:51.435","Text":"We\u0027ve got the i, j position of the product."},{"Start":"01:51.435 ","End":"01:58.515","Text":"We can say in general that the product A transpose times A is, well,"},{"Start":"01:58.515 ","End":"02:02.910","Text":"u_i dot u_j in general, but it\u0027s u_1 dot u_1, u_1 dot u_2,"},{"Start":"02:02.910 ","End":"02:06.400","Text":"and so on, an n by n matrix."},{"Start":"02:06.400 ","End":"02:11.450","Text":"Now the question is, when is this the identity matrix?"},{"Start":"02:11.450 ","End":"02:18.655","Text":"It means that the diagonals are all 1s and off the diagonal it\u0027s 0."},{"Start":"02:18.655 ","End":"02:22.200","Text":"What that means is that this is 1, 1, 1,"},{"Start":"02:22.200 ","End":"02:27.810","Text":"1 and the rest 0s if and only if u_i dot u_j is 1,"},{"Start":"02:27.810 ","End":"02:33.415","Text":"if i equals j, that means the diagonal where i equals j 
and 0 otherwise."},{"Start":"02:33.415 ","End":"02:39.440","Text":"Now, this is exactly the condition for u_1 to u_n to be an orthonormal set."},{"Start":"02:39.440 ","End":"02:45.930","Text":"Each 1 dotted with itself is 1 and any 2 different ones have a dot product of 0."},{"Start":"02:45.930 ","End":"02:50.480","Text":"We have our condition that A transpose A is the identity if and only"},{"Start":"02:50.480 ","End":"02:56.225","Text":"if the columns of A form an orthonormal set."},{"Start":"02:56.225 ","End":"03:00.170","Text":"But this exactly means that A is an orthogonal matrix."},{"Start":"03:00.170 ","End":"03:04.835","Text":"That\u0027s the definition that the columns are orthonormal."},{"Start":"03:04.835 ","End":"03:06.860","Text":"That does part a."},{"Start":"03:06.860 ","End":"03:09.085","Text":"We have the if and only if."},{"Start":"03:09.085 ","End":"03:13.710","Text":"In part B, the question was, \"Let A be an orthogonal matrix."},{"Start":"03:13.710 ","End":"03:18.230","Text":"Prove that A is invertible and its inverse is its transpose.\""},{"Start":"03:18.230 ","End":"03:22.045","Text":"From part a we have that A transpose A is I."},{"Start":"03:22.045 ","End":"03:27.860","Text":"This shows that A transpose is a left inverse of A."},{"Start":"03:27.860 ","End":"03:31.250","Text":"By a theorem in linear algebra that works only"},{"Start":"03:31.250 ","End":"03:34.790","Text":"for finite order matrices and A is n by n,"},{"Start":"03:34.790 ","End":"03:37.790","Text":"it\u0027s not an infinite matrix, so it\u0027s finite,"},{"Start":"03:37.790 ","End":"03:44.045","Text":"so any left inverse or right inverse is a 2 sided inverse, just an inverse."},{"Start":"03:44.045 ","End":"03:47.840","Text":"So A transpose is actually a 2 sided inverse,"},{"Start":"03:47.840 ","End":"03:50.720","Text":"meaning A to the minus 1, A inverse."},{"Start":"03:50.720 ","End":"03:54.090","Text":"That\u0027s what we had to show and we\u0027re 
done."}],"ID":27117},{"Watched":false,"Name":"Exercise 3 parts a-e","Duration":"4m 20s","ChapterTopicVideoID":26215,"CourseChapterTopicPlaylistID":253225,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.800","Text":"In this exercise, all the matrices are of finite order."},{"Start":"00:04.800 ","End":"00:07.305","Text":"That\u0027s why I say n by n, not infinite."},{"Start":"00:07.305 ","End":"00:10.390","Text":"It\u0027s all about orthogonal matrices."},{"Start":"00:10.610 ","End":"00:16.215","Text":"Part A says that if A is an orthogonal matrix,"},{"Start":"00:16.215 ","End":"00:19.950","Text":"then A transpose and A inverse are also orthogonal."},{"Start":"00:19.950 ","End":"00:21.795","Text":"Let\u0027s start with that 1."},{"Start":"00:21.795 ","End":"00:24.000","Text":"Since A is orthogonal,"},{"Start":"00:24.000 ","End":"00:27.120","Text":"A transpose times A is the identity"},{"Start":"00:27.120 ","End":"00:30.850","Text":"and the inverse of A is the same as the transpose of A."},{"Start":"00:30.850 ","End":"00:36.530","Text":"Now, this is useful because we don\u0027t have to show separately that A inverse is orthogonal"},{"Start":"00:36.530 ","End":"00:39.410","Text":"and that A transpose is orthogonal, because they\u0027re equal."},{"Start":"00:39.410 ","End":"00:41.315","Text":"There\u0027s only 1 thing to show to be orthogonal."},{"Start":"00:41.315 ","End":"00:46.250","Text":"Use the condition that the matrix transpose times the matrix is the identity,"},{"Start":"00:46.250 ","End":"00:48.110","Text":"it\u0027s an if and only if, so let\u0027s check."},{"Start":"00:48.110 ","End":"00:52.805","Text":"A transpose transpose is just A,"},{"Start":"00:52.805 ","End":"00:54.845","Text":"so we have AA transpose,"},{"Start":"00:54.845 ","End":"00:57.590","Text":"we can\u0027t say right 
away that this is the identity."},{"Start":"00:57.590 ","End":"00:59.944","Text":"What we have here is the other way round."},{"Start":"00:59.944 ","End":"01:04.575","Text":"But I can replace A transpose by A inverse because they\u0027re equal."},{"Start":"01:04.575 ","End":"01:07.655","Text":"Then we have definitely this is the identity."},{"Start":"01:07.655 ","End":"01:09.485","Text":"That\u0027s Part A."},{"Start":"01:09.485 ","End":"01:11.275","Text":"Now Part B,"},{"Start":"01:11.275 ","End":"01:14.610","Text":"let A and B be orthogonal matrices."},{"Start":"01:14.610 ","End":"01:17.630","Text":"A transpose A is I and B transpose B is I."},{"Start":"01:17.630 ","End":"01:22.680","Text":"We have to show that AB transpose times AB is I,"},{"Start":"01:22.680 ","End":"01:24.730","Text":"then AB will be orthogonal."},{"Start":"01:24.730 ","End":"01:29.965","Text":"Let\u0027s compute AB transpose is B transpose A transpose."},{"Start":"01:29.965 ","End":"01:33.570","Text":"Now, A transpose times A is I."},{"Start":"01:33.570 ","End":"01:35.460","Text":"We have B transpose B,"},{"Start":"01:35.460 ","End":"01:37.290","Text":"but that\u0027s also I."},{"Start":"01:37.290 ","End":"01:40.100","Text":"That solves this part of part b."},{"Start":"01:40.100 ","End":"01:41.710","Text":"AB is orthogonal."},{"Start":"01:41.710 ","End":"01:47.660","Text":"But now we want to generalize this not just to 2 but to k orthogonal matrices,"},{"Start":"01:47.660 ","End":"01:50.075","Text":"that their product is orthogonal."},{"Start":"01:50.075 ","End":"01:52.460","Text":"We\u0027ll use induction on k."},{"Start":"01:52.460 ","End":"01:54.687","Text":"For k equals 2,"},{"Start":"01:54.687 ","End":"01:57.690","Text":"A_1, A_2 is just like AB."},{"Start":"01:57.690 ","End":"01:59.000","Text":"We\u0027ve proved that already."},{"Start":"01:59.000 ","End":"02:01.055","Text":"We just need the induction step,"},{"Start":"02:01.055 ","End":"02:04.015","Text":"the transition from k to k plus 
1."},{"Start":"02:04.015 ","End":"02:07.545","Text":"Let A_1 to A_k plus 1 be orthogonal."},{"Start":"02:07.545 ","End":"02:13.220","Text":"We need to show the product A_1 times to A_k plus 1, that\u0027s orthogonal also."},{"Start":"02:13.220 ","End":"02:18.800","Text":"We can write this as the product of A_1 up to A_k times A_k plus 1."},{"Start":"02:18.800 ","End":"02:20.690","Text":"Call this 1 big A,"},{"Start":"02:20.690 ","End":"02:22.610","Text":"call this 1 big B."},{"Start":"02:22.610 ","End":"02:25.775","Text":"Then we can use the induction hypothesis,"},{"Start":"02:25.775 ","End":"02:28.490","Text":"say that this is orthogonal, that A is,"},{"Start":"02:28.490 ","End":"02:30.725","Text":"because there\u0027s only product of k things."},{"Start":"02:30.725 ","End":"02:33.685","Text":"B is orthogonal, it\u0027s given."},{"Start":"02:33.685 ","End":"02:41.035","Text":"By the first part, AB is orthogonal. It\u0027s a product of 2 orthogonal matrices,"},{"Start":"02:41.035 ","End":"02:42.690","Text":"that concludes Part B."},{"Start":"02:42.690 ","End":"02:44.010","Text":"Next on to Part C."},{"Start":"02:44.010 ","End":"02:47.270","Text":"Here we have to show that if A is orthogonal,"},{"Start":"02:47.270 ","End":"02:49.680","Text":"then its determinant must be plus or minus 1."},{"Start":"02:49.680 ","End":"02:52.850","Text":"We\u0027ll use the fact that A transpose A is identity,"},{"Start":"02:52.850 ","End":"02:55.590","Text":"and take the determinant of both sides."},{"Start":"02:55.590 ","End":"02:59.230","Text":"But the determinant of a product is the product of the determinants."},{"Start":"02:59.230 ","End":"03:03.565","Text":"The determinant of the transpose is the same as the determinant of the original matrix."},{"Start":"03:03.565 ","End":"03:06.520","Text":"This times this is equal to 1."},{"Start":"03:06.520 ","End":"03:10.475","Text":"Determinant of A squared is 1."},{"Start":"03:10.475 ","End":"03:15.340","Text":"Determinant of A must be 
plus or minus the square root of 1, which is plus or minus 1."},{"Start":"03:15.340 ","End":"03:16.600","Text":"That was Part c."},{"Start":"03:16.600 ","End":"03:20.870","Text":"Now Part D was to prove or disprove, true or false,"},{"Start":"03:20.870 ","End":"03:24.355","Text":"the sum of 2 orthogonal matrices is orthogonal."},{"Start":"03:24.355 ","End":"03:26.470","Text":"Definitely not true."},{"Start":"03:26.470 ","End":"03:28.900","Text":"You need to give 1 counterexample."},{"Start":"03:28.900 ","End":"03:32.514","Text":"Let\u0027s take A and B both to be the identity matrix."},{"Start":"03:32.514 ","End":"03:35.440","Text":"Then A and B are both orthogonal,"},{"Start":"03:35.440 ","End":"03:42.260","Text":"but the sum of them, A plus B is I plus I is 2I, is not orthogonal."},{"Start":"03:42.260 ","End":"03:46.190","Text":"We can check using the test of the transpose times the original matrix"},{"Start":"03:46.190 ","End":"03:47.270","Text":"and see if it\u0027s the identity."},{"Start":"03:47.270 ","End":"03:48.005","Text":"Well, it isn\u0027t."},{"Start":"03:48.005 ","End":"03:53.765","Text":"2I transpose times 2I is just 2I times 2I, which is 4I, not the same as I."},{"Start":"03:53.765 ","End":"03:55.345","Text":"That\u0027s a counterexample."},{"Start":"03:55.345 ","End":"03:56.160","Text":"That was Part D."},{"Start":"03:56.160 ","End":"04:01.505","Text":"Part E, is a scalar multiple of an orthogonal matrix always orthogonal?"},{"Start":"04:01.505 ","End":"04:02.840","Text":"Again, no."},{"Start":"04:02.840 ","End":"04:09.535","Text":"We just take A to be I and the scalar to be 2,"},{"Start":"04:09.535 ","End":"04:12.480","Text":"and 2 times I isn\u0027t orthogonal."},{"Start":"04:12.480 ","End":"04:15.225","Text":"We can see that, we just showed that here."},{"Start":"04:15.225 ","End":"04:16.980","Text":"That\u0027s Part E."},{"Start":"04:16.980 ","End":"04:21.100","Text":"Now let\u0027s take a break and Part F in the next 
clip."}],"ID":27119},{"Watched":false,"Name":"Exercise 3 part f","Duration":"3m 8s","ChapterTopicVideoID":26214,"CourseChapterTopicPlaylistID":253225,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.140 ","End":"00:03.210","Text":"After the break, we\u0027re up to part f."},{"Start":"00:03.210 ","End":"00:09.390","Text":"We have to show that if a square matrix is orthogonal and triangular, then it\u0027s diagonal."},{"Start":"00:09.390 ","End":"00:11.940","Text":"We\u0027ll do this by induction on n."},{"Start":"00:11.940 ","End":"00:18.220","Text":"If n equals 1, we have a 1-by-1 matrix, surely that\u0027s diagonal."},{"Start":"00:18.220 ","End":"00:23.250","Text":"Let\u0027s just do the induction step that if it\u0027s true for a particular n,"},{"Start":"00:23.250 ","End":"00:25.710","Text":"then it\u0027s also true for n plus 1."},{"Start":"00:25.710 ","End":"00:29.685","Text":"Let\u0027s assume that A is upper triangular."},{"Start":"00:29.685 ","End":"00:34.110","Text":"The case for lower triangular can be deduced from the upper triangular case."},{"Start":"00:34.110 ","End":"00:37.690","Text":"All you have to do is replace A with its transpose."},{"Start":"00:37.690 ","End":"00:39.830","Text":"Then if this is upper triangular,"},{"Start":"00:39.830 ","End":"00:45.230","Text":"this is lower triangular and it\u0027s still orthogonal, so this will be diagonal."},{"Start":"00:45.230 ","End":"00:47.930","Text":"If this is diagonal, then so is A."},{"Start":"00:47.930 ","End":"00:52.360","Text":"We can restrict to the case of upper triangular just for convenience,"},{"Start":"00:52.360 ","End":"00:54.770","Text":"and let\u0027s take A of order n plus 1"},{"Start":"00:54.770 ","End":"00:57.500","Text":"and we assume that we\u0027ve proved it for order n."},{"Start":"00:57.500 
","End":"00:59.360","Text":"We\u0027re on the induction step."},{"Start":"00:59.360 ","End":"01:05.320","Text":"So A is an n plus 1 by n plus 1 matrix."},{"Start":"01:05.320 ","End":"01:11.340","Text":"I\u0027m going to show now that there are 0s here and these are also 0."},{"Start":"01:11.340 ","End":"01:18.090","Text":"First of all, a_11 is not 0 because this column is part of an orthonormal set,"},{"Start":"01:18.090 ","End":"01:23.270","Text":"so the norm is 1 so this has to actually be plus or minus 1."},{"Start":"01:23.270 ","End":"01:26.150","Text":"But in any case, it\u0027s not 0."},{"Start":"01:26.150 ","End":"01:33.705","Text":"Now, these are orthogonal columns so this column dot-product with another column,"},{"Start":"01:33.705 ","End":"01:37.965","Text":"with the kth column, where k is not 1 is going to give 0."},{"Start":"01:37.965 ","End":"01:40.335","Text":"Now because it has all 0s here,"},{"Start":"01:40.335 ","End":"01:44.145","Text":"the dot product is just a_11 times a_1k."},{"Start":"01:44.145 ","End":"01:49.890","Text":"If this is 0 and a_11 is not 0,"},{"Start":"01:49.890 ","End":"01:52.800","Text":"then a_1k is 0."},{"Start":"01:52.800 ","End":"01:59.425","Text":"All these are 0 and all these are 0 and we\u0027re now up to a matrix of this form."},{"Start":"01:59.425 ","End":"02:05.310","Text":"It\u0027s upper triangular, below the diagonal, we also have 0s."},{"Start":"02:05.310 ","End":"02:10.800","Text":"We have that A is of the form, in block form a_11 0, 0, B,"},{"Start":"02:10.800 ","End":"02:13.545","Text":"where B is this block here,"},{"Start":"02:13.545 ","End":"02:18.005","Text":"it\u0027s n by n and it\u0027s triangular."},{"Start":"02:18.005 ","End":"02:19.700","Text":"To use the induction,"},{"Start":"02:19.700 ","End":"02:23.165","Text":"we have to show that it\u0027s also orthogonal."},{"Start":"02:23.165 ","End":"02:28.130","Text":"Now, it is orthogonal because these columns,"},{"Start":"02:28.130 
","End":"02:33.840","Text":"they start with a 0 and they\u0027re orthonormal in R^n plus 1."},{"Start":"02:33.840 ","End":"02:36.080","Text":"If you just cut off the 0,"},{"Start":"02:36.080 ","End":"02:38.720","Text":"they\u0027ll be orthonormal in R^n."},{"Start":"02:38.720 ","End":"02:42.545","Text":"What we\u0027ve shown is that this B is orthogonal."},{"Start":"02:42.545 ","End":"02:48.740","Text":"It\u0027s of order n, it goes from 2 to n plus 1, and it\u0027s triangular."},{"Start":"02:48.740 ","End":"02:54.305","Text":"We can use the induction hypothesis to say that B is diagonal,"},{"Start":"02:54.305 ","End":"02:56.035","Text":"this part is diagonal."},{"Start":"02:56.035 ","End":"02:59.525","Text":"If this is diagonal and the 0s here and here,"},{"Start":"02:59.525 ","End":"03:02.945","Text":"then our matrix A is also diagonal,"},{"Start":"03:02.945 ","End":"03:05.960","Text":"I mean, diagonal with just an extra element here, so obviously diagonal,"},{"Start":"03:05.960 ","End":"03:09.390","Text":"and that concludes this exercise."}],"ID":27118},{"Watched":false,"Name":"Exercise 4","Duration":"2m 39s","ChapterTopicVideoID":26216,"CourseChapterTopicPlaylistID":253225,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:05.640","Text":"In this exercise, we have a square matrix A of finite order,"},{"Start":"00:05.640 ","End":"00:11.339","Text":"and we have to prove or disprove the following."},{"Start":"00:11.339 ","End":"00:16.337","Text":"First of all, the columns of A are an orthonormal basis of R^n"},{"Start":"00:16.337 ","End":"00:18.660","Text":"if and only if its rows are,"},{"Start":"00:18.660 ","End":"00:20.055","Text":"and the second claim,"},{"Start":"00:20.055 ","End":"00:25.695","Text":"the columns of A are an orthogonal basis of R^n if and only if its rows 
are."},{"Start":"00:25.695 ","End":"00:29.695","Text":"The only difference is here it\u0027s orthonormal, here it\u0027s orthogonal."},{"Start":"00:29.695 ","End":"00:31.560","Text":"We\u0027ll start with part A."},{"Start":"00:31.560 ","End":"00:34.020","Text":"This happens to be true, so we\u0027ll prove it."},{"Start":"00:34.020 ","End":"00:36.905","Text":"It\u0027s an if and only if, so there\u0027s 2 parts of the proof."},{"Start":"00:36.905 ","End":"00:42.380","Text":"First part, we\u0027ll suppose that the columns of A are an orthonormal basis for R^n,"},{"Start":"00:42.380 ","End":"00:46.270","Text":"and we\u0027ll prove that so are its rows."},{"Start":"00:46.270 ","End":"00:49.115","Text":"By definition, A is an orthogonal matrix."},{"Start":"00:49.115 ","End":"00:53.045","Text":"This condition is precisely the definition of an orthogonal matrix."},{"Start":"00:53.045 ","End":"00:55.880","Text":"We\u0027ve already shown that if A is orthogonal,"},{"Start":"00:55.880 ","End":"00:58.060","Text":"so is A transpose."},{"Start":"00:58.060 ","End":"01:03.215","Text":"That means that the columns of A transpose are an orthonormal basis of R^n."},{"Start":"01:03.215 ","End":"01:08.200","Text":"But the columns of A transpose are precisely the rows of A,"},{"Start":"01:08.200 ","End":"01:11.780","Text":"so the rows of A are an orthonormal basis of R^n,"},{"Start":"01:11.780 ","End":"01:14.360","Text":"and that\u0027s the first direction."},{"Start":"01:14.360 ","End":"01:17.300","Text":"The other direction, we\u0027re going to suppose that the rows of A"},{"Start":"01:17.300 ","End":"01:20.000","Text":"are an orthonormal basis of R^n."},{"Start":"01:20.000 ","End":"01:22.130","Text":"Basically, we\u0027re just reversing the steps here."},{"Start":"01:22.130 ","End":"01:26.270","Text":"The columns of A transpose are an orthonormal basis."},{"Start":"01:26.270 ","End":"01:29.050","Text":"A transpose is orthogonal."},{"Start":"01:29.050 ","End":"01:30.890","Text":"If A 
transpose is orthogonal,"},{"Start":"01:30.890 ","End":"01:33.900","Text":"so is A transpose transpose which is A."},{"Start":"01:33.900 ","End":"01:37.405","Text":"The columns of A are an orthonormal basis of R^n."},{"Start":"01:37.405 ","End":"01:39.500","Text":"That was part a."},{"Start":"01:39.500 ","End":"01:42.650","Text":"Now part b, this turns out to be false."},{"Start":"01:42.650 ","End":"01:45.890","Text":"Here it is again, the columns of A are an orthogonal basis of R^n"},{"Start":"01:45.890 ","End":"01:47.600","Text":"if and only if its rows are."},{"Start":"01:47.600 ","End":"01:50.390","Text":"This is false and we\u0027ll disprove it with a counterexample."},{"Start":"01:50.390 ","End":"01:52.400","Text":"Take the following matrix."},{"Start":"01:52.400 ","End":"01:54.550","Text":"It\u0027s a 3 by 3 matrix."},{"Start":"01:54.550 ","End":"01:58.050","Text":"If you just look at it, the columns are an orthogonal set."},{"Start":"01:58.050 ","End":"02:00.920","Text":"A dot product of this with this is 0,"},{"Start":"02:00.920 ","End":"02:03.020","Text":"dot product of this with this is easily 0,"},{"Start":"02:03.020 ","End":"02:06.670","Text":"and this with this, minus 4 plus 4 also 0."},{"Start":"02:06.670 ","End":"02:09.465","Text":"Each of the columns is not 0."},{"Start":"02:09.465 ","End":"02:12.635","Text":"The columns are an orthogonal basis."},{"Start":"02:12.635 ","End":"02:14.435","Text":"What about the rows?"},{"Start":"02:14.435 ","End":"02:16.610","Text":"The rows turns out are not."},{"Start":"02:16.610 ","End":"02:21.920","Text":"For example, if you take the first row dot product with the last row,"},{"Start":"02:21.920 ","End":"02:24.900","Text":"we get something that\u0027s not 0."},{"Start":"02:24.900 ","End":"02:27.990","Text":"All we need is this 1 example to disprove it."},{"Start":"02:27.990 ","End":"02:30.770","Text":"I\u0027ll give you another example that you can check later."},{"Start":"02:30.770 
","End":"02:32.870","Text":"This 1 is a 2 by 2."},{"Start":"02:32.870 ","End":"02:35.240","Text":"Again, the columns are orthogonal in R^2,"},{"Start":"02:35.240 ","End":"02:36.740","Text":"but the rows aren\u0027t."},{"Start":"02:36.740 ","End":"02:39.660","Text":"That concludes this exercise."}],"ID":27120},{"Watched":false,"Name":"Exercise 5 parts a-b","Duration":"5m 36s","ChapterTopicVideoID":26206,"CourseChapterTopicPlaylistID":253225,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:02.280","Text":"In this exercise,"},{"Start":"00:02.280 ","End":"00:09.315","Text":"A is a square matrix of order n and its columns form an orthogonal basis of R^n."},{"Start":"00:09.315 ","End":"00:11.775","Text":"Let\u0027s call the columns v_1 to v_n."},{"Start":"00:11.775 ","End":"00:16.755","Text":"Let Lambda_i equal the dot product of v_i with itself."},{"Start":"00:16.755 ","End":"00:19.305","Text":"This is also the norm squared of v_i."},{"Start":"00:19.305 ","End":"00:23.219","Text":"This is, of course, not 0 because we have an orthogonal basis,"},{"Start":"00:23.219 ","End":"00:27.600","Text":"so each vector is non-zero."},{"Start":"00:27.600 ","End":"00:33.420","Text":"We have to prove that A transpose times A is a diagonal matrix"},{"Start":"00:33.420 ","End":"00:37.515","Text":"which has Lambda_1 through Lambda_n on the diagonal."},{"Start":"00:37.515 ","End":"00:40.950","Text":"Part b, which is related to part a,"},{"Start":"00:40.950 ","End":"00:44.205","Text":"says we have the matrix A as above."},{"Start":"00:44.205 ","End":"00:50.090","Text":"We have to prove that A can be inverted by performing the following 2 steps."},{"Start":"00:50.090 ","End":"00:55.475","Text":"First of all, divide each column by the sum of the squares of its elements,"},{"Start":"00:55.475 
","End":"00:58.900","Text":"and secondly, transpose the matrix."},{"Start":"00:58.900 ","End":"01:01.170","Text":"Let\u0027s get to part a."},{"Start":"01:01.170 ","End":"01:06.555","Text":"Because of the columns of A, v_1 to v_n are orthogonal,"},{"Start":"01:06.555 ","End":"01:12.500","Text":"the dot product of any pair of them that are different is 0"},{"Start":"01:12.500 ","End":"01:17.260","Text":"and v_i dot product with itself is Lambda_i."},{"Start":"01:17.260 ","End":"01:21.875","Text":"We\u0027re given that and like I said, this is not equal to 0."},{"Start":"01:21.875 ","End":"01:26.040","Text":"If we write A in more graphical form,"},{"Start":"01:26.040 ","End":"01:29.995","Text":"the v_i columns, so we\u0027ll write them this way."},{"Start":"01:29.995 ","End":"01:34.840","Text":"In that case, the transpose of A makes columns into rows."},{"Start":"01:34.840 ","End":"01:39.640","Text":"The transpose of a column is the row form of that vector."},{"Start":"01:39.640 ","End":"01:41.680","Text":"We have v_1 to v_n,"},{"Start":"01:41.680 ","End":"01:44.440","Text":"but transpose as the rows."},{"Start":"01:44.440 ","End":"01:46.960","Text":"Let\u0027s multiply them together."},{"Start":"01:46.960 ","End":"01:49.270","Text":"A transpose times A."},{"Start":"01:49.270 ","End":"01:51.460","Text":"In the ij position,"},{"Start":"01:51.460 ","End":"01:56.420","Text":"we have row i here multiplied by column j."},{"Start":"01:56.420 ","End":"01:59.110","Text":"That\u0027s like the dot product."},{"Start":"01:59.110 ","End":"02:02.090","Text":"What we have is, let\u0027s say here,"},{"Start":"02:02.090 ","End":"02:06.795","Text":"v_1 transpose with v_1 or v_1 transpose with v_n."},{"Start":"02:06.795 ","End":"02:12.129","Text":"In general, v_i transpose times v_j is the same as the dot product."},{"Start":"02:12.129 ","End":"02:15.900","Text":"That will be either Lambda_i, if i is equal to j,"},{"Start":"02:15.900 ","End":"02:19.615","Text":"or 0 if i is not equal 
to j."},{"Start":"02:19.615 ","End":"02:23.180","Text":"What that means is that along the diagonal,"},{"Start":"02:23.180 ","End":"02:25.670","Text":"we have Lambda_1, Lambda_2, up to Lambda_n,"},{"Start":"02:25.670 ","End":"02:27.830","Text":"and everywhere else is 0."},{"Start":"02:27.830 ","End":"02:30.625","Text":"This is the matrix that we get."},{"Start":"02:30.625 ","End":"02:36.980","Text":"This matrix is exactly the diagonal matrix Lambda_1 to Lambda_n."},{"Start":"02:36.980 ","End":"02:38.435","Text":"That concludes part a,"},{"Start":"02:38.435 ","End":"02:40.295","Text":"now on to part b."},{"Start":"02:40.295 ","End":"02:43.010","Text":"Here\u0027s part b again,"},{"Start":"02:43.010 ","End":"02:44.840","Text":"I won\u0027t read it out again."},{"Start":"02:44.840 ","End":"02:51.965","Text":"As above, let v_i be the ith column of A and Lambda_i is v_i dot v_i,"},{"Start":"02:51.965 ","End":"02:54.805","Text":"which as we said is not 0."},{"Start":"02:54.805 ","End":"02:59.040","Text":"We get that A transpose times A is a diagonal of Lambda_1 through Lambda_n."},{"Start":"02:59.040 ","End":"03:00.990","Text":"That\u0027s what we showed in part a."},{"Start":"03:00.990 ","End":"03:08.580","Text":"Lambda_i, which is this dot product, is a sum of squares of all elements in v_i."},{"Start":"03:08.860 ","End":"03:16.984","Text":"Now call this matrix D, this diagonal matrix, which is like so if we expand it."},{"Start":"03:16.984 ","End":"03:21.650","Text":"What we get is A transpose times A is this, which is D."},{"Start":"03:21.650 ","End":"03:25.685","Text":"If we take the inverse of both sides,"},{"Start":"03:25.685 ","End":"03:29.105","Text":"we have A transpose A, the inverse is D inverse."},{"Start":"03:29.105 ","End":"03:34.010","Text":"Now the inverse of a product is the reverse product of the inverse."},{"Start":"03:34.010 ","End":"03:35.165","Text":"We also change the order."},{"Start":"03:35.165 ","End":"03:38.860","Text":"It\u0027s A inverse A 
transpose inverse."},{"Start":"03:38.860 ","End":"03:43.670","Text":"We can multiply both sides on the right by A transpose."},{"Start":"03:43.670 ","End":"03:48.640","Text":"This A transpose basically comes over to the other side and we get this."},{"Start":"03:48.640 ","End":"03:55.715","Text":"D inverse is the inverse of Lambda_1 through Lambda_n on the diagonal, it\u0027s also diagonal."},{"Start":"03:55.715 ","End":"03:59.524","Text":"We just take the reciprocals of each, and they\u0027re non-zero."},{"Start":"03:59.524 ","End":"04:02.680","Text":"We\u0027re okay to take the reciprocals."},{"Start":"04:02.680 ","End":"04:04.395","Text":"This is also diagonal."},{"Start":"04:04.395 ","End":"04:08.160","Text":"Because it\u0027s diagonal, its transpose is itself."},{"Start":"04:08.160 ","End":"04:09.810","Text":"We\u0027ll need this in a moment."},{"Start":"04:09.810 ","End":"04:13.604","Text":"But transpose of a diagonal matrix is itself."},{"Start":"04:13.604 ","End":"04:19.275","Text":"A inverse is also D minus 1 transpose,"},{"Start":"04:19.275 ","End":"04:21.720","Text":"like from here, times A transpose."},{"Start":"04:21.720 ","End":"04:26.240","Text":"Now we can write this as reverse the order"},{"Start":"04:26.240 ","End":"04:29.480","Text":"and take the transpose out of the brackets, so we have this"},{"Start":"04:29.480 ","End":"04:35.885","Text":"and this is equal to A is the columns v_1 to v_n,"},{"Start":"04:35.885 ","End":"04:40.600","Text":"D inverse is this and the whole thing transpose."},{"Start":"04:40.600 ","End":"04:43.170","Text":"Now if you multiply this in the first column,"},{"Start":"04:43.170 ","End":"04:47.104","Text":"we\u0027ll have this matrix times the column vector,"},{"Start":"04:47.104 ","End":"04:51.235","Text":"which is all 0 except for Lambda_1 inverse here."},{"Start":"04:51.235 ","End":"04:52.585","Text":"What this does,"},{"Start":"04:52.585 ","End":"04:55.505","Text":"this multiplies the first column."},{"Start":"04:55.505 
","End":"05:01.040","Text":"Similarly, this column times this matrix gives us Lambda_2 inverse"},{"Start":"05:01.040 ","End":"05:03.920","Text":"times the second column and everything else unchanged."},{"Start":"05:03.920 ","End":"05:10.235","Text":"In short, what we get is v_1 over Lambda_1."},{"Start":"05:10.235 ","End":"05:13.425","Text":"I should really say, Lambda_i inverse v_i."},{"Start":"05:13.425 ","End":"05:16.830","Text":"But we can write this for short as v_i over Lambda_i."},{"Start":"05:16.830 ","End":"05:19.135","Text":"The transpose stays."},{"Start":"05:19.135 ","End":"05:25.860","Text":"Now in words, what this says is take each of the v_i, divide it by Lambda_i,"},{"Start":"05:25.940 ","End":"05:29.210","Text":"which is the sum of the squares of the elements."},{"Start":"05:29.210 ","End":"05:32.015","Text":"Then at the end, take the transpose."},{"Start":"05:32.015 ","End":"05:33.830","Text":"This is what we had to show."},{"Start":"05:33.830 ","End":"05:37.530","Text":"We\u0027re done with part b and we\u0027ll take a break."}],"ID":27110},{"Watched":false,"Name":"Exercise 5 parts c-d","Duration":"3m 55s","ChapterTopicVideoID":26217,"CourseChapterTopicPlaylistID":253225,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:03.870","Text":"Well, after the break, now we come up to Part C,"},{"Start":"00:03.870 ","End":"00:09.960","Text":"which is to find the inverse of this matrix using the above, meaning use the 2 steps,"},{"Start":"00:09.960 ","End":"00:15.660","Text":"divide each column by the sum of squares of its elements and then transpose the matrix."},{"Start":"00:15.660 ","End":"00:22.390","Text":"But first, we have to check that the columns form an orthogonal basis."},{"Start":"00:22.460 ","End":"00:25.860","Text":"We start by checking the 
columns."},{"Start":"00:25.860 ","End":"00:29.715","Text":"Now, the columns are not 0, any of them."},{"Start":"00:29.715 ","End":"00:34.320","Text":"What remains to check is that the dot product of any 2 of them is 0."},{"Start":"00:34.320 ","End":"00:35.595","Text":"That\u0027s fairly easy to do."},{"Start":"00:35.595 ","End":"00:41.865","Text":"The middle 1 certainly is orthogonal to the other 2 by the position of the 0s."},{"Start":"00:41.865 ","End":"00:43.230","Text":"The first and last 1,"},{"Start":"00:43.230 ","End":"00:48.934","Text":"it\u0027s 1 times minus 2 plus 2 times 1, so it\u0027s 0."},{"Start":"00:48.934 ","End":"00:50.690","Text":"They\u0027re orthogonal, the 3 of them,"},{"Start":"00:50.690 ","End":"00:52.160","Text":"so they\u0027re an orthogonal basis,"},{"Start":"00:52.160 ","End":"00:54.865","Text":"so we can apply the technique."},{"Start":"00:54.865 ","End":"00:57.495","Text":"Now if we apply Part B,"},{"Start":"00:57.495 ","End":"01:01.810","Text":"first thing to do is to take the sum of the squares of the entries for the first column."},{"Start":"01:01.810 ","End":"01:05.060","Text":"It comes out 5, then 16, then 5."},{"Start":"01:05.060 ","End":"01:08.410","Text":"Next step is to divide the columns by these numbers."},{"Start":"01:08.410 ","End":"01:11.880","Text":"We take 1/5 in the first column,"},{"Start":"01:11.880 ","End":"01:14.075","Text":"1/16 to the second column,"},{"Start":"01:14.075 ","End":"01:15.985","Text":"1/5 to the third column,"},{"Start":"01:15.985 ","End":"01:18.270","Text":"and just put them side-by-side,"},{"Start":"01:18.270 ","End":"01:22.065","Text":"so we have a 3 by 3 matrix which is this."},{"Start":"01:22.065 ","End":"01:26.580","Text":"The last step remember is the transpose."},{"Start":"01:26.580 ","End":"01:28.920","Text":"The transpose of this,"},{"Start":"01:28.920 ","End":"01:33.215","Text":"the only thing is to move the minus from here to here."},{"Start":"01:33.215 
","End":"01:34.940","Text":"It\u0027s mostly symmetrical."},{"Start":"01:34.940 ","End":"01:36.380","Text":"This is the answer."},{"Start":"01:36.380 ","End":"01:38.140","Text":"This is the inverse,"},{"Start":"01:38.140 ","End":"01:43.785","Text":"and that was a lot easier than the lengthy computation of the inverse the regular way."},{"Start":"01:43.785 ","End":"01:46.090","Text":"Next, we come to Part D,"},{"Start":"01:46.090 ","End":"01:51.410","Text":"which is also to use techniques above to find the inverse of a matrix."},{"Start":"01:51.410 ","End":"01:53.555","Text":"This time it\u0027s this 1."},{"Start":"01:53.555 ","End":"01:58.730","Text":"Unfortunately, this time, the columns are not orthogonal."},{"Start":"01:58.730 ","End":"02:03.210","Text":"For example, the first dot product with the second,"},{"Start":"02:03.210 ","End":"02:09.000","Text":"it\u0027s 1 plus 4 minus 0.5."},{"Start":"02:09.000 ","End":"02:11.460","Text":"It\u0027s not equal to 0,"},{"Start":"02:11.460 ","End":"02:13.445","Text":"so these 2 are not orthogonal."},{"Start":"02:13.445 ","End":"02:15.230","Text":"Looks like we can\u0027t do it,"},{"Start":"02:15.230 ","End":"02:20.660","Text":"but all is not lost because it turns out that the rows are orthogonal."},{"Start":"02:20.660 ","End":"02:25.790","Text":"In other words, if you transpose this matrix, a transpose,"},{"Start":"02:25.790 ","End":"02:29.420","Text":"then luckily, these are orthogonal."},{"Start":"02:29.420 ","End":"02:33.620","Text":"You could do the check 1 times 2 is 2,"},{"Start":"02:33.620 ","End":"02:37.010","Text":"1 times 2 is 2, altogether 4."},{"Start":"02:37.010 ","End":"02:40.070","Text":"Root 2 times root 8 is root 16."},{"Start":"02:40.070 ","End":"02:43.205","Text":"That\u0027s 4 with a minus so that\u0027s 0."},{"Start":"02:43.205 ","End":"02:48.260","Text":"Similarly with the other combinations, so these are orthogonal."},{"Start":"02:48.260 ","End":"02:51.945","Text":"Now we can apply the Part B 
above."},{"Start":"02:51.945 ","End":"02:55.385","Text":"The first thing is to take the sum of the squares of each column."},{"Start":"02:55.385 ","End":"02:57.295","Text":"I\u0027ll just give you the answer,"},{"Start":"02:57.295 ","End":"02:59.200","Text":"it\u0027s just computation."},{"Start":"02:59.200 ","End":"03:03.110","Text":"1 squared plus 1 squared plus root 2 squared is 4,"},{"Start":"03:03.110 ","End":"03:05.615","Text":"then you get 16 and then 1."},{"Start":"03:05.615 ","End":"03:12.310","Text":"Then we divide 1/4, 1/16, 1/1 in each of the columns,"},{"Start":"03:12.310 ","End":"03:14.840","Text":"and that\u0027s the result of the first step."},{"Start":"03:14.840 ","End":"03:16.670","Text":"But we also have a second step,"},{"Start":"03:16.670 ","End":"03:20.255","Text":"where we have to take the transpose of this."},{"Start":"03:20.255 ","End":"03:23.555","Text":"That gives us the inverse of a transpose."},{"Start":"03:23.555 ","End":"03:28.985","Text":"How do we get from the inverse of a transpose to the inverse of a?"},{"Start":"03:28.985 ","End":"03:31.700","Text":"Well, we know there\u0027s a formula that"},{"Start":"03:31.700 ","End":"03:35.920","Text":"the transpose inverse and the inverse transpose are the same thing."},{"Start":"03:35.920 ","End":"03:39.245","Text":"This is a inverse transpose."},{"Start":"03:39.245 ","End":"03:41.900","Text":"These 2 transposes are equal,"},{"Start":"03:41.900 ","End":"03:45.490","Text":"so we can just drop the transpose."},{"Start":"03:45.490 ","End":"03:51.200","Text":"What we get is that a inverse is equal to this."},{"Start":"03:51.200 ","End":"03:52.790","Text":"That\u0027s the answer."},{"Start":"03:52.790 ","End":"03:55.560","Text":"That concludes this exercise."}],"ID":27121},{"Watched":false,"Name":"Exercise 6","Duration":"4m 
4s","ChapterTopicVideoID":26207,"CourseChapterTopicPlaylistID":253225,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.260","Text":"In this exercise, we\u0027re asked to prove a theorem that if"},{"Start":"00:04.260 ","End":"00:08.265","Text":"B and C are orthonormal bases of R^n,"},{"Start":"00:08.265 ","End":"00:14.310","Text":"then the change of basis matrix from B to C is an orthogonal matrix."},{"Start":"00:14.310 ","End":"00:19.350","Text":"Now, B is some u_1 up to u_n,"},{"Start":"00:19.350 ","End":"00:21.690","Text":"and orthonormal means that"},{"Start":"00:21.690 ","End":"00:27.720","Text":"u_i.u_j is either 1 if i equals j and 0 if i is not equal to j."},{"Start":"00:27.720 ","End":"00:31.170","Text":"Similarly, C is an orthonormal basis v_1 to v_n,"},{"Start":"00:31.170 ","End":"00:34.685","Text":"so again, v_i v_j is either 1 or 0."},{"Start":"00:34.685 ","End":"00:37.190","Text":"Now because B is a basis,"},{"Start":"00:37.190 ","End":"00:39.480","Text":"any vector, in particular,"},{"Start":"00:39.480 ","End":"00:44.240","Text":"v_i is a linear combination of u_1 to u_n,"},{"Start":"00:44.240 ","End":"00:52.365","Text":"this is true for i equals 1 to n. 
We get that v_1 is a linear combination of u_1 to u_n,"},{"Start":"00:52.365 ","End":"00:54.160","Text":"v_2 up to v_n,"},{"Start":"00:54.160 ","End":"00:59.230","Text":"we have a linear combination and we get all these coefficients and they form a matrix."},{"Start":"00:59.230 ","End":"01:01.645","Text":"And the change of basis matrix"},{"Start":"01:01.645 ","End":"01:05.780","Text":"is the transpose of the matrix of coefficients,"},{"Start":"01:05.780 ","End":"01:11.510","Text":"so just the transpose of a_11, a_12 to a_1n,"},{"Start":"01:11.510 ","End":"01:13.625","Text":"and we take here the first column,"},{"Start":"01:13.625 ","End":"01:15.140","Text":"a_11, a_21,"},{"Start":"01:15.140 ","End":"01:19.525","Text":"a_n1, and so on, the transpose of these."},{"Start":"01:19.525 ","End":"01:22.860","Text":"If we call each column w_i,"},{"Start":"01:22.860 ","End":"01:25.590","Text":"w_1, w_2 up to w_n,"},{"Start":"01:25.590 ","End":"01:28.380","Text":"we can write it like this."},{"Start":"01:28.380 ","End":"01:34.445","Text":"What we have to show is that this matrix is orthogonal,"},{"Start":"01:34.445 ","End":"01:39.530","Text":"meaning that the columns are orthonormal. 
In other words, the w_i."},{"Start":"01:39.530 ","End":"01:44.780","Text":"w_j is 1 or 0 depending on if i equals j or not."},{"Start":"01:44.780 ","End":"01:50.614","Text":"Now the way we\u0027ll do this is to compute v_i.v_j in general,"},{"Start":"01:50.614 ","End":"01:54.750","Text":"I\u0027m copying from here each v_i,"},{"Start":"01:54.750 ","End":"01:57.270","Text":"let\u0027s look at v_2, for example."},{"Start":"01:57.270 ","End":"02:01.010","Text":"The 2 appears here and here and here in all the first index and"},{"Start":"02:01.010 ","End":"02:05.030","Text":"the second index goes from 1-n, so we\u0027ve got a_i1,"},{"Start":"02:05.030 ","End":"02:07.860","Text":"u_1, a_i2, u_2, and so on."},{"Start":"02:07.860 ","End":"02:10.770","Text":"Similarly for j, a_j1 u_1,"},{"Start":"02:10.770 ","End":"02:13.115","Text":"a_j2 u_2, and so on."},{"Start":"02:13.115 ","End":"02:17.405","Text":"This dot-product is going to actually contain n squared terms."},{"Start":"02:17.405 ","End":"02:22.115","Text":"But we don\u0027t really need all n squared terms because a lot of them are 0."},{"Start":"02:22.115 ","End":"02:25.750","Text":"I claim that we just need to take the ones where u_1 is"},{"Start":"02:25.750 ","End":"02:29.470","Text":"with u_1 and u_2 is with u_2 and so on,"},{"Start":"02:29.470 ","End":"02:31.020","Text":"u_n is with u_n,"},{"Start":"02:31.020 ","End":"02:35.990","Text":"while ones like u_1 with u_2 will give us 0."},{"Start":"02:35.990 ","End":"02:43.310","Text":"I mean, we have this property that u_k with u_l (I don\u0027t want to use i and j again) is 1 if k"},{"Start":"02:43.310 ","End":"02:46.700","Text":"equals l and 0 if k is not equal to l. 
We only need to"},{"Start":"02:46.700 ","End":"02:50.835","Text":"take these because all the rest will be 0s."},{"Start":"02:50.835 ","End":"02:55.500","Text":"In other words we throw out the u_i.u_i because"},{"Start":"02:55.500 ","End":"03:01.545","Text":"u_k.u_l is 1 if k equals l, so we\u0027re just left with a_i1,"},{"Start":"03:01.545 ","End":"03:05.975","Text":"a_j1, a_i2, a_j2, this sum,"},{"Start":"03:05.975 ","End":"03:11.315","Text":"and this is exactly equal to the dot product of the following 2 column matrices."},{"Start":"03:11.315 ","End":"03:17.490","Text":"But these matrices are exactly the columns from here."},{"Start":"03:17.490 ","End":"03:19.860","Text":"If i or j is, let\u0027s say, 2, we have a_21,"},{"Start":"03:19.860 ","End":"03:21.690","Text":"a_22 up to a_2n."},{"Start":"03:21.690 ","End":"03:25.170","Text":"The second index goes from 1-n. Just like here,"},{"Start":"03:25.170 ","End":"03:27.990","Text":"the i is fixed then we go from 1-n,"},{"Start":"03:27.990 ","End":"03:33.230","Text":"so this is w_i. 
w_j."},{"Start":"03:33.230 ","End":"03:37.665","Text":"w_i.w_j is equal to,"},{"Start":"03:37.665 ","End":"03:42.575","Text":"first of all, v_i.v_j because that\u0027s what we expanded, v_i.v_j is w_i.w_j."},{"Start":"03:42.575 ","End":"03:46.730","Text":"But more importantly, this is equal to this"},{"Start":"03:46.730 ","End":"03:50.690","Text":"because v_i.v_j is 1 or 0 according to whether i equals j,"},{"Start":"03:50.690 ","End":"03:54.020","Text":"i is not equal to j so w_i.w_j is this."},{"Start":"03:54.020 ","End":"03:58.675","Text":"This means that the w\u0027s are an orthonormal basis."},{"Start":"03:58.675 ","End":"04:00.470","Text":"It\u0027s still up here."},{"Start":"04:00.470 ","End":"04:05.300","Text":"This is what we had to show and we\u0027ve shown it, we\u0027re done."}],"ID":27111},{"Watched":false,"Name":"Exercise 7","Duration":"4m 38s","ChapterTopicVideoID":26208,"CourseChapterTopicPlaylistID":253225,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:06.240","Text":"In this exercise, we have 2 bases, B and C, of R^n,"},{"Start":"00:06.240 ","End":"00:10.890","Text":"and A is a change of basis matrix from B to C."},{"Start":"00:10.890 ","End":"00:14.160","Text":"We\u0027re given that A is orthogonal,"},{"Start":"00:14.160 ","End":"00:21.434","Text":"we have to prove that B is an orthonormal basis if and only if C is an orthonormal basis."},{"Start":"00:21.434 ","End":"00:25.305","Text":"Part a, if B is orthonormal,"},{"Start":"00:25.305 ","End":"00:31.620","Text":"then that means that u_i.u_j is 1 or 0 as usual."},{"Start":"00:31.620 ","End":"00:35.880","Text":"What we need to do is to prove that C is orthonormal."},{"Start":"00:35.880 ","End":"00:41.535","Text":"In other words, that v_i.v_j is also 1 or 0 accordingly."},{"Start":"00:41.535 ","End":"00:43.395","Text":"Let\u0027s 
prove that."},{"Start":"00:43.395 ","End":"00:47.690","Text":"Let\u0027s say that the matrix a is given by the following coefficients,"},{"Start":"00:47.690 ","End":"00:50.420","Text":"a_11, a_12, a_1n, and so on."},{"Start":"00:50.420 ","End":"00:53.915","Text":"Then, because it\u0027s a change of basis matrix,"},{"Start":"00:53.915 ","End":"00:58.385","Text":"if we express the v_i as linear combinations of the u_j,"},{"Start":"00:58.385 ","End":"01:03.335","Text":"then the coefficients form the transpose matrix of this."},{"Start":"01:03.335 ","End":"01:05.900","Text":"Notice that here a_11, a_12, a_1n."},{"Start":"01:05.900 ","End":"01:08.945","Text":"Here\u0027s the column a_11, a_12, a_1n,"},{"Start":"01:08.945 ","End":"01:10.775","Text":"just the transpose here."},{"Start":"01:10.775 ","End":"01:13.985","Text":"In general, a vector v_i would be a_1i,"},{"Start":"01:13.985 ","End":"01:18.650","Text":"u_1 plus a_2i, u_2 up to a_ni u_n."},{"Start":"01:18.650 ","End":"01:26.480","Text":"Remembering that we have that v_i.v_j is the transpose of v_i times v_j."},{"Start":"01:26.480 ","End":"01:28.370","Text":"This makes the first 1 a row vector."},{"Start":"01:28.370 ","End":"01:30.790","Text":"We have a row vector times a column vector."},{"Start":"01:30.790 ","End":"01:35.510","Text":"This is equal to the transpose of what we just said earlier."},{"Start":"01:35.510 ","End":"01:39.040","Text":"Then the same thing with j instead of the i."},{"Start":"01:39.040 ","End":"01:43.235","Text":"This is equal to, write it in matrix form."},{"Start":"01:43.235 ","End":"01:46.415","Text":"Here, we have the column vectors u_1 to u_n,"},{"Start":"01:46.415 ","End":"01:50.000","Text":"and if we have a column vector a_1i, a_2i,"},{"Start":"01:50.000 ","End":"01:57.400","Text":"and so on, the product of this will be this coefficient multiplying this vector."},{"Start":"01:58.940 ","End":"02:02.495","Text":"After it, we have to take the transpose and here,"},{"Start":"02:02.495 
","End":"02:08.510","Text":"this combination, we can take the coefficients and put them in a column vector."},{"Start":"02:08.510 ","End":"02:12.320","Text":"Again, we have a_1i times u_1,"},{"Start":"02:12.320 ","End":"02:16.200","Text":"just standard working with matrices."},{"Start":"02:16.970 ","End":"02:19.400","Text":"Then we take the transpose,"},{"Start":"02:19.400 ","End":"02:22.490","Text":"so we have to reverse the order and then take the transpose of each."},{"Start":"02:22.490 ","End":"02:25.400","Text":"It\u0027s this transpose times this transpose."},{"Start":"02:25.400 ","End":"02:28.630","Text":"This transpose reverses rows and columns."},{"Start":"02:28.630 ","End":"02:34.430","Text":"This column becomes a row vector with a transpose times this, times this."},{"Start":"02:34.430 ","End":"02:44.240","Text":"Now, each u_i transpose with u_j, not transposed, will give us a dot-product."},{"Start":"02:44.240 ","End":"02:47.060","Text":"In each place we have an ij dot-product,"},{"Start":"02:47.060 ","End":"02:51.120","Text":"and we have n squared of these, a matrix full."},{"Start":"02:51.290 ","End":"02:54.320","Text":"We know that the ones on the diagonal"},{"Start":"02:54.320 ","End":"02:57.925","Text":"are equal to 1 and everything else is 0."},{"Start":"02:57.925 ","End":"03:00.225","Text":"This is the identity matrix,"},{"Start":"03:00.225 ","End":"03:02.900","Text":"so in the product, this can just be thrown out,"},{"Start":"03:02.900 ","End":"03:05.090","Text":"and we just get this times this."},{"Start":"03:05.090 ","End":"03:10.550","Text":"Now, this transpose times this, a row of this"},{"Start":"03:10.550 ","End":"03:14.410","Text":"times this, is the dot product of these 2."},{"Start":"03:14.410 ","End":"03:20.000","Text":"It\u0027s the ith column of a dot product with the jth column of a."},{"Start":"03:20.000 ","End":"03:22.385","Text":"Because A is orthogonal,"},{"Start":"03:22.385 ","End":"03:28.955","Text":"this dot-product will give 
us 1 or 0 according to whether i equals j or not."},{"Start":"03:28.955 ","End":"03:30.380","Text":"This is what we wanted to show."},{"Start":"03:30.380 ","End":"03:34.520","Text":"This is v_i.v_j, and that\u0027s part a."},{"Start":"03:34.520 ","End":"03:39.515","Text":"Now part b, it\u0027s going to follow from the first part."},{"Start":"03:39.515 ","End":"03:44.315","Text":"See, if A is the change of basis matrix from B to C,"},{"Start":"03:44.315 ","End":"03:47.585","Text":"then by a well-known theorem,"},{"Start":"03:47.585 ","End":"03:52.595","Text":"A inverse is the change of basis matrix from C back to B."},{"Start":"03:52.595 ","End":"03:57.325","Text":"They\u0027re inverses of each other from B to C or C to B inverse matrices."},{"Start":"03:57.325 ","End":"04:00.165","Text":"A is orthogonal."},{"Start":"04:00.165 ","End":"04:02.760","Text":"We showed that if A is orthogonal,"},{"Start":"04:02.760 ","End":"04:05.390","Text":"then so is A inverse in a different exercise,"},{"Start":"04:05.390 ","End":"04:09.200","Text":"we showed that A inverse is A transpose and A transpose is orthogonal."},{"Start":"04:09.200 ","End":"04:15.360","Text":"We have that, and C is an orthonormal basis."},{"Start":"04:15.360 ","End":"04:19.550","Text":"If we switch the roles, switching the roles of C and B,"},{"Start":"04:19.550 ","End":"04:22.675","Text":"and switching the roles of A and A inverse,"},{"Start":"04:22.675 ","End":"04:27.240","Text":"it\u0027s as if A is a change of basis from B to C,"},{"Start":"04:27.240 ","End":"04:31.160","Text":"only it\u0027s A inverse from C to B, but it\u0027s all symmetrical."},{"Start":"04:31.160 ","End":"04:35.555","Text":"It follows that B is also an orthonormal basis."},{"Start":"04:35.555 ","End":"04:38.920","Text":"That concludes part b and we\u0027re done."}],"ID":27112},{"Watched":false,"Name":"Exercise 8","Duration":"3m 
59s","ChapterTopicVideoID":26209,"CourseChapterTopicPlaylistID":253225,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.800","Text":"In this exercise, A is an orthogonal matrix of order n,"},{"Start":"00:04.800 ","End":"00:11.190","Text":"and we have to prove that there are 2 orthonormal bases B and C of R^n,"},{"Start":"00:11.190 ","End":"00:18.345","Text":"such that A is the change of basis matrix from B to C. Then in part b,"},{"Start":"00:18.345 ","End":"00:22.395","Text":"we\u0027re given a vector in R^n whose norm is 1."},{"Start":"00:22.395 ","End":"00:26.880","Text":"We have to show that there exists an orthogonal matrix"},{"Start":"00:26.880 ","End":"00:33.255","Text":"whose first column is exactly the vector v. We\u0027ll start with part a."},{"Start":"00:33.255 ","End":"00:39.710","Text":"Suppose that a has these entries using double indexing,"},{"Start":"00:39.710 ","End":"00:45.200","Text":"a_ij, we\u0027ll choose B first, we let it be the standard basis E,"},{"Start":"00:45.200 ","End":"00:48.640","Text":"which you can write as e_1 to e_n."},{"Start":"00:48.640 ","End":"00:53.875","Text":"We\u0027ll choose C as the column vectors of A."},{"Start":"00:53.875 ","End":"00:57.470","Text":"It\u0027s not a standard notation, but think it\u0027s understandable."},{"Start":"00:57.470 ","End":"01:00.125","Text":"We call 1 of A, means first column of A,"},{"Start":"01:00.125 ","End":"01:03.475","Text":"second column of A, nth column of A."},{"Start":"01:03.475 ","End":"01:05.690","Text":"Now that I\u0027ve defined B and C,"},{"Start":"01:05.690 ","End":"01:09.260","Text":"all that remains is to show that A is the change of"},{"Start":"01:09.260 ","End":"01:13.575","Text":"basis matrix from B to C. 
Let\u0027s check that,"},{"Start":"01:13.575 ","End":"01:19.260","Text":"denote v_i as the column i of A, this is v_1 to v_n,"},{"Start":"01:19.260 ","End":"01:24.930","Text":"v_i will just be a_1i, a_2i up to a_ni."},{"Start":"01:24.930 ","End":"01:29.420","Text":"The second coordinate remains i, just like when i is 1,"},{"Start":"01:29.420 ","End":"01:31.715","Text":"we have all 1s here, and when i is 2, all 2s here."},{"Start":"01:31.715 ","End":"01:37.499","Text":"Take column i, they\u0027ll be all i in the second index."},{"Start":"01:37.499 ","End":"01:41.270","Text":"We can break this up in terms of the standard basis."},{"Start":"01:41.270 ","End":"01:45.800","Text":"A_1i times this plus a_2i times this."},{"Start":"01:45.800 ","End":"01:50.625","Text":"This is actually e_1, e_2, e_n standard basis vectors."},{"Start":"01:50.625 ","End":"01:56.550","Text":"Write that out. Now, if we do this for i equals 1 to n,"},{"Start":"01:56.550 ","End":"01:59.840","Text":"we get n equations as follows."},{"Start":"01:59.840 ","End":"02:02.730","Text":"Just this with i equals 1, 2 up to n."},{"Start":"02:02.730 ","End":"02:08.570","Text":"Now a reminder, the e_i are the standard basis, and that\u0027s what we called B."},{"Start":"02:08.570 ","End":"02:10.640","Text":"The columns of a are v_i,"},{"Start":"02:10.640 ","End":"02:15.650","Text":"and v_1 through v_n is what we call basis C. 
The coefficients"},{"Start":"02:15.650 ","End":"02:20.825","Text":"here are not exactly the change of basis matrix from B to C, but its transpose."},{"Start":"02:20.825 ","End":"02:23.785","Text":"If you transpose these coefficients,"},{"Start":"02:23.785 ","End":"02:29.015","Text":"the transpose of this is this with a transpose here."},{"Start":"02:29.015 ","End":"02:31.400","Text":"If you transpose this, you get a_11,"},{"Start":"02:31.400 ","End":"02:34.870","Text":"a_12 up to a_1n, we get exactly the matrix a."},{"Start":"02:34.870 ","End":"02:37.475","Text":"The change of basis from B to C is a,"},{"Start":"02:37.475 ","End":"02:40.445","Text":"and that concludes the first half of this question."},{"Start":"02:40.445 ","End":"02:45.470","Text":"Now part B, I guess I should copy the original question,"},{"Start":"02:45.470 ","End":"02:47.405","Text":"it\u0027s here for reference."},{"Start":"02:47.405 ","End":"02:54.335","Text":"Let v_1 be the given v, whose norm is 1."},{"Start":"02:54.335 ","End":"02:56.945","Text":"v_1 by itself is an orthogonal set."},{"Start":"02:56.945 ","End":"02:58.550","Text":"It\u0027s even orthonormal."},{"Start":"02:58.550 ","End":"03:02.515","Text":"Any non-zero vector on its own is orthogonal."},{"Start":"03:02.515 ","End":"03:09.330","Text":"We can use the Gram-Schmidt process to complete the set to an orthogonal basis."},{"Start":"03:09.330 ","End":"03:13.375","Text":"If you don\u0027t know the Gram-Schmidt process then skip this exercise."},{"Start":"03:13.375 ","End":"03:17.855","Text":"I complete it to an orthogonal basis, v_1 to v_n,"},{"Start":"03:17.855 ","End":"03:22.460","Text":"where v_1 is our v. 
Once we\u0027ve got an orthogonal basis,"},{"Start":"03:22.460 ","End":"03:27.520","Text":"we can normalize it to an orthonormal basis by dividing each vector by its norm,"},{"Start":"03:27.520 ","End":"03:33.280","Text":"so let that be w_1 to w_n, where w_i is v_i over the norm of v_i,"},{"Start":"03:33.280 ","End":"03:36.170","Text":"then each of these will have norm of 1."},{"Start":"03:36.170 ","End":"03:37.340","Text":"They\u0027ll still be orthogonal,"},{"Start":"03:37.340 ","End":"03:40.270","Text":"so it\u0027s an orthonormal basis."},{"Start":"03:40.270 ","End":"03:43.650","Text":"The first vector, which is w_1,"},{"Start":"03:43.650 ","End":"03:47.670","Text":"is the same as v_1 because the norm of v_1 is 1,"},{"Start":"03:47.670 ","End":"03:49.845","Text":"so we\u0027re not dividing by anything."},{"Start":"03:49.845 ","End":"03:53.810","Text":"We\u0027ve now got an orthonormal basis where the first 1 is"},{"Start":"03:53.810 ","End":"03:59.130","Text":"exactly our given v, and we\u0027re done."}],"ID":27113}],"Thumbnail":null,"ID":253225},{"Name":"Orthogonal Transformations","TopicPlaylistFirstVideoID":0,"Duration":null,"Videos":[{"Watched":false,"Name":"Definition","Duration":"6m 26s","ChapterTopicVideoID":26224,"CourseChapterTopicPlaylistID":253226,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:03.345","Text":"New topic, Orthogonal Transformation."},{"Start":"00:03.345 ","End":"00:05.325","Text":"We\u0027ll start with the definition."},{"Start":"00:05.325 ","End":"00:09.150","Text":"T is a linear transformation from R^n to R^n,"},{"Start":"00:09.150 ","End":"00:10.800","Text":"where n is finite."},{"Start":"00:10.800 ","End":"00:14.820","Text":"T is called an orthogonal transformation or an orthogonal linear"},{"Start":"00:14.820 
","End":"00:19.260","Text":"transformation if it preserves the inner product."},{"Start":"00:19.260 ","End":"00:21.960","Text":"What do I mean by preserves?"},{"Start":"00:21.960 ","End":"00:24.180","Text":"Means that if we take 2 vectors,"},{"Start":"00:24.180 ","End":"00:27.720","Text":"u and v, and apply T to each of them,"},{"Start":"00:27.720 ","End":"00:31.830","Text":"then the dot product of T of u with T of v is the same as"},{"Start":"00:31.830 ","End":"00:35.940","Text":"the original dot product of u and v. If we take a couple of vectors,"},{"Start":"00:35.940 ","End":"00:37.320","Text":"apply T to them,"},{"Start":"00:37.320 ","End":"00:40.220","Text":"the vectors change but the dot product doesn\u0027t."},{"Start":"00:40.220 ","End":"00:42.320","Text":"Now, remark and notation,"},{"Start":"00:42.320 ","End":"00:48.015","Text":"the inner product is also called the scalar product,"},{"Start":"00:48.015 ","End":"00:52.280","Text":"and of course, dot product informally because we write it with a dot."},{"Start":"00:52.280 ","End":"00:56.300","Text":"We write u.v or angular brackets, u,"},{"Start":"00:56.300 ","End":"01:00.020","Text":"v. The other remark I wanted to make about notation is that we"},{"Start":"01:00.020 ","End":"01:05.405","Text":"sometimes write just plain Tu instead of T brackets u,"},{"Start":"01:05.405 ","End":"01:07.415","Text":"you can just write it like that."},{"Start":"01:07.415 ","End":"01:14.570","Text":"As an example, let\u0027s take the linear transformation from R^2 to R^2 given by T of x,"},{"Start":"01:14.570 ","End":"01:16.160","Text":"y is y, x."},{"Start":"01:16.160 ","End":"01:20.930","Text":"Just switch x and y. 
I claim that this T is an orthogonal transformation."},{"Start":"01:20.930 ","End":"01:23.825","Text":"Let\u0027s prove it straight from the definition."},{"Start":"01:23.825 ","End":"01:27.735","Text":"Choose u and v in R^2 in this case."},{"Start":"01:27.735 ","End":"01:29.340","Text":"Let\u0027s say that u is a,"},{"Start":"01:29.340 ","End":"01:30.900","Text":"b and v is c,"},{"Start":"01:30.900 ","End":"01:35.360","Text":"d. We need to show that the dot product of Tu with Tv is the"},{"Start":"01:35.360 ","End":"01:40.120","Text":"same as the dot product of u with v. Let\u0027s compute them both."},{"Start":"01:40.120 ","End":"01:42.690","Text":"Let\u0027s see. First of all, T of u. T of v,"},{"Start":"01:42.690 ","End":"01:44.240","Text":"what is that equal to?"},{"Start":"01:44.240 ","End":"01:46.070","Text":"It\u0027s T applied to a,"},{"Start":"01:46.070 ","End":"01:48.960","Text":"b. T applied to c,"},{"Start":"01:48.960 ","End":"01:50.820","Text":"d. Now, T of a,"},{"Start":"01:50.820 ","End":"01:52.710","Text":"b is b, a."},{"Start":"01:52.710 ","End":"01:54.750","Text":"It reverses the coordinates."},{"Start":"01:54.750 ","End":"01:56.475","Text":"Here, d, c,"},{"Start":"01:56.475 ","End":"02:01.855","Text":"the dot product is this times this plus this times this, bd plus ac."},{"Start":"02:01.855 ","End":"02:05.190","Text":"On the other hand, u. v is a,"},{"Start":"02:05.190 ","End":"02:06.840","Text":"b. c, d,"},{"Start":"02:06.840 ","End":"02:09.460","Text":"which is ac plus bd."},{"Start":"02:09.460 ","End":"02:11.240","Text":"It looks slightly different,"},{"Start":"02:11.240 ","End":"02:13.360","Text":"but in fact, it\u0027s the same."},{"Start":"02:13.360 ","End":"02:17.780","Text":"T of u. T of v is u. 
v for all u and v,"},{"Start":"02:17.780 ","End":"02:19.910","Text":"and so it\u0027s orthogonal."},{"Start":"02:19.910 ","End":"02:24.560","Text":"Now a theorem which we\u0027ll present but won\u0027t prove,"},{"Start":"02:24.560 ","End":"02:26.260","Text":"we\u0027ll prove it in the exercises,"},{"Start":"02:26.260 ","End":"02:31.395","Text":"if T is a linear transformation from R^n to R^n,"},{"Start":"02:31.395 ","End":"02:37.445","Text":"then T is orthogonal if and only if it preserves the norm."},{"Start":"02:37.445 ","End":"02:39.545","Text":"What does it mean preserves the norm?"},{"Start":"02:39.545 ","End":"02:42.365","Text":"Means if we take the norm of T of u,"},{"Start":"02:42.365 ","End":"02:44.420","Text":"it\u0027s the same as the norm of u."},{"Start":"02:44.420 ","End":"02:47.600","Text":"We\u0027ll remind you what the norm is, in general,"},{"Start":"02:47.600 ","End":"02:53.990","Text":"the norm of a vector is the square root of the vector dot product with itself,"},{"Start":"02:53.990 ","End":"02:57.170","Text":"inner product, scalar product, like so."},{"Start":"02:57.170 ","End":"03:03.800","Text":"I want to remind you of the formula that if v is a vector in R^n,"},{"Start":"03:03.800 ","End":"03:11.405","Text":"then the norm of v is the square root of the sum of the squares of the coordinates."},{"Start":"03:11.405 ","End":"03:14.660","Text":"Let\u0027s check that this is so with the example we had"},{"Start":"03:14.660 ","End":"03:19.905","Text":"above where we had that T reverses the coordinates."},{"Start":"03:19.905 ","End":"03:22.560","Text":"We showed it preserves the dot product."},{"Start":"03:22.560 ","End":"03:25.290","Text":"Now, let\u0027s show that it preserves the norm."},{"Start":"03:25.290 ","End":"03:28.925","Text":"Let u be any vector in R^2,"},{"Start":"03:28.925 ","End":"03:30.900","Text":"let\u0027s say a, b."},{"Start":"03:30.900 ","End":"03:34.200","Text":"The norm of u is the norm of the vector a, b,"},{"Start":"03:34.200 
","End":"03:36.080","Text":"which is, using this formula,"},{"Start":"03:36.080 ","End":"03:38.845","Text":"square root of a squared plus b squared."},{"Start":"03:38.845 ","End":"03:41.670","Text":"T of u is b, a,"},{"Start":"03:41.670 ","End":"03:43.890","Text":"so the norm of T of u is the norm of b,"},{"Start":"03:43.890 ","End":"03:46.140","Text":"a, which is the square root of b squared plus a squared."},{"Start":"03:46.140 ","End":"03:48.315","Text":"Obviously, these 2 are equal,"},{"Start":"03:48.315 ","End":"03:50.925","Text":"so norm of u equals norm of T of u."},{"Start":"03:50.925 ","End":"03:53.600","Text":"In our example, it certainly works."},{"Start":"03:53.600 ","End":"03:56.270","Text":"Here, T preserves the norm."},{"Start":"03:56.270 ","End":"03:58.790","Text":"Continuing, I\u0027m going to show you how"},{"Start":"03:58.790 ","End":"04:02.600","Text":"orthogonal transformations are related to matrices."},{"Start":"04:02.600 ","End":"04:09.455","Text":"We know in general that if we have a linear transformation from R^n to R^n, call it T,"},{"Start":"04:09.455 ","End":"04:15.275","Text":"then there\u0027s a matrix that corresponds to T. This is an n by n matrix, call it A,"},{"Start":"04:15.275 ","End":"04:21.540","Text":"such that T of v is A times v for all v. 
Conversely,"},{"Start":"04:21.540 ","End":"04:23.420","Text":"if we have an n by n matrix A,"},{"Start":"04:23.420 ","End":"04:29.165","Text":"then it defines a linear transformation T by T of v equals Av."},{"Start":"04:29.165 ","End":"04:30.920","Text":"It\u0027s a 2-way thing."},{"Start":"04:30.920 ","End":"04:34.295","Text":"Given the transformation, we have a matrix and vice versa."},{"Start":"04:34.295 ","End":"04:36.875","Text":"Now, our theorem here,"},{"Start":"04:36.875 ","End":"04:38.540","Text":"and we\u0027ll prove it in the exercise,"},{"Start":"04:38.540 ","End":"04:42.095","Text":"is that if A is an n by n matrix,"},{"Start":"04:42.095 ","End":"04:45.770","Text":"and if T is the corresponding transformation,"},{"Start":"04:45.770 ","End":"04:51.200","Text":"then T is orthogonal if and only if A is orthogonal."},{"Start":"04:51.200 ","End":"04:55.655","Text":"T is orthogonal as a transformation if and only if A is orthogonal as a matrix."},{"Start":"04:55.655 ","End":"05:02.840","Text":"In case you missed out on the definition of orthogonal matrix, here\u0027s the definition."},{"Start":"05:02.840 ","End":"05:10.925","Text":"A matrix is orthogonal if its columns as vectors form an orthonormal set."},{"Start":"05:10.925 ","End":"05:14.930","Text":"Let\u0027s check if this theorem is true at least in our example that we had above."},{"Start":"05:14.930 ","End":"05:16.700","Text":"Remember the 1 where T of x,"},{"Start":"05:16.700 ","End":"05:17.870","Text":"y is y, x,"},{"Start":"05:17.870 ","End":"05:19.640","Text":"just switch x and y."},{"Start":"05:19.640 ","End":"05:22.250","Text":"We show that it\u0027s an orthogonal transformation."},{"Start":"05:22.250 ","End":"05:25.790","Text":"Now, let\u0027s see what\u0027s the corresponding matrix and show that it\u0027s"},{"Start":"05:25.790 ","End":"05:31.345","Text":"an orthogonal matrix using the definition I showed you that the columns are orthonormal."},{"Start":"05:31.345 
","End":"05:37.260","Text":"We can write T with column vectors as T of x,"},{"Start":"05:37.260 ","End":"05:40.320","Text":"y is y, x,"},{"Start":"05:40.320 ","End":"05:42.180","Text":"0, 1 times x,"},{"Start":"05:42.180 ","End":"05:44.550","Text":"y is 0x plus 1y is y,"},{"Start":"05:44.550 ","End":"05:47.130","Text":"so we have y here and then 1, 0 times x,"},{"Start":"05:47.130 ","End":"05:49.020","Text":"y is just x,"},{"Start":"05:49.020 ","End":"05:50.670","Text":"so we get y, x."},{"Start":"05:50.670 ","End":"05:52.320","Text":"The matrix 0, 1, 1,"},{"Start":"05:52.320 ","End":"05:54.665","Text":"0 is indeed orthogonal."},{"Start":"05:54.665 ","End":"05:57.415","Text":"The columns are orthonormal,"},{"Start":"05:57.415 ","End":"06:00.900","Text":"like the norm of this column is 1."},{"Start":"06:00.900 ","End":"06:03.035","Text":"0 squared plus 1 squared is 1,"},{"Start":"06:03.035 ","End":"06:04.490","Text":"square root of 1 is 1."},{"Start":"06:04.490 ","End":"06:08.710","Text":"Similarly, 1, 0 also has a norm of 1,"},{"Start":"06:08.710 ","End":"06:11.295","Text":"and the dot product of the 2 columns,"},{"Start":"06:11.295 ","End":"06:13.845","Text":"0 times 1 plus 1 times 0 is 0."},{"Start":"06:13.845 ","End":"06:15.200","Text":"The columns are orthonormal,"},{"Start":"06:15.200 ","End":"06:16.969","Text":"so the matrix is orthogonal."},{"Start":"06:16.969 ","End":"06:20.330","Text":"Well, that concludes this clip and in the following clips,"},{"Start":"06:20.330 ","End":"06:23.825","Text":"we\u0027ll talk about special orthogonal transformations,"},{"Start":"06:23.825 ","End":"06:27.120","Text":"rotations, and reflections."}],"ID":27128},{"Watched":false,"Name":"Rotation Transformations (2D)","Duration":"2m 
19s","ChapterTopicVideoID":26222,"CourseChapterTopicPlaylistID":253226,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:03.315","Text":"Continuing with orthogonal transformations,"},{"Start":"00:03.315 ","End":"00:09.365","Text":"we\u0027re going to talk about rotation transformations in the plane in 2D."},{"Start":"00:09.365 ","End":"00:13.800","Text":"Linear transformation T from R^2 to R^2,"},{"Start":"00:13.800 ","End":"00:17.475","Text":"it takes the origin to the origin of course. It\u0027s linear."},{"Start":"00:17.475 ","End":"00:20.790","Text":"Normally consider rotations around the origin."},{"Start":"00:20.790 ","End":"00:26.430","Text":"Now let the transformation T be a rotation by angle Theta,"},{"Start":"00:26.430 ","End":"00:29.610","Text":"needless to say counterclockwise."},{"Start":"00:29.610 ","End":"00:34.680","Text":"We\u0027re going to prove in the exercises that the formula for T of x,"},{"Start":"00:34.680 ","End":"00:38.250","Text":"y is what\u0027s written here. 
I won\u0027t read it out."},{"Start":"00:38.250 ","End":"00:41.835","Text":"But in matrix form, the transformation"},{"Start":"00:41.835 ","End":"00:46.470","Text":"multiplies x, y by the matrix cosine Theta minus sine Theta,"},{"Start":"00:46.470 ","End":"00:48.750","Text":"sine Theta cosine Theta."},{"Start":"00:48.750 ","End":"00:53.615","Text":"This is called the rotation matrix for angle Theta."},{"Start":"00:53.615 ","End":"00:57.590","Text":"The reason we\u0027re talking about rotations is that they\u0027re an example of"},{"Start":"00:57.590 ","End":"01:01.595","Text":"orthogonal transformations; the proof is in the exercises."},{"Start":"01:01.595 ","End":"01:02.840","Text":"Let\u0027s do an example."},{"Start":"01:02.840 ","End":"01:08.240","Text":"Let\u0027s compute the transformation of rotation by an angle of 45 degrees,"},{"Start":"01:08.240 ","End":"01:11.135","Text":"or if you prefer radians, then Pi over 4."},{"Start":"01:11.135 ","End":"01:16.075","Text":"Using this formula, putting Theta equals Pi over 4,"},{"Start":"01:16.075 ","End":"01:17.885","Text":"we get the following."},{"Start":"01:17.885 ","End":"01:23.225","Text":"Now both the cosine and the sine of 45 degrees is root 2 over 2,"},{"Start":"01:23.225 ","End":"01:26.065","Text":"so we get this matrix,"},{"Start":"01:26.065 ","End":"01:33.265","Text":"and this matrix here is the rotation matrix for 45 degrees."},{"Start":"01:33.265 ","End":"01:36.880","Text":"Let\u0027s compute what this does to x, y."},{"Start":"01:36.880 ","End":"01:40.805","Text":"If you multiply it, you get this times x minus this times y,"},{"Start":"01:40.805 ","End":"01:42.935","Text":"this times x plus this times y,"},{"Start":"01:42.935 ","End":"01:46.485","Text":"and in row form it comes out to be this."},{"Start":"01:46.485 ","End":"01:50.130","Text":"For example, let\u0027s say we take the point 2,"},{"Start":"01:50.130 ","End":"01:52.980","Text":"1, the vector 2, 1 really."},{"Start":"01:52.980 ","End":"01:54.990","Text":"Where does T send that 
to?"},{"Start":"01:54.990 ","End":"02:00.855","Text":"Just plug it in here we get a 1/2 root 2, 3/2 root 2."},{"Start":"02:00.855 ","End":"02:03.140","Text":"Now a picture of that."},{"Start":"02:03.140 ","End":"02:06.350","Text":"This is the original vector, 2, 1,"},{"Start":"02:06.350 ","End":"02:08.570","Text":"this is an angle of 45 degrees,"},{"Start":"02:08.570 ","End":"02:10.055","Text":"and then we rotate it."},{"Start":"02:10.055 ","End":"02:14.780","Text":"We know that it comes out as 1/2 root 2, 3/2 root 2."},{"Start":"02:14.780 ","End":"02:16.985","Text":"That concludes this clip."},{"Start":"02:16.985 ","End":"02:20.100","Text":"Next clip will be reflections."}],"ID":27126},{"Watched":false,"Name":"Reflection Transformations (2D)","Duration":"4m 47s","ChapterTopicVideoID":26221,"CourseChapterTopicPlaylistID":253226,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:03.930","Text":"We\u0027ve just talked about rotations and now we\u0027ll talk about"},{"Start":"00:03.930 ","End":"00:08.985","Text":"reflection transformations in 2D."},{"Start":"00:08.985 ","End":"00:11.115","Text":"Only a special kind,"},{"Start":"00:11.115 ","End":"00:15.540","Text":"because a linear transformation takes 0 to 0,"},{"Start":"00:15.540 ","End":"00:18.265","Text":"or if you like the origin to the origin,"},{"Start":"00:18.265 ","End":"00:21.920","Text":"then the mirror line has to be a line"},{"Start":"00:21.920 ","End":"00:25.490","Text":"through the origin otherwise the origin would reflect,"},{"Start":"00:25.490 ","End":"00:27.220","Text":"it wouldn\u0027t go to itself."},{"Start":"00:27.220 ","End":"00:32.930","Text":"Let\u0027s take the angle that the mirror makes with the positive x-axis and we can make sure"},{"Start":"00:32.930 ","End":"00:38.130","Text":"it\u0027s between 0 and 180 degrees 
and call that Theta."},{"Start":"00:38.130 ","End":"00:41.780","Text":"We have a reflection in the line makes an angle of Theta."},{"Start":"00:41.780 ","End":"00:48.100","Text":"Now if the line doesn\u0027t make an angle of 90 degrees or Pi over 2 with the x-axis,"},{"Start":"00:48.100 ","End":"00:52.085","Text":"then we can write y as a function of x."},{"Start":"00:52.085 ","End":"00:56.000","Text":"It\u0027s just the slope of the line times x."},{"Start":"00:56.000 ","End":"00:58.895","Text":"If it happens to be 90 degrees,"},{"Start":"00:58.895 ","End":"01:01.130","Text":"then the line is the y-axis,"},{"Start":"01:01.130 ","End":"01:05.040","Text":"it\u0027s vertical or the equation x equals 0."},{"Start":"01:05.380 ","End":"01:11.240","Text":"Now we\u0027re going to prove in the exercises that the transformation sends x,"},{"Start":"01:11.240 ","End":"01:14.180","Text":"y to whatever is written here,"},{"Start":"01:14.180 ","End":"01:16.475","Text":"and it involves 2 Theta here."},{"Start":"01:16.475 ","End":"01:19.840","Text":"This formula works for both these cases."},{"Start":"01:19.840 ","End":"01:23.070","Text":"In matrix form, this is much neater,"},{"Start":"01:23.070 ","End":"01:26.510","Text":"it comes out like this and similar to"},{"Start":"01:26.510 ","End":"01:29.450","Text":"the rotation matrix except that the minuses"},{"Start":"01:29.450 ","End":"01:33.100","Text":"here and not here and we have a 2 Theta instead of a Theta."},{"Start":"01:33.100 ","End":"01:38.450","Text":"Just like rotations, reflections are also orthogonal transformations."},{"Start":"01:38.450 ","End":"01:40.820","Text":"The proof is in the exercises."},{"Start":"01:40.820 ","End":"01:44.305","Text":"This matrix, call it A,"},{"Start":"01:44.305 ","End":"01:48.680","Text":"is the reflection matrix corresponding to the angle Theta."},{"Start":"01:48.680 ","End":"01:56.870","Text":"As an example, let\u0027s compute the transformation of the reflection in the line y equals 
x,"},{"Start":"01:56.870 ","End":"01:59.464","Text":"which is a 45-degree line."},{"Start":"01:59.464 ","End":"02:00.830","Text":"Now, y equals x,"},{"Start":"02:00.830 ","End":"02:02.360","Text":"write it as y equals 1x."},{"Start":"02:02.360 ","End":"02:06.725","Text":"We want tangent of Theta to equal 1."},{"Start":"02:06.725 ","End":"02:09.690","Text":"Yeah, tangent Theta\u0027s the slope."},{"Start":"02:09.690 ","End":"02:18.520","Text":"The slope is 1 and Theta is 45 degrees or Pi over 4 radian."},{"Start":"02:18.950 ","End":"02:22.320","Text":"The matrix involves 2 Theta,"},{"Start":"02:22.320 ","End":"02:25.185","Text":"2 Theta would be Pi over 2,"},{"Start":"02:25.185 ","End":"02:29.245","Text":"so like here, cosine sine is sine minus cosine,"},{"Start":"02:29.245 ","End":"02:31.125","Text":"but with Pi over 2."},{"Start":"02:31.125 ","End":"02:34.120","Text":"Cosine of 90 degrees is 0,"},{"Start":"02:34.120 ","End":"02:37.375","Text":"sine of 90 degrees is 1."},{"Start":"02:37.375 ","End":"02:40.930","Text":"The matrix we get is this, yeah,"},{"Start":"02:40.930 ","End":"02:47.050","Text":"this is the reflection matrix corresponding to a line of 45 degrees through the origin."},{"Start":"02:47.050 ","End":"02:50.470","Text":"Now if we multiply this matrix by x y,"},{"Start":"02:50.470 ","End":"02:53.780","Text":"0 1 times x y is y and 1 0 times x,"},{"Start":"02:53.780 ","End":"02:57.170","Text":"y is x, so it basically flips x and y."},{"Start":"02:57.170 ","End":"02:58.955","Text":"We write it in row form,"},{"Start":"02:58.955 ","End":"03:01.100","Text":"t of x, y is y, x."},{"Start":"03:01.100 ","End":"03:04.700","Text":"For example, it sends the vector 2,"},{"Start":"03:04.700 ","End":"03:06.965","Text":"1 to the vector 1,"},{"Start":"03:06.965 ","End":"03:09.605","Text":"2. 
Here\u0027s a diagram."},{"Start":"03:09.605 ","End":"03:12.590","Text":"This is the original vector 2, 1."},{"Start":"03:12.590 ","End":"03:15.230","Text":"This is where it gets sent to 1, 2."},{"Start":"03:15.230 ","End":"03:21.150","Text":"This is the mirror, and it has geometric properties."},{"Start":"03:21.150 ","End":"03:30.050","Text":"For example, this is equal to this and this angle is equal to this angle."},{"Start":"03:30.050 ","End":"03:37.445","Text":"You can also look at it as the mirror is the perpendicular bisector of this segment,"},{"Start":"03:37.445 ","End":"03:41.950","Text":"so that this segment is equal to this."},{"Start":"03:41.950 ","End":"03:45.330","Text":"Here we have a 90 degree angle,"},{"Start":"03:45.330 ","End":"03:47.715","Text":"so that\u0027s how it is geometrically."},{"Start":"03:47.715 ","End":"03:50.465","Text":"Before we end this clip, I want to make a remark."},{"Start":"03:50.465 ","End":"03:53.960","Text":"Something here doesn\u0027t look quite as neat as the rotation,"},{"Start":"03:53.960 ","End":"03:57.050","Text":"because here we had an angle of Theta,"},{"Start":"03:57.050 ","End":"04:01.535","Text":"that\u0027s the angle of the mirror with the x-axis,"},{"Start":"04:01.535 ","End":"04:03.800","Text":"but here it\u0027s 2 Theta."},{"Start":"04:03.800 ","End":"04:08.375","Text":"Some people prefer to have a Theta written here,"},{"Start":"04:08.375 ","End":"04:12.840","Text":"and then this is Theta over 2. 
Yeah, so write that."},{"Start":"04:12.840 ","End":"04:15.980","Text":"If you take the angle between the mirror and"},{"Start":"04:15.980 ","End":"04:19.745","Text":"the positive x-axis to be Theta over 2 instead of Theta,"},{"Start":"04:19.745 ","End":"04:23.360","Text":"then the reflection matrix is without the 2."},{"Start":"04:23.360 ","End":"04:24.830","Text":"It\u0027s just cosine Theta, sine Theta,"},{"Start":"04:24.830 ","End":"04:26.990","Text":"sine Theta minus cosine Theta."},{"Start":"04:26.990 ","End":"04:32.535","Text":"One more thing which relates to both rotation and reflection,"},{"Start":"04:32.535 ","End":"04:39.695","Text":"turns out that every orthogonal transformation in the plane is 1 or the other."},{"Start":"04:39.695 ","End":"04:42.545","Text":"It\u0027s either a reflection or a rotation,"},{"Start":"04:42.545 ","End":"04:44.975","Text":"and that will be proven in the exercises."},{"Start":"04:44.975 ","End":"04:47.490","Text":"That concludes this clip."}],"ID":27125},{"Watched":false,"Name":"Rotation Transformations (3D)","Duration":"4m 29s","ChapterTopicVideoID":26223,"CourseChapterTopicPlaylistID":253226,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:02.610","Text":"We\u0027re still in orthogonal transformations."},{"Start":"00:02.610 ","End":"00:06.840","Text":"We\u0027ve just talked about rotation and reflection in 2D,"},{"Start":"00:06.840 ","End":"00:12.165","Text":"and we\u0027ll just briefly mention rotation transformations in 3D."},{"Start":"00:12.165 ","End":"00:16.410","Text":"We\u0027re only going to consider rotations about 1 of the 3 axis,"},{"Start":"00:16.410 ","End":"00:18.330","Text":"x, y, or z-axis."},{"Start":"00:18.330 ","End":"00:21.030","Text":"Now, remember in the 2D in the x, y plane,"},{"Start":"00:21.030 
","End":"00:25.020","Text":"we had the formula that the rotation is as follows,"},{"Start":"00:25.020 ","End":"00:26.955","Text":"where the angle is Theta,"},{"Start":"00:26.955 ","End":"00:31.175","Text":"and we can actually generalize that to 3D."},{"Start":"00:31.175 ","End":"00:39.140","Text":"Not surprisingly, the formula for rotation about the z, or zed, axis is the following,"},{"Start":"00:39.140 ","End":"00:42.680","Text":"where we take this matrix and just fill it in with a 1 here and"},{"Start":"00:42.680 ","End":"00:46.700","Text":"0s here so that the z is untouched and on x,"},{"Start":"00:46.700 ","End":"00:48.910","Text":"y it\u0027s just like it was here,"},{"Start":"00:48.910 ","End":"00:55.265","Text":"and similarly for the x-axis and the y-axis."},{"Start":"00:55.265 ","End":"00:57.700","Text":"They\u0027re all very similar."},{"Start":"00:57.700 ","End":"01:00.690","Text":"Let\u0027s do an example, an exercise."},{"Start":"01:00.690 ","End":"01:04.400","Text":"Compute the rotation matrix for the transformation which"},{"Start":"01:04.400 ","End":"01:09.125","Text":"rotates 90 degrees about the z, or zed, axis."},{"Start":"01:09.125 ","End":"01:14.120","Text":"Secondly, to see what this transformation does to the vector 1,"},{"Start":"01:14.120 ","End":"01:18.095","Text":"0, 0, it\u0027s a unit vector in the x-direction."},{"Start":"01:18.095 ","End":"01:20.090","Text":"Something that I should have mentioned,"},{"Start":"01:20.090 ","End":"01:21.905","Text":"it\u0027s a bit technical,"},{"Start":"01:21.905 ","End":"01:25.640","Text":"is the question of clockwise or counterclockwise."},{"Start":"01:25.640 ","End":"01:28.745","Text":"How do you know which is the positive direction?"},{"Start":"01:28.745 ","End":"01:31.650","Text":"There\u0027s a diagram here, which tells you."},{"Start":"01:31.650 ","End":"01:35.235","Text":"But basically, if you\u0027re rotating about the z-axis,"},{"Start":"01:35.235 ","End":"01:40.070","Text":"you\u0027re looking along 
the z-axis and you see the plane x,"},{"Start":"01:40.070 ","End":"01:43.370","Text":"y, and you rotate counterclockwise."},{"Start":"01:43.370 ","End":"01:44.075","Text":"That\u0027s this."},{"Start":"01:44.075 ","End":"01:45.905","Text":"Similarly for the other 2,"},{"Start":"01:45.905 ","End":"01:48.520","Text":"I don\u0027t want to get into that technical detail."},{"Start":"01:48.520 ","End":"01:50.225","Text":"Let\u0027s just solve this."},{"Start":"01:50.225 ","End":"01:55.265","Text":"Just assume that these matrices work and that they take you in the right sense,"},{"Start":"01:55.265 ","End":"01:58.480","Text":"sense meaning clockwise or counterclockwise."},{"Start":"01:58.480 ","End":"02:04.220","Text":"We take the matrix for rotation about the z-axis."},{"Start":"02:04.220 ","End":"02:08.685","Text":"It\u0027s just off the screen, this 1."},{"Start":"02:08.685 ","End":"02:14.750","Text":"Just substitute Theta equals 90 degrees or Pi over 2 radians,"},{"Start":"02:14.750 ","End":"02:16.310","Text":"get cosine minus sine,"},{"Start":"02:16.310 ","End":"02:17.600","Text":"sine cosine here,"},{"Start":"02:17.600 ","End":"02:19.720","Text":"and 0s and 1."},{"Start":"02:19.720 ","End":"02:22.850","Text":"Cosine of 90 degrees is 0,"},{"Start":"02:22.850 ","End":"02:24.120","Text":"sine of 90 degrees is 1,"},{"Start":"02:24.120 ","End":"02:26.540","Text":"so this is the matrix we get."},{"Start":"02:26.540 ","End":"02:29.985","Text":"Then we want to apply it, in part b,"},{"Start":"02:29.985 ","End":"02:32.940","Text":"to the given vector 1,"},{"Start":"02:32.940 ","End":"02:35.015","Text":"0, 0, in column form."},{"Start":"02:35.015 ","End":"02:36.875","Text":"If you multiply out,"},{"Start":"02:36.875 ","End":"02:39.170","Text":"because this is a column vector with 1s and 0s,"},{"Start":"02:39.170 ","End":"02:43.400","Text":"the 1 means we take this column plus 0 times this column,"},{"Start":"02:43.400 ","End":"02:44.955","Text":"plus 0 times this 
column."},{"Start":"02:44.955 ","End":"02:46.395","Text":"It\u0027s just this column,"},{"Start":"02:46.395 ","End":"02:48.095","Text":"0, 1, 0,"},{"Start":"02:48.095 ","End":"02:51.050","Text":"and in row form 0, 1, 0."},{"Start":"02:51.050 ","End":"02:53.710","Text":"Here\u0027s the diagram."},{"Start":"02:53.710 ","End":"02:55.410","Text":"We start with 1, 0,"},{"Start":"02:55.410 ","End":"03:00.530","Text":"0 and it rotates counterclockwise and we\u0027re looking z from"},{"Start":"03:00.530 ","End":"03:06.565","Text":"above and takes us to the unit vector in the y-direction, which is this."},{"Start":"03:06.565 ","End":"03:13.050","Text":"Now a remark, we\u0027re in the section on orthogonal transformations."},{"Start":"03:13.050 ","End":"03:16.655","Text":"As you\u0027d expect this transformation,"},{"Start":"03:16.655 ","End":"03:19.400","Text":"rotation about the z-axis,"},{"Start":"03:19.400 ","End":"03:22.490","Text":"say, is an orthogonal transformation."},{"Start":"03:22.490 ","End":"03:24.064","Text":"Let\u0027s check."},{"Start":"03:24.064 ","End":"03:25.745","Text":"Let\u0027s look at this matrix."},{"Start":"03:25.745 ","End":"03:29.040","Text":"I claim this is an orthogonal matrix."},{"Start":"03:29.170 ","End":"03:32.750","Text":"If we take the columns of the matrix,"},{"Start":"03:32.750 ","End":"03:35.255","Text":"they form an orthonormal set,"},{"Start":"03:35.255 ","End":"03:37.670","Text":"meaning each has norm 1 and the dot product"},{"Start":"03:37.670 ","End":"03:41.165","Text":"of any 2 with each other, different ones is 0."},{"Start":"03:41.165 ","End":"03:45.320","Text":"If the matrix is orthogonal,"},{"Start":"03:45.320 ","End":"03:48.080","Text":"then the transformation is orthogonal."},{"Start":"03:48.080 ","End":"03:50.285","Text":"We had a theorem like that."},{"Start":"03:50.285 ","End":"03:53.780","Text":"Of course, this is not just true about the z-axis."},{"Start":"03:53.780 ","End":"03:59.745","Text":"It\u0027s going to be 
true for the x and y-axis also,"},{"Start":"03:59.745 ","End":"04:03.550","Text":"and in fact, rotation through any axis."},{"Start":"04:03.550 ","End":"04:06.290","Text":"I should have said axis through the origin."},{"Start":"04:06.290 ","End":"04:08.375","Text":"The reason it\u0027s orthogonal,"},{"Start":"04:08.375 ","End":"04:13.070","Text":"intuitively, is that the distances are preserved."},{"Start":"04:13.070 ","End":"04:15.950","Text":"Rotation is a rigid transformation."},{"Start":"04:15.950 ","End":"04:18.710","Text":"The distance of every point to the origin stays the same."},{"Start":"04:18.710 ","End":"04:20.570","Text":"It preserves the norm,"},{"Start":"04:20.570 ","End":"04:22.300","Text":"and so it\u0027s orthogonal."},{"Start":"04:22.300 ","End":"04:27.090","Text":"Anyway, that\u0027s enough for 3D transformations,"},{"Start":"04:27.090 ","End":"04:29.980","Text":"and that ends this clip."}],"ID":27127},{"Watched":false,"Name":"Exercise 1","Duration":"3m 24s","ChapterTopicVideoID":26225,"CourseChapterTopicPlaylistID":253226,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.020","Text":"In this exercise, we\u0027re going to prove a theorem."},{"Start":"00:04.020 ","End":"00:10.920","Text":"Let T be a linear transformation from R^n to R^m."},{"Start":"00:10.920 ","End":"00:15.735","Text":"The theorem claims that T is orthogonal if and only if"},{"Start":"00:15.735 ","End":"00:20.490","Text":"the norm of T of u is the same as the norm of u for every u."},{"Start":"00:20.490 ","End":"00:24.930","Text":"In other words, if and only if T preserves the norm."},{"Start":"00:24.930 ","End":"00:27.825","Text":"We\u0027ll start with the only if part,"},{"Start":"00:27.825 ","End":"00:31.185","Text":"which means that this implies this."},{"Start":"00:31.185 
","End":"00:35.115","Text":"We take it that T is orthogonal,"},{"Start":"00:35.115 ","End":"00:37.995","Text":"and then we\u0027ll have to show that it preserves the norm."},{"Start":"00:37.995 ","End":"00:41.505","Text":"Now orthogonal, means that it preserves the dot-product."},{"Start":"00:41.505 ","End":"00:49.125","Text":"That T of u dot T of v is the same as u dot v for all u and v. Let u belong to RN,"},{"Start":"00:49.125 ","End":"00:50.700","Text":"that\u0027s the u here,"},{"Start":"00:50.700 ","End":"00:55.865","Text":"we need to show that the norm of T of u is the same as the norm of u."},{"Start":"00:55.865 ","End":"01:00.830","Text":"The norm of v is the square root of the dot-product,"},{"Start":"01:00.830 ","End":"01:02.820","Text":"write it this way, you can write it this way,"},{"Start":"01:02.820 ","End":"01:07.080","Text":"so it\u0027s a square root of the dot product of the vector with itself."},{"Start":"01:07.330 ","End":"01:14.540","Text":"Norm of T of u is the square root of T of u dot T of u."},{"Start":"01:14.540 ","End":"01:18.530","Text":"But T preserves the scalar product."},{"Start":"01:18.530 ","End":"01:20.210","Text":"Under the square root,"},{"Start":"01:20.210 ","End":"01:23.285","Text":"it\u0027s going to be the same as u dot u."},{"Start":"01:23.285 ","End":"01:25.505","Text":"This, according to this definition,"},{"Start":"01:25.505 ","End":"01:27.680","Text":"is the norm of u."},{"Start":"01:27.680 ","End":"01:31.145","Text":"This equals this and so T preserves the norm."},{"Start":"01:31.145 ","End":"01:35.585","Text":"Note that the key step in doing this only if"},{"Start":"01:35.585 ","End":"01:41.270","Text":"was showing that the norm can be expressed in terms of the scalar product."},{"Start":"01:41.270 ","End":"01:43.175","Text":"Now, for the other direction,"},{"Start":"01:43.175 ","End":"01:45.020","Text":"for the if, we\u0027re going to do the opposite."},{"Start":"01:45.020 ","End":"01:50.245","Text":"We\u0027re 
going to show that the scalar product can be expressed in terms of the norm."},{"Start":"01:50.245 ","End":"01:54.000","Text":"Assume that T preserves the norm,"},{"Start":"01:54.000 ","End":"01:56.005","Text":"in other words, this holds."},{"Start":"01:56.005 ","End":"01:57.590","Text":"Our trick, like I said,"},{"Start":"01:57.590 ","End":"02:00.560","Text":"is to express the dot product in terms of the norm."},{"Start":"02:00.560 ","End":"02:03.290","Text":"From an earlier part of the chapter,"},{"Start":"02:03.290 ","End":"02:07.520","Text":"we showed that the dot product is equal to,"},{"Start":"02:07.520 ","End":"02:12.545","Text":"well, this expression in terms of the norm of the sum and the norm of the difference."},{"Start":"02:12.545 ","End":"02:15.200","Text":"If you think of just high-school algebra,"},{"Start":"02:15.200 ","End":"02:18.820","Text":"a plus b squared is a squared plus 2ab plus b squared,"},{"Start":"02:18.820 ","End":"02:21.905","Text":"and a minus b squared is a squared minus 2ab plus b squared."},{"Start":"02:21.905 ","End":"02:26.075","Text":"If you subtract them, you get 4ab, and divided by 4, it\u0027s ab."},{"Start":"02:26.075 ","End":"02:29.195","Text":"That\u0027s sort of a mnemonic for why this might be true."},{"Start":"02:29.195 ","End":"02:31.455","Text":"Anyway, we have that."},{"Start":"02:31.455 ","End":"02:34.640","Text":"T of u dot T of v,"},{"Start":"02:34.640 ","End":"02:36.200","Text":"and here u and v are arbitrary."},{"Start":"02:36.200 ","End":"02:41.480","Text":"Yeah, so T of u dot T of v is equal to, from this formula,"},{"Start":"02:41.480 ","End":"02:44.765","Text":"just replacing u and v with T of u and T of v,"},{"Start":"02:44.765 ","End":"02:47.125","Text":"we get this expression."},{"Start":"02:47.125 ","End":"02:49.775","Text":"Then because T is linear,"},{"Start":"02:49.775 ","End":"02:53.800","Text":"T of u plus T of v is T of u plus v,"},{"Start":"02:53.800 ","End":"02:55.500","Text":"so we have this."},{"Start":"02:55.500 
","End":"03:00.365","Text":"Now we can use the fact that T preserves the norm for any vector,"},{"Start":"03:00.365 ","End":"03:02.630","Text":"doesn\u0027t have to be just u, it could be u plus v,"},{"Start":"03:02.630 ","End":"03:07.890","Text":"could be u minus v. We get this expression."},{"Start":"03:07.890 ","End":"03:12.320","Text":"Just throw out the T basically, we have this."},{"Start":"03:12.320 ","End":"03:18.230","Text":"Then once again we use this formula and this is u dot v. That completes"},{"Start":"03:18.230 ","End":"03:20.690","Text":"the other direction we did the only if and"},{"Start":"03:20.690 ","End":"03:24.930","Text":"the if and so that concludes this exercise."}],"ID":27129},{"Watched":false,"Name":"Exercise 2","Duration":"2m 44s","ChapterTopicVideoID":26226,"CourseChapterTopicPlaylistID":253226,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:05.910","Text":"In this exercise, we\u0027re given an orthogonal transformation T from R^n to R^n."},{"Start":"00:05.910 ","End":"00:08.010","Text":"We have to prove, first of all,"},{"Start":"00:08.010 ","End":"00:11.640","Text":"that T is an isomorphism."},{"Start":"00:11.640 ","End":"00:14.685","Text":"After we prove that, we know that it has an inverse."},{"Start":"00:14.685 ","End":"00:20.130","Text":"Part b, we have to show that this inverse T to the minus 1 is orthogonal also."},{"Start":"00:20.130 ","End":"00:27.780","Text":"Now, T is an isomorphism if and only if the kernel of T is just the 0."},{"Start":"00:27.780 ","End":"00:29.130","Text":"Not going to prove this,"},{"Start":"00:29.130 ","End":"00:31.050","Text":"but it\u0027s a known result."},{"Start":"00:31.050 ","End":"00:37.155","Text":"It holds for finite dimensional vector spaces like R^n."},{"Start":"00:37.155 ","End":"00:41.325","Text":"Actually 
doesn\u0027t hold for infinite dimensional spaces necessarily."},{"Start":"00:41.325 ","End":"00:45.860","Text":"Now, let\u0027s show that the kernel of T is 0."},{"Start":"00:45.860 ","End":"00:48.950","Text":"We actually just need the 1 direction, the if."},{"Start":"00:48.950 ","End":"00:50.765","Text":"If kernel T is 0,"},{"Start":"00:50.765 ","End":"00:52.220","Text":"then T is an isomorphism."},{"Start":"00:52.220 ","End":"00:54.335","Text":"So let u belong to the kernel."},{"Start":"00:54.335 ","End":"00:59.360","Text":"That means that T of u is the 0 vector."},{"Start":"00:59.360 ","End":"01:02.165","Text":"Now apply the norm to both sides."},{"Start":"01:02.165 ","End":"01:06.890","Text":"Norm of T of u is the norm of 0, which is 0."},{"Start":"01:06.890 ","End":"01:09.515","Text":"But because T is orthogonal,"},{"Start":"01:09.515 ","End":"01:14.060","Text":"the norm of T of u is the same as the norm of u, so these 2 are equal,"},{"Start":"01:14.060 ","End":"01:15.800","Text":"so the norm of u is 0."},{"Start":"01:15.800 ","End":"01:19.940","Text":"The only vector whose norm is 0 is the 0 vector,"},{"Start":"01:19.940 ","End":"01:21.425","Text":"so u is 0."},{"Start":"01:21.425 ","End":"01:22.760","Text":"U belongs to the kernel"},{"Start":"01:22.760 ","End":"01:24.610","Text":"means u is 0."},{"Start":"01:24.610 ","End":"01:28.545","Text":"That means that the kernel contains just 0,"},{"Start":"01:28.545 ","End":"01:31.990","Text":"and so T is an isomorphism."},{"Start":"01:32.060 ","End":"01:34.980","Text":"That concludes Part a."},{"Start":"01:34.980 ","End":"01:39.085","Text":"Now Part b, we have to show that the inverse is orthogonal also."},{"Start":"01:39.085 ","End":"01:40.895","Text":"Like I said before,"},{"Start":"01:40.895 ","End":"01:44.150","Text":"T inverse exists because T is an isomorphism."},{"Start":"01:44.150 ","End":"01:46.370","Text":"We also know that the inverse of"},{"Start":"01:46.370 ","End":"01:50.065","Text":"a linear 
transformation is a linear transformation, goes without saying."},{"Start":"01:50.065 ","End":"01:53.570","Text":"We need to show that T inverse is orthogonal,"},{"Start":"01:53.570 ","End":"01:57.685","Text":"and we\u0027ll do that by showing that it preserves the dot product."},{"Start":"01:57.685 ","End":"02:00.240","Text":"Take any 2 vectors in R^n,"},{"Start":"02:00.240 ","End":"02:01.890","Text":"we have to show,"},{"Start":"02:01.890 ","End":"02:06.050","Text":"and if I apply T-inverse to u and v and then take the dot product is the same as"},{"Start":"02:06.050 ","End":"02:12.135","Text":"taking the dot product of u and v. U. v is,"},{"Start":"02:12.135 ","End":"02:15.255","Text":"I can write u is T of T inverse of u."},{"Start":"02:15.255 ","End":"02:21.715","Text":"Similarly v is T of T inverse of v. Because T is orthogonal."},{"Start":"02:21.715 ","End":"02:27.655","Text":"T of something, dot T of something is equal to the dot of those to something."},{"Start":"02:27.655 ","End":"02:29.450","Text":"We have this equals this."},{"Start":"02:29.450 ","End":"02:32.240","Text":"That actually concludes Part b,"},{"Start":"02:32.240 ","End":"02:35.240","Text":"because we\u0027ve shown that for any u and v,"},{"Start":"02:35.240 ","End":"02:38.990","Text":"T inverse u. T inverse v equals u. 
v."},{"Start":"02:38.990 ","End":"02:43.860","Text":"T inverse preserves the scalar product and so it\u0027s orthogonal."}],"ID":27130},{"Watched":false,"Name":"Exercise 3","Duration":"3m 17s","ChapterTopicVideoID":26227,"CourseChapterTopicPlaylistID":253226,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.590","Text":"In this exercise, A is an orthogonal matrix of order n,"},{"Start":"00:04.590 ","End":"00:07.875","Text":"and we define linear transformation T from"},{"Start":"00:07.875 ","End":"00:14.340","Text":"R_n to R_n by T of u is matrix A times vector u."},{"Start":"00:14.340 ","End":"00:18.390","Text":"We have to show that T is an orthogonal transformation."},{"Start":"00:18.390 ","End":"00:20.249","Text":"B is a converse."},{"Start":"00:20.249 ","End":"00:23.850","Text":"We start with an orthogonal transformation and show that there is"},{"Start":"00:23.850 ","End":"00:28.605","Text":"an A such that T of u is Au for all u."},{"Start":"00:28.605 ","End":"00:32.595","Text":"This A is orthogonal solution."},{"Start":"00:32.595 ","End":"00:35.340","Text":"Now, since A is orthogonal,"},{"Start":"00:35.340 ","End":"00:39.985","Text":"it has the property that A transpose times A is the identity."},{"Start":"00:39.985 ","End":"00:46.770","Text":"The dot product can be expressed using matrix multiplication for column and row vectors."},{"Start":"00:46.770 ","End":"00:50.200","Text":"A vector u dot product with the vector v,"},{"Start":"00:50.200 ","End":"00:52.685","Text":"let\u0027s assume u and v are column vectors,"},{"Start":"00:52.685 ","End":"00:54.770","Text":"is the row vector u."},{"Start":"00:54.770 ","End":"00:58.100","Text":"In other words, u transpose times,"},{"Start":"00:58.100 ","End":"01:06.160","Text":"this is matrix multiplication times v. 
T of u dot product T of v,"},{"Start":"01:06.160 ","End":"01:07.550","Text":"perhaps I skipped a step."},{"Start":"01:07.550 ","End":"01:09.580","Text":"This should say Au."},{"Start":"01:09.580 ","End":"01:14.110","Text":"Av, because T of u is Au and T of v is Av."},{"Start":"01:14.110 ","End":"01:17.330","Text":"Then using this result for the dot product,"},{"Start":"01:17.330 ","End":"01:21.830","Text":"we can transpose the first to make it a row vector times a column vector,"},{"Start":"01:21.830 ","End":"01:23.195","Text":"and that gives us a scalar."},{"Start":"01:23.195 ","End":"01:26.720","Text":"Now we can use the associative law to rearrange and"},{"Start":"01:26.720 ","End":"01:29.060","Text":"also the rule that the transpose of"},{"Start":"01:29.060 ","End":"01:32.800","Text":"a product is a product of the transpose but in reverse order."},{"Start":"01:32.800 ","End":"01:34.545","Text":"We get this."},{"Start":"01:34.545 ","End":"01:38.150","Text":"Then A transpose times A is the identity."},{"Start":"01:38.150 ","End":"01:39.950","Text":"So we can throw that bit out."},{"Start":"01:39.950 ","End":"01:45.275","Text":"It\u0027s u transpose times v. We know that this is u.v."},{"Start":"01:45.275 ","End":"01:49.585","Text":"We\u0027ve just shown that T of u.T of v is u.v,"},{"Start":"01:49.585 ","End":"01:54.735","Text":"which means that T preserves dot product and so it\u0027s orthogonal."},{"Start":"01:54.735 ","End":"01:58.370","Text":"Now on to Part b where we know that T is"},{"Start":"01:58.370 ","End":"02:01.865","Text":"an orthogonal transformation and we\u0027re looking for A."},{"Start":"02:01.865 ","End":"02:10.140","Text":"Now, every linear transformation has a form T of u equals matrix A times u,"},{"Start":"02:10.140 ","End":"02:16.475","Text":"where A is the matrix representation of T with respect to the standard basis"},{"Start":"02:16.475 ","End":"02:23.330","Text":"E. 
The way we find A is just to take a matrix whose column vectors are T of e_1,"},{"Start":"02:23.330 ","End":"02:26.515","Text":"T of e_2, so on up to T of e_n."},{"Start":"02:26.515 ","End":"02:28.560","Text":"There\u0027s the columns of A."},{"Start":"02:28.560 ","End":"02:33.605","Text":"Now we just need to show that these columns are an orthonormal set that"},{"Start":"02:33.605 ","End":"02:36.680","Text":"each has a norm of 1 or dot product"},{"Start":"02:36.680 ","End":"02:40.744","Text":"with itself is 1 and the dot product of any 2 different ones is 0."},{"Start":"02:40.744 ","End":"02:44.940","Text":"Let\u0027s see. T of e_i.T of e_j,"},{"Start":"02:44.940 ","End":"02:48.720","Text":"because T is orthogonal it preserved dot product."},{"Start":"02:48.720 ","End":"02:50.790","Text":"This is e_i.e_j."},{"Start":"02:50.790 ","End":"02:55.040","Text":"We know that the standard basis is an orthonormal set."},{"Start":"02:55.040 ","End":"02:57.170","Text":"E_i.e_j is 1 or 0,"},{"Start":"02:57.170 ","End":"03:00.695","Text":"according to whether or not i equals j."},{"Start":"03:00.695 ","End":"03:05.010","Text":"That shows that the set T of e_1,"},{"Start":"03:05.010 ","End":"03:09.040","Text":"T of e_2 up to T of e_n is an orthonormal set,"},{"Start":"03:09.040 ","End":"03:11.150","Text":"because any 2 different ones,"},{"Start":"03:11.150 ","End":"03:12.350","Text":"dot product is 0,"},{"Start":"03:12.350 ","End":"03:15.065","Text":"any 2 of the same gives us 1."},{"Start":"03:15.065 ","End":"03:18.750","Text":"That concludes Part b and we\u0027re done."}],"ID":27131},{"Watched":false,"Name":"Exercise 4","Duration":"1m 58s","ChapterTopicVideoID":26228,"CourseChapterTopicPlaylistID":253226,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.860","Text":"In this exercise, we have to 
show that the only possible eigenvalues of"},{"Start":"00:04.860 ","End":"00:11.835","Text":"an orthogonal transformation are 1 and minus 1 and similarly for orthogonal matrix."},{"Start":"00:11.835 ","End":"00:17.145","Text":"Let\u0027s start with part a and let T be an orthogonal transformation."},{"Start":"00:17.145 ","End":"00:19.860","Text":"Let\u0027s suppose that Lambda is an eigenvalue"},{"Start":"00:19.860 ","End":"00:22.680","Text":"of T. You want to show that Lambda is plus or minus 1,"},{"Start":"00:22.680 ","End":"00:24.810","Text":"because it\u0027s an eigenvalue,"},{"Start":"00:24.810 ","End":"00:34.410","Text":"Tv equals Lambda v for some non 0 vector v and so T v dot T v is Lambda v dot Lambda v,"},{"Start":"00:34.410 ","End":"00:38.580","Text":"which is Lambda squared v dot v. On the other hand,"},{"Start":"00:38.580 ","End":"00:40.770","Text":"T v dot T v,"},{"Start":"00:40.770 ","End":"00:45.945","Text":"is v dot v because T is orthogonal so it preserves the dot-product."},{"Start":"00:45.945 ","End":"00:50.840","Text":"We can compare these 2 equalities and get that Lambda squared v"},{"Start":"00:50.840 ","End":"00:56.435","Text":"dot v is 1v dot v. 
V is a non 0 vector,"},{"Start":"00:56.435 ","End":"00:58.700","Text":"so its norm or norm squared,"},{"Start":"00:58.700 ","End":"01:01.145","Text":"which is v dot v is not 0."},{"Start":"01:01.145 ","End":"01:06.485","Text":"If it\u0027s not 0, we can divide both sides by it and get Lambda squared equals 1."},{"Start":"01:06.485 ","End":"01:10.650","Text":"Lambda is plus or minus 1 and that\u0027s part a."},{"Start":"01:10.650 ","End":"01:14.070","Text":"In part b, we\u0027ll let A be an orthogonal n by"},{"Start":"01:14.070 ","End":"01:20.090","Text":"n matrix and we\u0027re going to define now a transformation T from R^n"},{"Start":"01:20.090 ","End":"01:24.900","Text":"to R^n by T of u is A times u and we know"},{"Start":"01:24.900 ","End":"01:30.530","Text":"that matrix A represents T with respect to the standard basis of R^n."},{"Start":"01:30.530 ","End":"01:32.690","Text":"Doesn\u0027t matter that it\u0027s the standard basis."},{"Start":"01:32.690 ","End":"01:35.360","Text":"The point is that with respect to some basis A"},{"Start":"01:35.360 ","End":"01:38.629","Text":"represents T and we know that in that case,"},{"Start":"01:38.629 ","End":"01:45.650","Text":"the transformation and the matrix representing it have the same eigenvalues and that"},{"Start":"01:45.650 ","End":"01:49.460","Text":"means that the eigenvalues are plus or minus 1 because T"},{"Start":"01:49.460 ","End":"01:53.390","Text":"has eigenvalues plus or minus 1 from part a and A has the same,"},{"Start":"01:53.390 ","End":"01:55.550","Text":"so it\u0027s also plus or minus 1."},{"Start":"01:55.550 ","End":"01:58.710","Text":"That concludes this exercise."}],"ID":27132},{"Watched":false,"Name":"Exercise 5","Duration":"3m 
47s","ChapterTopicVideoID":26229,"CourseChapterTopicPlaylistID":253226,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.860","Text":"In this exercise, we\u0027re going to prove that the composition of"},{"Start":"00:04.860 ","End":"00:09.600","Text":"orthogonal transformations is also an orthogonal transformation."},{"Start":"00:09.600 ","End":"00:12.600","Text":"In other words, if T_1, T_2,"},{"Start":"00:12.600 ","End":"00:15.990","Text":"and so on up to T_k are orthogonal,"},{"Start":"00:15.990 ","End":"00:18.165","Text":"then so is the composition."},{"Start":"00:18.165 ","End":"00:22.470","Text":"We know that the composition of linear transformation is a linear transformation."},{"Start":"00:22.470 ","End":"00:24.630","Text":"If each of these is from R^n to R^n,"},{"Start":"00:24.630 ","End":"00:26.715","Text":"then the composition is from R^n to R^n."},{"Start":"00:26.715 ","End":"00:29.910","Text":"The only question is whether it\u0027s orthogonal."},{"Start":"00:29.910 ","End":"00:34.995","Text":"By the way, we don\u0027t always bother writing the little circle for composition."},{"Start":"00:34.995 ","End":"00:39.030","Text":"Sometimes you write it just as a dot or sometimes just like juxtaposition,"},{"Start":"00:39.030 ","End":"00:41.290","Text":"meaning without any symbol at all."},{"Start":"00:41.290 ","End":"00:47.210","Text":"Anyway, remember that T is orthogonal if and only if it preserves the norm."},{"Start":"00:47.210 ","End":"00:51.610","Text":"In other words, the norm of T of u is the same as the norm of u."},{"Start":"00:51.610 ","End":"00:57.845","Text":"We\u0027ll do the proof by induction on k and start with the base case k equals 2."},{"Start":"00:57.845 ","End":"01:02.390","Text":"k is 2 then we want to show that if T_1 and T_2 are 
orthogonal,"},{"Start":"01:02.390 ","End":"01:05.075","Text":"so is T_1 composed T_2."},{"Start":"01:05.075 ","End":"01:07.985","Text":"T_1 composed T_2 applied to u,"},{"Start":"01:07.985 ","End":"01:11.195","Text":"is T_1 of T_2 of u,"},{"Start":"01:11.195 ","End":"01:12.680","Text":"take the norm of both."},{"Start":"01:12.680 ","End":"01:17.330","Text":"Now this is equal to T_1 of something,"},{"Start":"01:17.330 ","End":"01:19.900","Text":"and because T_1 is orthogonal,"},{"Start":"01:19.900 ","End":"01:23.675","Text":"the norm of T_1 of something is the same as the norm of that something,"},{"Start":"01:23.675 ","End":"01:25.160","Text":"which is T_2 of u."},{"Start":"01:25.160 ","End":"01:27.860","Text":"Then do the same trick again with T_2,"},{"Start":"01:27.860 ","End":"01:33.370","Text":"the norm of T_2 of u is equal to the norm of u because T_2 is orthogonal."},{"Start":"01:33.370 ","End":"01:36.190","Text":"That\u0027s the base case where k is 2."},{"Start":"01:36.190 ","End":"01:40.790","Text":"Now let\u0027s do the induction step from k to k plus 1."},{"Start":"01:40.790 ","End":"01:45.205","Text":"If we have T_1 composed T_2, and so on,"},{"Start":"01:45.205 ","End":"01:48.105","Text":"up to composed T_k plus 1,"},{"Start":"01:48.105 ","End":"01:49.460","Text":"we want to show that\u0027s orthogonal,"},{"Start":"01:49.460 ","End":"01:53.330","Text":"but we know that the composition of k of them is orthogonal."},{"Start":"01:53.330 ","End":"01:57.995","Text":"What we can do is group it as T_1 up to T_k,"},{"Start":"01:57.995 ","End":"02:03.500","Text":"which will be orthogonal by induction composed with T_k plus 1,"},{"Start":"02:03.500 ","End":"02:05.225","Text":"which is given to be orthogonal."},{"Start":"02:05.225 ","End":"02:07.055","Text":"Now if you look at it this way then,"},{"Start":"02:07.055 ","End":"02:12.090","Text":"you have a composition of 2 orthogonal transformations."},{"Start":"02:12.090 ","End":"02:16.695","Text":"So this will be 
orthogonal."},{"Start":"02:16.695 ","End":"02:18.860","Text":"That concludes the proof,"},{"Start":"02:18.860 ","End":"02:23.150","Text":"but I want to show you an alternative proof, alternative solution."},{"Start":"02:23.150 ","End":"02:27.740","Text":"If a linear transformation T is represented by a matrix A, in general,"},{"Start":"02:27.740 ","End":"02:30.415","Text":"meaning that T of u is Au,"},{"Start":"02:30.415 ","End":"02:35.780","Text":"then T is an orthogonal transformation if and only if A is an orthogonal matrix."},{"Start":"02:35.780 ","End":"02:37.205","Text":"We showed this."},{"Start":"02:37.205 ","End":"02:42.645","Text":"Let\u0027s say that we have now T_1 up to T_k,"},{"Start":"02:42.645 ","End":"02:46.320","Text":"and each of these T_i\u0027s is represented by A_i,"},{"Start":"02:46.320 ","End":"02:48.210","Text":"i goes from 1 to k."},{"Start":"02:48.210 ","End":"02:51.205","Text":"Now recall the following result,"},{"Start":"02:51.205 ","End":"02:54.980","Text":"that in general, when you compose transformations,"},{"Start":"02:54.980 ","End":"02:57.980","Text":"it\u0027s the same as multiplying the matrices."},{"Start":"02:57.980 ","End":"02:59.120","Text":"You can pause and read this,"},{"Start":"02:59.120 ","End":"03:00.290","Text":"but basically what it says,"},{"Start":"03:00.290 ","End":"03:03.635","Text":"if you represent transformations by matrices,"},{"Start":"03:03.635 ","End":"03:09.390","Text":"then the composition of transformations corresponds to the product of matrices."},{"Start":"03:09.390 ","End":"03:13.100","Text":"What we can say is that the composition T_1,"},{"Start":"03:13.100 ","End":"03:17.120","Text":"T_2 up to T_k is represented by the matrix product A_1,"},{"Start":"03:17.120 ","End":"03:19.990","Text":"A_2, and so on times A_k."},{"Start":"03:19.990 ","End":"03:24.090","Text":"T_1 composed T_2 composed T_k of u, is A_1,"},{"Start":"03:24.090 ","End":"03:28.505","Text":"A_2, up to A_k times u for all 
u."},{"Start":"03:28.505 ","End":"03:32.915","Text":"Now we know that the product of orthogonal matrices is orthogonal."},{"Start":"03:32.915 ","End":"03:36.700","Text":"A_1, A_2 up to A_k is an orthogonal matrix,"},{"Start":"03:36.700 ","End":"03:40.875","Text":"and it represents T_1 composed T_2 up to T_k."},{"Start":"03:40.875 ","End":"03:43.665","Text":"That\u0027s going to be an orthogonal transformation."},{"Start":"03:43.665 ","End":"03:48.430","Text":"That concludes the alternative proof. Now we\u0027re done."}],"ID":27133},{"Watched":false,"Name":"Exercise 6","Duration":"4m ","ChapterTopicVideoID":26230,"CourseChapterTopicPlaylistID":253226,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:02.470","Text":"In this exercise, as usual,"},{"Start":"00:02.470 ","End":"00:05.950","Text":"we have a transformation T from R^n to R^n."},{"Start":"00:05.950 ","End":"00:07.825","Text":"Let me paraphrase what it says."},{"Start":"00:07.825 ","End":"00:11.800","Text":"We have to prove that t is orthogonal if and"},{"Start":"00:11.800 ","End":"00:16.240","Text":"only if it takes an orthonormal basis to an orthonormal basis."},{"Start":"00:16.240 ","End":"00:19.825","Text":"Let me remind you what is an orthonormal set,"},{"Start":"00:19.825 ","End":"00:21.940","Text":"doesn\u0027t have to be these u_1 up to u_n,"},{"Start":"00:21.940 ","End":"00:25.955","Text":"but any set of vectors is called orthonormal"},{"Start":"00:25.955 ","End":"00:30.780","Text":"if the dot product of any one with itself is 1,"},{"Start":"00:30.780 ","End":"00:34.365","Text":"but the dot product of 2 different ones is 0."},{"Start":"00:34.365 ","End":"00:36.360","Text":"We have 2 parts in the if and only if."},{"Start":"00:36.360 ","End":"00:38.665","Text":"Let\u0027s first of all do the only 
if."},{"Start":"00:38.665 ","End":"00:44.545","Text":"That means that we start off with t being orthogonal and u_1 to u_n are orthonormal,"},{"Start":"00:44.545 ","End":"00:48.445","Text":"and we have to prove that tu_1 up to tu_n is orthonormal."},{"Start":"00:48.445 ","End":"00:49.835","Text":"It\u0027s just what I said here."},{"Start":"00:49.835 ","End":"00:52.835","Text":"Now, t preserves the dot product."},{"Start":"00:52.835 ","End":"00:57.690","Text":"Tu_i.tu_j is the same as u_i.u_j,"},{"Start":"00:57.690 ","End":"01:01.380","Text":"which is 1 or 0 accordingly."},{"Start":"01:01.380 ","End":"01:04.990","Text":"That\u0027s it for the only if, that\u0027s what we had to show."},{"Start":"01:04.990 ","End":"01:07.145","Text":"Now, let\u0027s do the if part,"},{"Start":"01:07.145 ","End":"01:11.660","Text":"where we assume that tu_1 to tu_n is orthonormal,"},{"Start":"01:11.660 ","End":"01:15.245","Text":"and we show that t is an orthogonal transformation."},{"Start":"01:15.245 ","End":"01:20.460","Text":"What we have to show is that t preserves the dot product and"},{"Start":"01:20.460 ","End":"01:26.615","Text":"is that tx.ty equals x.y for all x and y."},{"Start":"01:26.615 ","End":"01:31.430","Text":"These are vectors even though the letters x and y are usually reserved for scalars,"},{"Start":"01:31.430 ","End":"01:33.230","Text":"never mind, we should have called them v and w."},{"Start":"01:33.230 ","End":"01:38.570","Text":"Now, we know that tu_i.tu_j is 1 or 0"},{"Start":"01:38.570 ","End":"01:42.785","Text":"according to whether or not i equals j, the same as u_i.u_j"},{"Start":"01:42.785 ","End":"01:47.060","Text":"because the tu_i are orthonormal given and the u_i,"},{"Start":"01:47.060 ","End":"01:49.805","Text":"u_j are orthonormal given."},{"Start":"01:49.805 ","End":"01:54.065","Text":"Now, we can express x and y in terms of the basis u_i."},{"Start":"01:54.065 ","End":"01:55.970","Text":"Let\u0027s say that x is a_1u_1,"},{"Start":"01:55.970 ","End":"01:58.325","Text":"and 
so on up to a_nu_n,"},{"Start":"01:58.325 ","End":"02:02.335","Text":"and that y is b_1u_1 up to b_nu_n."},{"Start":"02:02.335 ","End":"02:09.260","Text":"Then the dot product of x with y is going to have squared terms."},{"Start":"02:09.260 ","End":"02:12.350","Text":"We take a_1u_1 with each of these,"},{"Start":"02:12.350 ","End":"02:14.630","Text":"and that\u0027s the first row, a_1u_1 b_1u_1,"},{"Start":"02:14.630 ","End":"02:17.120","Text":"a_1u_1, b_2u_2,"},{"Start":"02:17.120 ","End":"02:22.640","Text":"and so on, then we take the a_2u_2 with each of those and finally the a_nu_n,"},{"Start":"02:22.640 ","End":"02:25.590","Text":"with each of the terms here."},{"Start":"02:25.590 ","End":"02:27.035","Text":"Now, it looks a mess,"},{"Start":"02:27.035 ","End":"02:32.190","Text":"but I claim that only the diagonal makes a difference."},{"Start":"02:32.190 ","End":"02:33.985","Text":"Let\u0027s take the first 1,"},{"Start":"02:33.985 ","End":"02:37.445","Text":"a_1u_1.b_1u_1 is a_1 times b_1,"},{"Start":"02:37.445 ","End":"02:39.860","Text":"and then u_1.u_1 is 1."},{"Start":"02:39.860 ","End":"02:42.740","Text":"Here u_2.u_2 is also 1."},{"Start":"02:42.740 ","End":"02:44.285","Text":"U_n.u_n is 1."},{"Start":"02:44.285 ","End":"02:46.420","Text":"Here we just have a_n and b_n."},{"Start":"02:46.420 ","End":"02:51.270","Text":"Everything else comes out 0 because u_i.u_j is 0,"},{"Start":"02:51.270 ","End":"02:53.505","Text":"if i is not equal to j."},{"Start":"02:53.505 ","End":"03:02.425","Text":"Altogether, we just get the diagonal with the a_ib_i, which is a_1b_1 plus a_2b_2 up to a_nb_n."},{"Start":"03:02.425 ","End":"03:05.795","Text":"We can do the same thing instead of with x and y,"},{"Start":"03:05.795 ","End":"03:07.100","Text":"with tx and ty."},{"Start":"03:07.100 ","End":"03:12.245","Text":"We can write tx as a linear combination of tu_1,"},{"Start":"03:12.245 ","End":"03:15.680","Text":"tu_2, tu_n and ty also in terms of tu_1,"},{"Start":"03:15.680 
","End":"03:17.855","Text":"tu_2, tu_n, and again,"},{"Start":"03:17.855 ","End":"03:21.110","Text":"get this mess or whatever you call it,"},{"Start":"03:21.110 ","End":"03:25.035","Text":"each 1 of these dot with each 1 of these."},{"Start":"03:25.035 ","End":"03:33.455","Text":"Once again, that only the diagonals make a difference and the tu_i.tu_j is 1 or 0."},{"Start":"03:33.455 ","End":"03:37.540","Text":"We just get a_1b_1 plus a_2b_2,"},{"Start":"03:37.540 ","End":"03:40.485","Text":"just like we got up here."},{"Start":"03:40.485 ","End":"03:43.290","Text":"Look, x.y is this,"},{"Start":"03:43.290 ","End":"03:45.495","Text":"and tx.ty is this,"},{"Start":"03:45.495 ","End":"03:47.690","Text":"which is the same, maybe in different colors,"},{"Start":"03:47.690 ","End":"03:49.535","Text":"but they\u0027re the same thing."},{"Start":"03:49.535 ","End":"03:52.235","Text":"X.y equals tx.ty."},{"Start":"03:52.235 ","End":"03:56.430","Text":"That means that dot product is preserved."},{"Start":"03:56.430 ","End":"03:58.245","Text":"That\u0027s what we have to show,"},{"Start":"03:58.245 ","End":"04:00.880","Text":"and so we\u0027re done."}],"ID":27134},{"Watched":false,"Name":"Exercise 7","Duration":"3m 47s","ChapterTopicVideoID":26231,"CourseChapterTopicPlaylistID":253226,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.950","Text":"In this exercise, B is given to be an orthonormal basis of"},{"Start":"00:04.950 ","End":"00:10.050","Text":"R^n and T is a linear transformation from R^n to R^n,"},{"Start":"00:10.050 ","End":"00:17.355","Text":"and we let T sub B be the matrix representing T with respect to the basis B."},{"Start":"00:17.355 ","End":"00:21.510","Text":"We have to prove that T is an orthogonal transformation"},{"Start":"00:21.510 ","End":"00:26.175","Text":"if and only 
if T_B is an orthogonal matrix."},{"Start":"00:26.175 ","End":"00:28.740","Text":"We\u0027ve already proved a special case of this"},{"Start":"00:28.740 ","End":"00:32.130","Text":"for when the basis is the standard basis."},{"Start":"00:32.130 ","End":"00:34.200","Text":"We know that, for example,"},{"Start":"00:34.200 ","End":"00:39.825","Text":"that T_E is orthogonal if T is orthogonal and vice versa."},{"Start":"00:39.825 ","End":"00:42.210","Text":"That\u0027s proved the only if part."},{"Start":"00:42.210 ","End":"00:44.210","Text":"This is what we\u0027re given,"},{"Start":"00:44.210 ","End":"00:46.385","Text":"and this is what we\u0027re going to prove."},{"Start":"00:46.385 ","End":"00:50.930","Text":"T is given orthogonal, T_B will prove is orthogonal,"},{"Start":"00:50.930 ","End":"00:54.710","Text":"and because E and B, both orthonormal basis,"},{"Start":"00:54.710 ","End":"00:57.650","Text":"this because it\u0027s given and this is the standard basis is always"},{"Start":"00:57.650 ","End":"01:00.785","Text":"orthonormal to the change of basis matrix,"},{"Start":"01:00.785 ","End":"01:03.650","Text":"then BE is orthogonal."},{"Start":"01:03.650 ","End":"01:06.500","Text":"This we also proved in an exercise."},{"Start":"01:06.500 ","End":"01:11.390","Text":"Now we can write T_B as a product of 3 matrices."},{"Start":"01:11.390 ","End":"01:17.210","Text":"This is a standard formula for change of basis for a transformation."},{"Start":"01:17.210 ","End":"01:21.800","Text":"It\u0027s the product of 3 orthogonal matrices."},{"Start":"01:21.800 ","End":"01:25.580","Text":"I\u0027ll explain in a moment why, and we already showed"},{"Start":"01:25.580 ","End":"01:31.390","Text":"that the product of orthogonal matrices is orthogonal."},{"Start":"01:31.390 ","End":"01:33.875","Text":"Why are these orthogonal?"},{"Start":"01:33.875 ","End":"01:39.035","Text":"This 1 we just said here is orthogonal."},{"Start":"01:39.035 ","End":"01:41.420","Text":"This 1 is its 
inverse,"},{"Start":"01:41.420 ","End":"01:45.055","Text":"and the inverse of an orthogonal is also orthogonal."},{"Start":"01:45.055 ","End":"01:48.800","Text":"This part here is by this remark."},{"Start":"01:48.800 ","End":"01:54.160","Text":"But the previous exercise that we know that T_E,"},{"Start":"01:54.160 ","End":"01:55.685","Text":"where E is the standard basis,"},{"Start":"01:55.685 ","End":"01:59.240","Text":"is orthogonal because T is orthogonal."},{"Start":"01:59.240 ","End":"02:03.994","Text":"That concludes the explanation of why each of these 3 is orthogonal."},{"Start":"02:03.994 ","End":"02:08.160","Text":"T_B is orthogonal. I forgot this."},{"Start":"02:08.160 ","End":"02:13.790","Text":"This is just in writing what I said about why T_E is orthogonal."},{"Start":"02:13.790 ","End":"02:16.265","Text":"Now, we come to the if part."},{"Start":"02:16.265 ","End":"02:20.059","Text":"This time we\u0027re given that T_B is orthogonal"},{"Start":"02:20.059 ","End":"02:23.155","Text":"and we have to show that T is orthogonal."},{"Start":"02:23.155 ","End":"02:30.125","Text":"We can look at this T_B as a change of basis matrix from the basis B."},{"Start":"02:30.125 ","End":"02:36.290","Text":"Let\u0027s say it\u0027s u_1 to u_n, to the basis Tu_1 up to Tu_n,"},{"Start":"02:36.290 ","End":"02:38.045","Text":"call that B prime."},{"Start":"02:38.045 ","End":"02:41.555","Text":"Because in both cases we build the matrix the same way."},{"Start":"02:41.555 ","End":"02:45.350","Text":"We see what u_1 to u_n, go to,"},{"Start":"02:45.350 ","End":"02:48.605","Text":"and we write the coordinates of each of these"},{"Start":"02:48.605 ","End":"02:52.700","Text":"with respect to this basis as the columns of the matrix."},{"Start":"02:52.700 ","End":"03:00.905","Text":"We do that for the change of basis matrix and also for the representation matrix."},{"Start":"03:00.905 ","End":"03:06.900","Text":"What we said here can be written in shorthand as 
this."},{"Start":"03:06.900 ","End":"03:12.005","Text":"This is the change of basis matrix from B to B prime."},{"Start":"03:12.005 ","End":"03:16.490","Text":"B is orthonormal given and the change of basis matrix"},{"Start":"03:16.490 ","End":"03:19.195","Text":"from B to B prime is orthogonal,"},{"Start":"03:19.195 ","End":"03:22.040","Text":"and so by a previous exercise,"},{"Start":"03:22.040 ","End":"03:25.970","Text":"the other base is P prime is also orthonormal."},{"Start":"03:25.970 ","End":"03:29.630","Text":"We have that the transformation T maps"},{"Start":"03:29.630 ","End":"03:35.035","Text":"one orthonormal basis B to another orthonormal basis B prime."},{"Start":"03:35.035 ","End":"03:41.720","Text":"As such, T must be orthogonal, again by a previous exercise."},{"Start":"03:41.720 ","End":"03:45.095","Text":"This is what we had to show that T is orthogonal,"},{"Start":"03:45.095 ","End":"03:48.150","Text":"and so we are done."}],"ID":27135},{"Watched":false,"Name":"Exercise 8","Duration":"3m 30s","ChapterTopicVideoID":26232,"CourseChapterTopicPlaylistID":253226,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:02.545","Text":"This exercise has 3 parts,"},{"Start":"00:02.545 ","End":"00:06.550","Text":"and each part we describe a reflection or rotation."},{"Start":"00:06.550 ","End":"00:09.220","Text":"Our task is to write the formula."},{"Start":"00:09.220 ","End":"00:10.795","Text":"When I say formula,"},{"Start":"00:10.795 ","End":"00:15.490","Text":"I mean like in the form T of x, y equals something."},{"Start":"00:15.490 ","End":"00:20.580","Text":"Part a is a reflection in the following line."},{"Start":"00:20.580 ","End":"00:23.950","Text":"What we want to do is figure out what"},{"Start":"00:23.950 ","End":"00:28.225","Text":"is the angle that this line makes with 
the positive x-axis."},{"Start":"00:28.225 ","End":"00:32.470","Text":"We let tangent Theta equal the slope 1 over root 3,"},{"Start":"00:32.470 ","End":"00:38.325","Text":"and that gives us Theta equals 30 degrees or Pi over 6."},{"Start":"00:38.325 ","End":"00:46.170","Text":"The reflection matrix then is the following to what\u0027s here is twice, what\u0027s here."},{"Start":"00:46.170 ","End":"00:51.080","Text":"If here is 30 degrees, then we get cosine 2 Theta, etc,"},{"Start":"00:51.080 ","End":"00:54.200","Text":"so we see a 60 degrees in the matrix."},{"Start":"00:54.200 ","End":"00:56.290","Text":"That\u0027s the matrix."},{"Start":"00:56.290 ","End":"01:00.435","Text":"Evaluating this, we get the following."},{"Start":"01:00.435 ","End":"01:03.020","Text":"Then we can get the formula for T of x,"},{"Start":"01:03.020 ","End":"01:07.190","Text":"y as the column vector by multiplying this matrix by x,"},{"Start":"01:07.190 ","End":"01:10.910","Text":"y, and we get the following."},{"Start":"01:10.910 ","End":"01:17.060","Text":"In row form, we get T of x, y equals the following."},{"Start":"01:17.060 ","End":"01:22.685","Text":"As an example, T of 2,1 plug in x equals 2,"},{"Start":"01:22.685 ","End":"01:25.390","Text":"y equals 1, and we get this."},{"Start":"01:25.390 ","End":"01:28.100","Text":"This is the picture of it."},{"Start":"01:28.100 ","End":"01:32.780","Text":"This is the line that we\u0027re given and the reflection in it takes 2,"},{"Start":"01:32.780 ","End":"01:36.710","Text":"1, to the other side of the line, the reflection."},{"Start":"01:36.710 ","End":"01:39.320","Text":"This comes out to be like we have here,"},{"Start":"01:39.320 ","End":"01:42.460","Text":"1 plus root 3/2 root 3 minus 1/2."},{"Start":"01:42.460 ","End":"01:46.220","Text":"Now we come to part b, which is similar to part a."},{"Start":"01:46.220 ","End":"01:49.685","Text":"It\u0027s just that there\u0027s root 3 here, instead of 1 over root 3."},{"Start":"01:49.685 
","End":"01:53.270","Text":"But this time, we have tangent Theta equals square root of 3"},{"Start":"01:53.270 ","End":"01:55.765","Text":"and Theta is 60 degrees."},{"Start":"01:55.765 ","End":"01:59.620","Text":"The reflection matrix, remember there\u0027s a 2 Theta here,"},{"Start":"01:59.620 ","End":"02:02.100","Text":"cosine 2 Theta, sine 2 Theta, etc."},{"Start":"02:02.100 ","End":"02:04.635","Text":"That gives us 120 degrees, etc."},{"Start":"02:04.635 ","End":"02:07.355","Text":"This comes out to be as follows;"},{"Start":"02:07.355 ","End":"02:10.430","Text":"cosines and sines of well-known angles."},{"Start":"02:10.430 ","End":"02:12.824","Text":"Apply this to a vector x, y,"},{"Start":"02:12.824 ","End":"02:16.730","Text":"and we get this matrix times vector x, y,"},{"Start":"02:16.730 ","End":"02:19.960","Text":"which comes out to be the following."},{"Start":"02:19.960 ","End":"02:23.285","Text":"In row form, it\u0027s like so."},{"Start":"02:23.285 ","End":"02:24.855","Text":"That was part b."},{"Start":"02:24.855 ","End":"02:28.910","Text":"Now part c, this time a rotation of 30 degrees"},{"Start":"02:28.910 ","End":"02:31.955","Text":"counterclockwise unless told otherwise."},{"Start":"02:31.955 ","End":"02:36.520","Text":"Rotation matrix: this time there\u0027s no 2 Theta, it\u0027s just Theta."},{"Start":"02:36.520 ","End":"02:41.180","Text":"We\u0027re given 30 degrees, so we get the following: cosine 30, minus sine 30,"},{"Start":"02:41.180 ","End":"02:43.145","Text":"sine 30, cosine 30."},{"Start":"02:43.145 ","End":"02:45.860","Text":"It\u0027s a bit similar to the reflection one, only"},{"Start":"02:45.860 ","End":"02:48.650","Text":"the reflection has a minus here instead of here,"},{"Start":"02:48.650 ","End":"02:50.575","Text":"and it\u0027s twice the Theta."},{"Start":"02:50.575 ","End":"02:53.840","Text":"This comes out to be, again, cosines and sines of"},{"Start":"02:53.840 ","End":"02:58.010","Text":"famous angles to be memorized, and we apply this to x, 
y"},{"Start":"02:58.010 ","End":"03:03.720","Text":"and we get in column form the following."},{"Start":"03:03.720 ","End":"03:09.090","Text":"Then in row form, this is what we get, and the picture."},{"Start":"03:09.090 ","End":"03:11.195","Text":"This is the picture."},{"Start":"03:11.195 ","End":"03:12.710","Text":"I forgot the example."},{"Start":"03:12.710 ","End":"03:17.210","Text":"Yeah, so take T of 1,1, plug x equals 1, y equals 1,"},{"Start":"03:17.210 ","End":"03:18.395","Text":"and we get this."},{"Start":"03:18.395 ","End":"03:23.510","Text":"This means that the vector 1,1 after a rotation of 30 degrees goes to"},{"Start":"03:23.510 ","End":"03:26.500","Text":"root 3 minus 1/2, root 3 plus 1/2,"},{"Start":"03:26.500 ","End":"03:31.210","Text":"and that concludes part c at this exercise."}],"ID":27136},{"Watched":false,"Name":"Exercise 9 parts a-c","Duration":"4m 56s","ChapterTopicVideoID":26234,"CourseChapterTopicPlaylistID":253226,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:01.965","Text":"In this exercise, there are"},{"Start":"00:01.965 ","End":"00:06.525","Text":"several parts and each of them we give a formula in terms of a matrix."},{"Start":"00:06.525 ","End":"00:10.470","Text":"We need to first check that each of them is orthogonal and then"},{"Start":"00:10.470 ","End":"00:14.635","Text":"describe it in terms of a rotation or a reflection."},{"Start":"00:14.635 ","End":"00:20.150","Text":"I\u0027ll start by reminding you what a rotation and reflection are in terms of a matrix."},{"Start":"00:20.150 ","End":"00:24.860","Text":"Rotation of angle theta as a matrix cosine theta minus sine theta,"},{"Start":"00:24.860 ","End":"00:26.450","Text":"sine theta cosine theta."},{"Start":"00:26.450 ","End":"00:32.105","Text":"The reflection in the line of slope theta over 2 
is of the form cosine,"},{"Start":"00:32.105 ","End":"00:34.925","Text":"sine, sine, minus cosine."},{"Start":"00:34.925 ","End":"00:38.885","Text":"Note that here it\u0027s theta over 2 and here it\u0027s theta."},{"Start":"00:38.885 ","End":"00:40.190","Text":"This is a variation."},{"Start":"00:40.190 ","End":"00:43.310","Text":"Sometimes we take here theta and here 2 theta."},{"Start":"00:43.310 ","End":"00:46.880","Text":"It doesn\u0027t matter as long as the angle here is double the angle here."},{"Start":"00:46.880 ","End":"00:51.245","Text":"There\u0027s an exception if the line is vertical if it\u0027s the y-axis."},{"Start":"00:51.245 ","End":"00:56.285","Text":"Now by inspection, all 4 of these matrices are orthogonal."},{"Start":"00:56.285 ","End":"00:58.925","Text":"It\u0027s simple computation."},{"Start":"00:58.925 ","End":"01:00.710","Text":"Just look at the columns."},{"Start":"01:00.710 ","End":"01:03.440","Text":"Each of them has norm of 1,"},{"Start":"01:03.440 ","End":"01:06.350","Text":"like 1 over root 2 squared plus 1 over root 2 squared is"},{"Start":"01:06.350 ","End":"01:09.975","Text":"a half plus a half is 1, and so on."},{"Start":"01:09.975 ","End":"01:13.415","Text":"The dot product of this column with this column is 0."},{"Start":"01:13.415 ","End":"01:17.034","Text":"It\u0027s just mechanical arithmetical checking."},{"Start":"01:17.034 ","End":"01:18.860","Text":"They\u0027re all orthogonal."},{"Start":"01:18.860 ","End":"01:21.285","Text":"There\u0027s actually a theorem that in 2D,"},{"Start":"01:21.285 ","End":"01:25.495","Text":"all orthogonal matrices are either rotation or reflection."},{"Start":"01:25.495 ","End":"01:27.720","Text":"Now, let\u0027s take the part separately."},{"Start":"01:27.720 ","End":"01:30.905","Text":"Part a, this is a reflection."},{"Start":"01:30.905 ","End":"01:35.705","Text":"One way of telling which is which is by seeing where the signs are opposite."},{"Start":"01:35.705 
","End":"01:38.915","Text":"If this diagonal is the same,"},{"Start":"01:38.915 ","End":"01:40.580","Text":"then it\u0027s a rotation."},{"Start":"01:40.580 ","End":"01:43.069","Text":"If this diagonal is the same,"},{"Start":"01:43.069 ","End":"01:44.480","Text":"then it\u0027s a reflection."},{"Start":"01:44.480 ","End":"01:45.980","Text":"The other diagonal here,"},{"Start":"01:45.980 ","End":"01:49.330","Text":"they\u0027re negated and similarly here."},{"Start":"01:49.330 ","End":"01:51.920","Text":"This one is a reflection because this is"},{"Start":"01:51.920 ","End":"01:54.619","Text":"the diagonal that\u0027s equal and this is the one that\u0027s opposite,"},{"Start":"01:54.619 ","End":"01:57.020","Text":"so it\u0027s this one."},{"Start":"01:57.020 ","End":"01:59.990","Text":"We just have to let cosine theta equals 1 over root"},{"Start":"01:59.990 ","End":"02:04.155","Text":"2 and sine theta equals 1 over root 2,"},{"Start":"02:04.155 ","End":"02:06.060","Text":"and it\u0027s a famous angle."},{"Start":"02:06.060 ","End":"02:09.935","Text":"This is 45 degrees or pi over 4."},{"Start":"02:09.935 ","End":"02:14.675","Text":"Theta over 2 is pi over 8,"},{"Start":"02:14.675 ","End":"02:17.910","Text":"and that\u0027s 22.5 degrees."},{"Start":"02:17.910 ","End":"02:21.380","Text":"We can say that the answer is that"},{"Start":"02:21.380 ","End":"02:29.050","Text":"this transformation is a reflection in the line y equals tangent of pi over 8 times x."},{"Start":"02:29.050 ","End":"02:32.745","Text":"Now part b, this is the matrix."},{"Start":"02:32.745 ","End":"02:35.640","Text":"This is a reflection because these 2 have"},{"Start":"02:35.640 ","End":"02:38.450","Text":"the same sign and these 2 have the opposite sign."},{"Start":"02:38.450 ","End":"02:43.225","Text":"We compare it to cosine sine, sine minus cosine."},{"Start":"02:43.225 ","End":"02:45.240","Text":"Cosine theta is three-fifths,"},{"Start":"02:45.240 ","End":"02:47.370","Text":"sine theta is 
four-fifths."},{"Start":"02:47.370 ","End":"02:57.380","Text":"We can do it by taking the tangent as sine over cosine is 4 over 3, which is positive."},{"Start":"02:57.380 ","End":"02:59.510","Text":"Theta is in the first quadrant,"},{"Start":"02:59.510 ","End":"03:01.930","Text":"it has to be in the first or the second."},{"Start":"03:01.930 ","End":"03:05.000","Text":"We\u0027re only taking angles from 0 to 180 degrees."},{"Start":"03:05.000 ","End":"03:09.830","Text":"What we need is not theta but theta over 2."},{"Start":"03:09.830 ","End":"03:13.580","Text":"Actually, we need tangent of theta over 2."},{"Start":"03:13.580 ","End":"03:15.740","Text":"I\u0027ll remind you why."},{"Start":"03:15.740 ","End":"03:21.400","Text":"It\u0027s because the reflection is in the line y equals tangent of theta over 2 times x."},{"Start":"03:21.400 ","End":"03:24.200","Text":"If we want the line, we need tangent theta over 2."},{"Start":"03:24.200 ","End":"03:28.645","Text":"We can use the formula for tangent of a double angle."},{"Start":"03:28.645 ","End":"03:32.430","Text":"Tangent 2 alpha is 2 tangent alpha over 1 minus tan squared alpha."},{"Start":"03:32.430 ","End":"03:35.160","Text":"If you let alpha equals theta over 2,"},{"Start":"03:35.160 ","End":"03:40.790","Text":"then we get four-thirds equals 2t over 1 minus t squared."},{"Start":"03:40.790 ","End":"03:44.590","Text":"The 2 cancels with the 4, leaving 2 here."},{"Start":"03:44.590 ","End":"03:48.620","Text":"Then we cross-multiply and rearrange,"},{"Start":"03:48.620 ","End":"03:53.735","Text":"and we get the quadratic equation in t as follows."},{"Start":"03:53.735 ","End":"03:57.955","Text":"This factorizes, or you could use the formula."},{"Start":"03:57.955 ","End":"04:03.320","Text":"The solutions are t equals a half or t equals minus 2."},{"Start":"04:03.320 ","End":"04:08.390","Text":"But t is the tangent of an angle that\u0027s in the first quadrant."},{"Start":"04:08.390 
","End":"04:11.965","Text":"Well, theta is in the first quadrant and so is theta over 2."},{"Start":"04:11.965 ","End":"04:14.220","Text":"Minus 2 is ruled out,"},{"Start":"04:14.220 ","End":"04:17.265","Text":"so tangent of theta over 2 is a half."},{"Start":"04:17.265 ","End":"04:25.115","Text":"That means that the answer is that we have a reflection in the line y equals 1/2 x."},{"Start":"04:25.115 ","End":"04:28.625","Text":"Now part c, this is the matrix."},{"Start":"04:28.625 ","End":"04:33.500","Text":"This is a rotation because the diagonal with the opposite signs is this diagonal."},{"Start":"04:33.500 ","End":"04:36.035","Text":"It fits this formula."},{"Start":"04:36.035 ","End":"04:39.350","Text":"We\u0027re left with cosine theta equals 0,"},{"Start":"04:39.350 ","End":"04:42.275","Text":"sine theta equals 1, so theta is 90 degrees."},{"Start":"04:42.275 ","End":"04:47.090","Text":"The answer is that it\u0027s a rotation of 90 degrees,"},{"Start":"04:47.090 ","End":"04:52.105","Text":"or if you like, pi over 2 counterclockwise around the origin."},{"Start":"04:52.105 ","End":"04:57.390","Text":"That was part c. 
Then we\u0027ll take a break and then we\u0027ll do part d."}],"ID":27138},{"Watched":false,"Name":"Exercise 9 part d","Duration":"5m 55s","ChapterTopicVideoID":26233,"CourseChapterTopicPlaylistID":253226,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:01.590","Text":"We\u0027re back from the break,"},{"Start":"00:01.590 ","End":"00:03.120","Text":"we\u0027ll do part d now; in fact,"},{"Start":"00:03.120 ","End":"00:04.949","Text":"we\u0027ll do it in 2 different ways."},{"Start":"00:04.949 ","End":"00:07.710","Text":"This is the matrix of the transformation"},{"Start":"00:07.710 ","End":"00:10.950","Text":"and you can recognize it as a reflection,"},{"Start":"00:10.950 ","End":"00:13.470","Text":"even though the minuses are a bit different,"},{"Start":"00:13.470 ","End":"00:14.610","Text":"you might find it confusing,"},{"Start":"00:14.610 ","End":"00:18.180","Text":"but this is the one that it compares to, the one"},{"Start":"00:18.180 ","End":"00:23.835","Text":"where the signs in the upper right and lower left are the same."},{"Start":"00:23.835 ","End":"00:27.045","Text":"Both minuses, but they\u0027re the same."},{"Start":"00:27.045 ","End":"00:29.760","Text":"Here they\u0027re opposite signs."},{"Start":"00:29.760 ","End":"00:33.105","Text":"We have cosine Theta is 4/5,"},{"Start":"00:33.105 ","End":"00:36.370","Text":"sine Theta is minus 3/5."},{"Start":"00:36.370 ","End":"00:39.730","Text":"Now we could continue like in part b."},{"Start":"00:39.730 ","End":"00:43.885","Text":"If we did that, we\u0027d get tangent Theta over 2 is minus 1/3."},{"Start":"00:43.885 ","End":"00:48.060","Text":"Then the transformation would be a reflection in the line y equals minus 1/3x."},{"Start":"00:48.060 ","End":"00:49.835","Text":"We\u0027ll do it this way."},{"Start":"00:49.835 
","End":"00:53.315","Text":"But we\u0027ll do that at the end; I wanted to do it a different way first."},{"Start":"00:53.315 ","End":"00:58.850","Text":"You could note that our matrix is the product of 2 matrices,"},{"Start":"00:58.850 ","End":"01:00.080","Text":"the one from part c,"},{"Start":"01:00.080 ","End":"01:02.000","Text":"and the one from part b."},{"Start":"01:02.000 ","End":"01:05.675","Text":"This might be given or you might try to break it up."},{"Start":"01:05.675 ","End":"01:07.940","Text":"This is a rotation of 90 degrees,"},{"Start":"01:07.940 ","End":"01:11.835","Text":"so we could write this equals something times this."},{"Start":"01:11.835 ","End":"01:16.440","Text":"We could figure out this by multiplying this by the inverse of this."},{"Start":"01:16.440 ","End":"01:19.270","Text":"The inverse is the transpose since it\u0027s orthogonal."},{"Start":"01:19.270 ","End":"01:20.675","Text":"Just put the minus here,"},{"Start":"01:20.675 ","End":"01:22.595","Text":"multiply and you\u0027d get this,"},{"Start":"01:22.595 ","End":"01:24.290","Text":"or just do it by trial and error,"},{"Start":"01:24.290 ","End":"01:29.005","Text":"whatever the method, we have it given to us that this is the case."},{"Start":"01:29.005 ","End":"01:34.040","Text":"That means that we have a composite transformation."},{"Start":"01:34.040 ","End":"01:35.480","Text":"We do it from right to left."},{"Start":"01:35.480 ","End":"01:36.875","Text":"First apply this one,"},{"Start":"01:36.875 ","End":"01:40.265","Text":"which is part c, the 90 degree rotation."},{"Start":"01:40.265 ","End":"01:43.070","Text":"Then from part b, this one,"},{"Start":"01:43.070 ","End":"01:47.030","Text":"which is a reflection in the line y equals 1/2x."},{"Start":"01:47.030 ","End":"01:50.780","Text":"You could say the answer is we have a rotation of 90 degrees"},{"Start":"01:50.780 ","End":"01:54.155","Text":"followed by a reflection in this line."},{"Start":"01:54.155 ","End":"01:56.180","Text":"I\u0027d like to 
illustrate it further."},{"Start":"01:56.180 ","End":"01:59.120","Text":"Let\u0027s take an example, the point 1, 2,"},{"Start":"01:59.120 ","End":"02:03.365","Text":"and see what happens to it both ways."},{"Start":"02:03.365 ","End":"02:05.390","Text":"Whether we do it the way from part b,"},{"Start":"02:05.390 ","End":"02:09.815","Text":"or whether we do it this way with 2 pieces and 2 jumps."},{"Start":"02:09.815 ","End":"02:14.330","Text":"The first way we just multiply this matrix straight away by 1, 2,"},{"Start":"02:14.330 ","End":"02:15.875","Text":"which is our starting point,"},{"Start":"02:15.875 ","End":"02:19.625","Text":"and we get to the end point as a fraction or a decimal."},{"Start":"02:19.625 ","End":"02:23.270","Text":"Let\u0027s say this: minus 0.4, minus 2.2."},{"Start":"02:23.270 ","End":"02:27.140","Text":"The other way is to apply it in 2 pieces."},{"Start":"02:27.140 ","End":"02:30.280","Text":"We have this product applied to this vector."},{"Start":"02:30.280 ","End":"02:32.210","Text":"First we do the right-hand one,"},{"Start":"02:32.210 ","End":"02:34.335","Text":"and that gives us minus 2, 1."},{"Start":"02:34.335 ","End":"02:35.810","Text":"That\u0027s the intermediate step,"},{"Start":"02:35.810 ","End":"02:37.610","Text":"rotated 90 degrees."},{"Start":"02:37.610 ","End":"02:40.620","Text":"Then we apply this matrix to this."},{"Start":"02:40.620 ","End":"02:46.055","Text":"That reflects in the line y equals 1/2x and gives us this."},{"Start":"02:46.055 ","End":"02:48.575","Text":"Now a diagram will help."},{"Start":"02:48.575 ","End":"02:50.425","Text":"Let\u0027s explain."},{"Start":"02:50.425 ","End":"02:54.075","Text":"We start with the vector 1, 2."},{"Start":"02:54.075 ","End":"02:57.320","Text":"In the first way, the expressway,"},{"Start":"02:57.320 ","End":"03:01.550","Text":"we just multiplied by the matrix and got from here straight to here."},{"Start":"03:01.550 ","End":"03:06.620","Text":"This is a reflection in the line y 
equals minus 1/3x,"},{"Start":"03:06.620 ","End":"03:08.740","Text":"which I explained at the beginning."},{"Start":"03:08.740 ","End":"03:12.750","Text":"You\u0027ll see how we got to this in the continuation."},{"Start":"03:12.750 ","End":"03:15.305","Text":"The other way was to break it up into 2 steps."},{"Start":"03:15.305 ","End":"03:17.570","Text":"First of all, this matrix,"},{"Start":"03:17.570 ","End":"03:19.670","Text":"which is a rotation of 90 degrees,"},{"Start":"03:19.670 ","End":"03:22.975","Text":"and that brings us to this minus 2, 1."},{"Start":"03:22.975 ","End":"03:27.170","Text":"Then a reflection in this line,"},{"Start":"03:27.170 ","End":"03:30.950","Text":"y equals 1/2x brings us from here to here."},{"Start":"03:30.950 ","End":"03:35.150","Text":"It\u0027s either one reflection through this line"},{"Start":"03:35.150 ","End":"03:39.110","Text":"or a rotation followed by reflection here."},{"Start":"03:39.110 ","End":"03:43.205","Text":"Next I\u0027ll explain how we got to this y equals minus 1/3x."},{"Start":"03:43.205 ","End":"03:45.860","Text":"Here we are again with the same matrix."},{"Start":"03:45.860 ","End":"03:48.425","Text":"We\u0027re going to do it like we did in part b."},{"Start":"03:48.425 ","End":"03:54.080","Text":"We identify this as a reflection because it fits this pattern."},{"Start":"03:54.080 ","End":"03:56.120","Text":"Even though there\u0027s 2 minuses here,"},{"Start":"03:56.120 ","End":"03:57.895","Text":"these have the same sign,"},{"Start":"03:57.895 ","End":"04:00.020","Text":"just like these 2 do."},{"Start":"04:00.020 ","End":"04:04.865","Text":"We have a reflection through the line with angle Theta over 2."},{"Start":"04:04.865 ","End":"04:09.050","Text":"We know cosine Theta and sine Theta from here and here."},{"Start":"04:09.050 ","End":"04:13.490","Text":"Because of this, we know that tangent Theta\u0027s minus 3/4,"},{"Start":"04:13.490 ","End":"04:14.765","Text":"it\u0027s sine over 
cosine."},{"Start":"04:14.765 ","End":"04:17.464","Text":"But we also know that it\u0027s in the 4th quadrant."},{"Start":"04:17.464 ","End":"04:22.390","Text":"That\u0027s where the x is positive and the y is negative."},{"Start":"04:22.390 ","End":"04:28.985","Text":"Which means that Theta is between 270 degrees and 360 degrees."},{"Start":"04:28.985 ","End":"04:31.820","Text":"This also tells us where Theta over 2 is."},{"Start":"04:31.820 ","End":"04:38.860","Text":"It will be between 135 degrees and 180 degrees; it\u0027s 1/2 of that."},{"Start":"04:38.860 ","End":"04:41.870","Text":"We also know, and this will be useful for us,"},{"Start":"04:41.870 ","End":"04:46.640","Text":"that when the angle is in this range between 135 and 180,"},{"Start":"04:46.640 ","End":"04:50.135","Text":"the tangent is between minus 1 and 0."},{"Start":"04:50.135 ","End":"04:52.550","Text":"Now let\u0027s use the same trick as in part b,"},{"Start":"04:52.550 ","End":"04:54.785","Text":"we have the formula, tan 2 Alpha,"},{"Start":"04:54.785 ","End":"04:58.400","Text":"2 tan Alpha over 1 minus tan squared Alpha."},{"Start":"04:58.400 ","End":"05:01.055","Text":"Take Alpha to be 1/2 Theta."},{"Start":"05:01.055 ","End":"05:02.690","Text":"t is tangent Alpha"},{"Start":"05:02.690 ","End":"05:04.325","Text":"or tangent of 1/2 Theta."},{"Start":"05:04.325 ","End":"05:10.310","Text":"What we get is tangent of Theta is minus 3/4 from here"},{"Start":"05:10.310 ","End":"05:13.170","Text":"and from here, tan Alpha is t,"},{"Start":"05:13.170 ","End":"05:16.590","Text":"2t over 1 minus t squared."},{"Start":"05:17.060 ","End":"05:22.535","Text":"By rearranging, we get a quadratic equation like so,"},{"Start":"05:22.535 ","End":"05:25.220","Text":"which happens to factorize like this,"},{"Start":"05:25.220 ","End":"05:29.795","Text":"which means that we have 2 possibilities for t: minus 1/3 or 3."},{"Start":"05:29.795 ","End":"05:35.450","Text":"But because tangent Alpha has to be in the range from 
minus 1 to 0,"},{"Start":"05:35.450 ","End":"05:36.710","Text":"this is in the range,"},{"Start":"05:36.710 ","End":"05:39.350","Text":"but this isn\u0027t, so we throw that one out,"},{"Start":"05:39.350 ","End":"05:43.550","Text":"and that gives us that tangent of Theta over 2 is minus 1/3,"},{"Start":"05:43.550 ","End":"05:48.545","Text":"which means that the reflection is in the line y equals minus 1/3x,"},{"Start":"05:48.545 ","End":"05:50.525","Text":"just like I told you it would be."},{"Start":"05:50.525 ","End":"05:55.980","Text":"All right, we\u0027ve done it in 2 ways and now we\u0027re done."}],"ID":27137},{"Watched":false,"Name":"Exercise 10","Duration":"3m 17s","ChapterTopicVideoID":26218,"CourseChapterTopicPlaylistID":253226,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:03.990","Text":"In this exercise, we\u0027re going to prove something that we"},{"Start":"00:03.990 ","End":"00:08.085","Text":"presented in the tutorial but didn\u0027t give a proof for."},{"Start":"00:08.085 ","End":"00:12.209","Text":"That is that the transformation of rotation"},{"Start":"00:12.209 ","End":"00:16.260","Text":"by an angle of Theta is given by the formula here."},{"Start":"00:16.260 ","End":"00:18.465","Text":"Won\u0027t read it out. 
There it is."},{"Start":"00:18.465 ","End":"00:19.890","Text":"Now let\u0027s prove it."},{"Start":"00:19.890 ","End":"00:21.830","Text":"The standard basis of R^2,"},{"Start":"00:21.830 ","End":"00:25.290","Text":"I write it in row form, is usually given the letter E"},{"Start":"00:25.290 ","End":"00:28.680","Text":"and it\u0027s 1,0 and 0,1."},{"Start":"00:28.680 ","End":"00:31.680","Text":"The first thing we\u0027ll do is compute what the transformation"},{"Start":"00:31.680 ","End":"00:35.580","Text":"does to these 2 basis vectors."},{"Start":"00:35.580 ","End":"00:39.275","Text":"This is 1,0, let\u0027s call it u,"},{"Start":"00:39.275 ","End":"00:44.545","Text":"and rotate it by an angle Theta so we get to T of u,"},{"Start":"00:44.545 ","End":"00:47.195","Text":"and just by simple trigonometry,"},{"Start":"00:47.195 ","End":"00:50.510","Text":"its coordinates are cosine Theta sine Theta."},{"Start":"00:50.510 ","End":"00:56.250","Text":"Now, the other basis member 0,1, I\u0027ll also call it u,"},{"Start":"00:56.250 ","End":"01:00.095","Text":"we\u0027re reusing the same letter and rotate it by Theta."},{"Start":"01:00.095 ","End":"01:03.095","Text":"What we get using degrees,"},{"Start":"01:03.095 ","End":"01:08.240","Text":"this is cosine of 90 plus Theta sine of 90 plus Theta,"},{"Start":"01:08.240 ","End":"01:12.080","Text":"because we have 90 degrees up to here and another Theta,"},{"Start":"01:12.080 ","End":"01:14.440","Text":"so it\u0027s 90 plus Theta."},{"Start":"01:14.440 ","End":"01:16.545","Text":"I\u0027m going to repeat that."},{"Start":"01:16.545 ","End":"01:23.380","Text":"What I said was that these 2 are both on the unit circle,"},{"Start":"01:23.380 ","End":"01:28.950","Text":"one\u0027s with angle Theta and one\u0027s with angle 90 plus Theta."},{"Start":"01:28.950 ","End":"01:33.195","Text":"T of 1,0 like we said is here."},{"Start":"01:33.195 ","End":"01:35.235","Text":"T of 0,1,"},{"Start":"01:35.235 ","End":"01:39.815","Text":"we can 
simplify a bit instead of writing it like we did here."},{"Start":"01:39.815 ","End":"01:44.330","Text":"Cosine of 90 plus Theta is minus sine Theta,"},{"Start":"01:44.330 ","End":"01:47.870","Text":"and sine of 90 degrees plus Theta is cosine Theta,"},{"Start":"01:47.870 ","End":"01:50.605","Text":"basic trigonometric identities."},{"Start":"01:50.605 ","End":"01:55.910","Text":"Now let\u0027s compute what T does to a general vector x,y."},{"Start":"01:55.910 ","End":"02:00.365","Text":"We can write it as a linear combination of the basis members,"},{"Start":"02:00.365 ","End":"02:04.105","Text":"x times 1,0 plus y times 0,1."},{"Start":"02:04.105 ","End":"02:10.530","Text":"T of x,y by linearity is x times T of 1,0 plus y times T of 0,1."},{"Start":"02:10.530 ","End":"02:16.890","Text":"T of 1,0, we have from here, and T of 0,1, we have from here."},{"Start":"02:16.890 ","End":"02:19.050","Text":"This is what we have."},{"Start":"02:19.050 ","End":"02:22.430","Text":"Now we can expand, multiply out by x here,"},{"Start":"02:22.430 ","End":"02:24.800","Text":"by y here, and then add coordinate-wise."},{"Start":"02:24.800 ","End":"02:25.900","Text":"We\u0027ve got this,"},{"Start":"02:25.900 ","End":"02:31.030","Text":"and this is the formula we need, or in column form, like so,"},{"Start":"02:31.030 ","End":"02:35.095","Text":"and that gives us the rotation matrix that we\u0027re familiar with."},{"Start":"02:35.095 ","End":"02:36.925","Text":"If you want to do this another way,"},{"Start":"02:36.925 ","End":"02:41.600","Text":"once you have what the basis members go to,"},{"Start":"02:41.600 ","End":"02:44.470","Text":"if you put them in column form, like so,"},{"Start":"02:44.470 ","End":"02:48.005","Text":"this is just what we wrote here, but in column form,"},{"Start":"02:48.005 ","End":"02:51.920","Text":"then the matrix you can get by just"},{"Start":"02:51.920 ","End":"02:56.000","Text":"taking this column and this column and just putting them 
together."},{"Start":"02:56.000 ","End":"02:59.360","Text":"This is the matrix for the transformation."},{"Start":"02:59.360 ","End":"03:01.280","Text":"If we actually do multiplying out,"},{"Start":"03:01.280 ","End":"03:03.565","Text":"we get this vector."},{"Start":"03:03.565 ","End":"03:05.115","Text":"In row form,"},{"Start":"03:05.115 ","End":"03:06.960","Text":"like so, which is the same here."},{"Start":"03:06.960 ","End":"03:08.330","Text":"This is the same as this,"},{"Start":"03:08.330 ","End":"03:10.580","Text":"and the matrix here is the same as the matrix here,"},{"Start":"03:10.580 ","End":"03:12.994","Text":"just did it 2 different ways."},{"Start":"03:12.994 ","End":"03:18.180","Text":"Anyway, this is what we had to prove and so we\u0027re done."}],"ID":27122},{"Watched":false,"Name":"Exercise 11","Duration":"3m 30s","ChapterTopicVideoID":26219,"CourseChapterTopicPlaylistID":253226,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:02.820","Text":"In this exercise, we\u0027re going to prove"},{"Start":"00:02.820 ","End":"00:07.890","Text":"the formula that we gave in the tutorial but without proof."},{"Start":"00:07.890 ","End":"00:11.520","Text":"That\u0027s the formula for reflection in the line that makes"},{"Start":"00:11.520 ","End":"00:15.960","Text":"an angle not Theta but Theta over 2 with the positive x-axis,"},{"Start":"00:15.960 ","End":"00:17.564","Text":"and this is the formula."},{"Start":"00:17.564 ","End":"00:21.135","Text":"There was also a variation where if we put Theta here,"},{"Start":"00:21.135 ","End":"00:22.920","Text":"we put 2 Theta here,"},{"Start":"00:22.920 ","End":"00:26.265","Text":"either or we\u0027ll go with this variation where the angle is"},{"Start":"00:26.265 ","End":"00:30.870","Text":"Theta over 2 the mirror makes with the positive 
x-axis."},{"Start":"00:30.870 ","End":"00:36.060","Text":"The plan is first to figure out what T does to elements of a basis."},{"Start":"00:36.060 ","End":"00:41.710","Text":"We take the standard basis, which is 1,0 and 0,1."},{"Start":"00:41.710 ","End":"00:46.660","Text":"That\u0027s a unit vector in the x direction and the unit vector in the y direction."},{"Start":"00:46.660 ","End":"00:52.235","Text":"We\u0027ll compute these 2 first and diagrams might help."},{"Start":"00:52.235 ","End":"00:56.375","Text":"This is the first basis element, 1,0."},{"Start":"00:56.375 ","End":"01:01.055","Text":"Now, we said that the angle of the mirror with the x-axis is Theta over 2."},{"Start":"01:01.055 ","End":"01:05.195","Text":"After we reflect this, this total angle will be Theta,"},{"Start":"01:05.195 ","End":"01:09.140","Text":"so T of u will be cosine Theta sine Theta."},{"Start":"01:09.140 ","End":"01:14.990","Text":"For the other one, this is the unit vector in the y direction,"},{"Start":"01:14.990 ","End":"01:18.394","Text":"the second member of the standard basis."},{"Start":"01:18.394 ","End":"01:21.625","Text":"We want to reflect it in this line here."},{"Start":"01:21.625 ","End":"01:25.715","Text":"Look, this angle here is Theta over 2,"},{"Start":"01:25.715 ","End":"01:30.665","Text":"so the complement of the angle is 90 minus Theta over 2."},{"Start":"01:30.665 ","End":"01:34.020","Text":"Now, this angle that\u0027s marked in red is"},{"Start":"01:34.020 ","End":"01:37.865","Text":"equal to this angle that\u0027s marked in red because this is a mirror."},{"Start":"01:37.865 ","End":"01:40.610","Text":"If this is 90 minus Theta over 2,"},{"Start":"01:40.610 ","End":"01:43.720","Text":"these 2 together are 90 minus Theta over 2,"},{"Start":"01:43.720 ","End":"01:45.560","Text":"if we subtract the Theta over 2,"},{"Start":"01:45.560 ","End":"01:48.230","Text":"we\u0027re left with 90 minus Theta."},{"Start":"01:48.230 ","End":"01:52.490","Text":"But because the 
angle is clockwise, it\u0027s a negative angle,"},{"Start":"01:52.490 ","End":"01:56.860","Text":"it\u0027s not 90 minus Theta, it\u0027s Theta minus 90."},{"Start":"01:56.860 ","End":"01:59.280","Text":"I just summarized what we had so far."},{"Start":"01:59.280 ","End":"02:04.280","Text":"Yeah, 1,0 and 0,1 are taken to points on the unit circle with"},{"Start":"02:04.280 ","End":"02:10.400","Text":"angles Theta and Theta minus 90, respectively."},{"Start":"02:10.400 ","End":"02:14.940","Text":"T takes 1,0 to cos Theta sine Theta,"},{"Start":"02:14.940 ","End":"02:21.120","Text":"and 0,1 goes to cos Theta minus 90 sine Theta minus 90."},{"Start":"02:21.120 ","End":"02:24.485","Text":"Using basic trigonometric identities,"},{"Start":"02:24.485 ","End":"02:30.425","Text":"this is equal to sine Theta and this is equal to minus cosine Theta."},{"Start":"02:30.425 ","End":"02:34.790","Text":"Now we continue to figure out what T does to a general x, y."},{"Start":"02:34.790 ","End":"02:39.880","Text":"Well, x, y can be written as a linear combination of 1,0 and 0,1."},{"Start":"02:39.880 ","End":"02:42.410","Text":"If we apply T to it, then by linearity,"},{"Start":"02:42.410 ","End":"02:46.190","Text":"this is equal to xT of 1,0 plus yT of 0,1."},{"Start":"02:46.190 ","End":"02:50.270","Text":"These 2 we already know, it\u0027s this and this."},{"Start":"02:50.270 ","End":"02:53.690","Text":"Just doing the computation, we get this."},{"Start":"02:53.690 ","End":"02:55.655","Text":"This is what we had to show,"},{"Start":"02:55.655 ","End":"02:58.130","Text":"but I\u0027d like to also do it in column form."},{"Start":"02:58.130 ","End":"03:01.190","Text":"The other way of doing it is saying that 1,0"},{"Start":"03:01.190 ","End":"03:04.100","Text":"in column form goes to cosine Theta sine Theta,"},{"Start":"03:04.100 ","End":"03:07.180","Text":"and 0,1 goes to sine Theta minus cosine Theta."},{"Start":"03:07.180 ","End":"03:10.100","Text":"These 2 columns, if you paste 
them together,"},{"Start":"03:10.100 ","End":"03:12.199","Text":"form the columns of a matrix,"},{"Start":"03:12.199 ","End":"03:14.930","Text":"and this is the matrix of the transformation."},{"Start":"03:14.930 ","End":"03:18.940","Text":"T of x, y is this matrix times x, y."},{"Start":"03:18.940 ","End":"03:21.860","Text":"If you do the matrix multiplication,"},{"Start":"03:21.860 ","End":"03:25.940","Text":"we get this and this, which is the same as this,"},{"Start":"03:25.940 ","End":"03:30.780","Text":"this is row form and this is column form. We\u0027re done."}],"ID":27123},{"Watched":false,"Name":"Exercise 12","Duration":"4m 59s","ChapterTopicVideoID":26220,"CourseChapterTopicPlaylistID":253226,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:03.420","Text":"In this exercise, we\u0027re going to prove something we stated in"},{"Start":"00:03.420 ","End":"00:08.340","Text":"the tutorial: that a linear transformation in 2D"},{"Start":"00:08.340 ","End":"00:13.020","Text":"is orthogonal if and only if it\u0027s a rotation or a reflection."},{"Start":"00:13.020 ","End":"00:17.920","Text":"Those are the only 2 possibilities for an orthogonal transformation."},{"Start":"00:18.050 ","End":"00:21.240","Text":"Let\u0027s do the only if part,"},{"Start":"00:21.240 ","End":"00:25.080","Text":"which means that we take T to be orthogonal,"},{"Start":"00:25.080 ","End":"00:28.230","Text":"we have to prove it\u0027s either a rotation or a reflection."},{"Start":"00:28.230 ","End":"00:34.560","Text":"We can express T in matrix form as T of x, y is matrix A times x, y,"},{"Start":"00:34.560 ","End":"00:38.775","Text":"where A is some a, b, c, d orthogonal matrix."},{"Start":"00:38.775 ","End":"00:43.425","Text":"Since A is orthogonal, then the columns of A,"},{"Start":"00:43.425 ","End":"00:48.340","Text":"let\u0027s 
call them as follows: col_1of A and col_2 of A."},{"Start":"00:48.340 ","End":"00:52.750","Text":"They\u0027re going to be an orthonormal set, an orthonormal basis even."},{"Start":"00:52.750 ","End":"00:56.630","Text":"That means that the columns ac and bd from"},{"Start":"00:56.630 ","End":"01:00.020","Text":"here are mutually perpendicular and have unit length."},{"Start":"01:00.020 ","End":"01:01.369","Text":"That\u0027s what it means for orthonormal."},{"Start":"01:01.369 ","End":"01:03.155","Text":"Each has a unit length,"},{"Start":"01:03.155 ","End":"01:04.640","Text":"norm equals 1,"},{"Start":"01:04.640 ","End":"01:07.159","Text":"and they are mutually perpendicular,"},{"Start":"01:07.159 ","End":"01:12.230","Text":"meaning the dot product of each of the other is 0."},{"Start":"01:12.760 ","End":"01:16.550","Text":"Now the vector ac, it\u0027s a unit vector,"},{"Start":"01:16.550 ","End":"01:20.030","Text":"it\u0027s on the unit circle so ac has to"},{"Start":"01:20.030 ","End":"01:24.690","Text":"be equal to cosine Theta sine Theta for some angle Theta."},{"Start":"01:25.550 ","End":"01:32.970","Text":"The vector bd is perpendicular to ac so there\u0027s only 2 possibilities."},{"Start":"01:32.970 ","End":"01:35.730","Text":"I\u0027ll give you the diagram first."},{"Start":"01:35.730 ","End":"01:39.515","Text":"Here\u0027s the diagram. 
This is a, c."},{"Start":"01:39.515 ","End":"01:42.390","Text":"If you want something perpendicular and also on the unit circle,"},{"Start":"01:42.390 ","End":"01:46.435","Text":"it\u0027s either the 90 degrees this way or 90 degrees this way."},{"Start":"01:46.435 ","End":"01:52.990","Text":"So b, d is either at this point Q or the opposite Q prime."},{"Start":"01:56.390 ","End":"02:04.659","Text":"B, d is either, in this case, cosine of Theta plus 90 sine of Theta plus 90,"},{"Start":"02:04.659 ","End":"02:11.900","Text":"which, by trigonometric identities, minus sine Theta cosine Theta."},{"Start":"02:11.900 ","End":"02:14.074","Text":"Or in the other case,"},{"Start":"02:14.074 ","End":"02:20.280","Text":"it\u0027s cosine of Theta minus 90 sine of Theta minus 90,"},{"Start":"02:20.280 ","End":"02:24.710","Text":"first case, second case, either plus 90 or minus 90."},{"Start":"02:24.710 ","End":"02:30.090","Text":"This by trigonometric identities is a sine Theta minus cosine Theta."},{"Start":"02:30.350 ","End":"02:37.290","Text":"In the first case, T of x, y is cosine Theta sine Theta,"},{"Start":"02:37.290 ","End":"02:47.520","Text":"That\u0027s our ac and bd is minus sine Theta cosine Theta."},{"Start":"02:48.130 ","End":"02:53.830","Text":"This is the matrix for rotation by angle Theta around the origin,"},{"Start":"02:53.830 ","End":"02:56.170","Text":"or in the other case,"},{"Start":"02:56.170 ","End":"02:58.980","Text":"T of x, y is the same cosine Theta sine Theta,"},{"Start":"02:58.980 ","End":"03:05.135","Text":"but the other possibility for the second column sine Theta minus cosine Theta."},{"Start":"03:05.135 ","End":"03:09.520","Text":"This matrix is a matrix which is reflection in the line through"},{"Start":"03:09.520 ","End":"03:13.665","Text":"the origin with angle Theta over 2,"},{"Start":"03:13.665 ","End":"03:18.495","Text":"with the positive x-axis here, Theta over 2."},{"Start":"03:18.495 ","End":"03:20.790","Text":"This is the mirror 
line."},{"Start":"03:20.790 ","End":"03:26.680","Text":"In the diagram, you can see it that the x-axis,"},{"Start":"03:26.680 ","End":"03:31.615","Text":"or rather a unit vector here reflected in the mirror takes us to here."},{"Start":"03:31.615 ","End":"03:34.750","Text":"Unit vector in the direction of the y-axis,"},{"Start":"03:34.750 ","End":"03:37.640","Text":"this line here, if you reflect it in the mirror,"},{"Start":"03:37.640 ","End":"03:40.165","Text":"goes to this 1 here."},{"Start":"03:40.165 ","End":"03:42.200","Text":"In case there\u0027s a rotation,"},{"Start":"03:42.200 ","End":"03:51.770","Text":"this 1 goes to this 1 and this vector goes to this vector."},{"Start":"03:51.770 ","End":"03:55.880","Text":"That concludes the only if part of the proof."},{"Start":"03:55.880 ","End":"03:58.970","Text":"Now the if part, which is the easier direction,"},{"Start":"03:58.970 ","End":"04:01.325","Text":"we\u0027ve almost basically proved it."},{"Start":"04:01.325 ","End":"04:05.000","Text":"We have to show that a rotation is orthogonal"},{"Start":"04:05.000 ","End":"04:07.340","Text":"and the reflection is orthogonal."},{"Start":"04:07.340 ","End":"04:10.955","Text":"Well, if T is a rotation, the matrix is this,"},{"Start":"04:10.955 ","End":"04:17.285","Text":"which is orthogonal because the columns are this and this,"},{"Start":"04:17.285 ","End":"04:19.790","Text":"and they form an orthonormal set."},{"Start":"04:19.790 ","End":"04:22.370","Text":"What we have to check is that the norm of each is"},{"Start":"04:22.370 ","End":"04:26.660","Text":"1 and the dot product of each with the other is 0."},{"Start":"04:26.660 ","End":"04:32.395","Text":"For this 1 we get cosine squared Theta plus sine squared Theta equals 1."},{"Start":"04:32.395 ","End":"04:35.240","Text":"The norm of this 1 squared is"},{"Start":"04:35.240 ","End":"04:38.870","Text":"minus sine Theta squared plus cosine Theta squared is also 1 and"},{"Start":"04:38.870 
","End":"04:42.700","Text":"the dot-product is minus cosine Theta sine Theta"},{"Start":"04:42.700 ","End":"04:48.450","Text":"plus sine Theta cosine Theta, and that is equal to 0."},{"Start":"04:48.450 ","End":"04:55.890","Text":"That\u0027s for this matrix and the case of a reflection is practically the same."},{"Start":"04:55.890 ","End":"04:59.940","Text":"That concludes this exercise."}],"ID":27124}],"Thumbnail":null,"ID":253226},{"Name":"The Spectral Theorem","TopicPlaylistFirstVideoID":0,"Duration":null,"Videos":[{"Watched":false,"Name":"Introduction","Duration":"13m 32s","ChapterTopicVideoID":29341,"CourseChapterTopicPlaylistID":294460,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.620","Text":"Hi. Welcome to our first video on"},{"Start":"00:04.620 ","End":"00:09.720","Text":"the spectral theorem for real symmetric matrices playlist."},{"Start":"00:09.720 ","End":"00:14.850","Text":"Now this video really is just an introduction and will preface some of"},{"Start":"00:14.850 ","End":"00:20.295","Text":"the ideas that are key for making use of the spectral theorem."},{"Start":"00:20.295 ","End":"00:25.815","Text":"Now, we\u0027ll start with the motivations to why we might want to use this."},{"Start":"00:25.815 ","End":"00:30.375","Text":"It\u0027s to raise a diagonalizable, not diagonal."},{"Start":"00:30.375 ","End":"00:32.220","Text":"Notice they\u0027re 2 different things."},{"Start":"00:32.220 ","End":"00:35.040","Text":"Matrices to a high power, say,"},{"Start":"00:35.040 ","End":"00:38.480","Text":"n. 
Now this has expansive uses."},{"Start":"00:38.480 ","End":"00:40.955","Text":"One of these is in ODEs,"},{"Start":"00:40.955 ","End":"00:43.445","Text":"so ordinary differential equations."},{"Start":"00:43.445 ","End":"00:46.550","Text":"Another one is in stochastic processes."},{"Start":"00:46.550 ","End":"00:50.390","Text":"These are processes with an element of randomness."},{"Start":"00:50.390 ","End":"00:56.780","Text":"For example, you may draw from a normal distribution or a Brownian distribution,"},{"Start":"00:56.780 ","End":"00:59.455","Text":"which is one and the same thing."},{"Start":"00:59.455 ","End":"01:06.725","Text":"Numerical analysis and all other linear systems that change over time."},{"Start":"01:06.725 ","End":"01:10.940","Text":"Now, it might be useful to just make the distinction quickly"},{"Start":"01:10.940 ","End":"01:15.920","Text":"even just what a diagonal matrix is."},{"Start":"01:15.920 ","End":"01:24.860","Text":"A diagonal matrix is one where all the diagonal components may have some values."},{"Start":"01:24.860 ","End":"01:28.175","Text":"Say a_ 1 to maybe a_ n,"},{"Start":"01:28.175 ","End":"01:34.115","Text":"But then all the other entries in this matrix are 0."},{"Start":"01:34.115 ","End":"01:39.605","Text":"For example, if this is the first row and this is the nth column,"},{"Start":"01:39.605 ","End":"01:41.420","Text":"well then this would be a 0,"},{"Start":"01:41.420 ","End":"01:44.825","Text":"because it\u0027s not on the leading diagonal."},{"Start":"01:44.825 ","End":"01:47.360","Text":"Now why a diagonal matrix is nice,"},{"Start":"01:47.360 ","End":"01:52.205","Text":"because if we raised this matrix to say some power n,"},{"Start":"01:52.205 ","End":"01:55.275","Text":"which just is some integer, well,"},{"Start":"01:55.275 ","End":"02:01.530","Text":"then that\u0027s actually just equal to a_1^n, a_2^n."},{"Start":"02:01.530 ","End":"02:08.255","Text":"Then all these diagonal entries really are just raised to 
some power n. Then of course,"},{"Start":"02:08.255 ","End":"02:11.525","Text":"all the other entries are 0 as well."},{"Start":"02:11.525 ","End":"02:15.995","Text":"Now not all matrices have this property where they\u0027re diagonal."},{"Start":"02:15.995 ","End":"02:21.940","Text":"What\u0027s the next best thing we can do when we can see if a matrix is diagonalizable?"},{"Start":"02:21.940 ","End":"02:25.520","Text":"If that\u0027s true, then we will be able to make use"},{"Start":"02:25.520 ","End":"02:28.990","Text":"of this fact and it will make our lives a lot easier."},{"Start":"02:28.990 ","End":"02:33.795","Text":"These are ideas that are quite key to the spectral theorem."},{"Start":"02:33.795 ","End":"02:39.430","Text":"Let\u0027s now look at what it means for a matrix to be diagonalizable."},{"Start":"02:39.430 ","End":"02:43.565","Text":"If a matrix A is diagonalizable,"},{"Start":"02:43.565 ","End":"02:49.715","Text":"in which case there exists an invertible matrix P and a diagonal matrix D,"},{"Start":"02:49.715 ","End":"02:56.205","Text":"such that A is equal to P times by D multiplied by P inverse."},{"Start":"02:56.205 ","End":"03:02.640","Text":"Then A^n is just equal to PD^n P inverse."},{"Start":"03:02.640 ","End":"03:04.580","Text":"Now why does this help us?"},{"Start":"03:04.580 ","End":"03:07.160","Text":"Because, say, we just wanted to work"},{"Start":"03:07.160 ","End":"03:13.340","Text":"out A^n and we didn\u0027t know is diagonalizable and we went about it the normal way."},{"Start":"03:13.340 ","End":"03:15.919","Text":"Let\u0027s just take a general matrix."},{"Start":"03:15.919 ","End":"03:20.890","Text":"We\u0027d say it might have entries a_1, a_2."},{"Start":"03:20.890 ","End":"03:23.855","Text":"Let\u0027s just say this is a_2 as well and this is a_3."},{"Start":"03:23.855 ","End":"03:28.415","Text":"It preserves this nature of being real and symmetric."},{"Start":"03:28.415 ","End":"03:33.324","Text":"Then if we\u0027re going 
to do that to the power of n, well,"},{"Start":"03:33.324 ","End":"03:34.925","Text":"if n is very large,"},{"Start":"03:34.925 ","End":"03:38.446","Text":"then this is going to require a lot of computational work,"},{"Start":"03:38.446 ","End":"03:42.095","Text":"because what we would have to do is we would have to do a_1,"},{"Start":"03:42.095 ","End":"03:45.185","Text":"a_ 2, a_2, a_ 3,"},{"Start":"03:45.185 ","End":"03:48.545","Text":"and then multiply that with itself n times,"},{"Start":"03:48.545 ","End":"03:53.000","Text":"and then eventually this final one, a_2, a_3."},{"Start":"03:53.000 ","End":"03:55.130","Text":"Now, this is a very nice."},{"Start":"03:55.130 ","End":"03:59.060","Text":"If we could get A, which was this matrix here,"},{"Start":"03:59.060 ","End":"04:02.665","Text":"into say, this form."},{"Start":"04:02.665 ","End":"04:05.650","Text":"Well, it\u0027s a lot easier to do A^n"},{"Start":"04:05.650 ","End":"04:09.185","Text":"because then we\u0027re only doing a diagonal matrix to the power of n,"},{"Start":"04:09.185 ","End":"04:12.065","Text":"and we need to multiply it by P and P inverse,"},{"Start":"04:12.065 ","End":"04:15.830","Text":"which computationally is a lot easier."},{"Start":"04:15.830 ","End":"04:20.180","Text":"Now, it\u0027s quite difficult to do this and it\u0027s not really"},{"Start":"04:20.180 ","End":"04:25.335","Text":"obvious how we find out what these P and D matrices are."},{"Start":"04:25.335 ","End":"04:26.945","Text":"You\u0027re also may be wondering,"},{"Start":"04:26.945 ","End":"04:29.945","Text":"how does this A^n equal this."},{"Start":"04:29.945 ","End":"04:33.155","Text":"Let\u0027s just actually motivate this result."},{"Start":"04:33.155 ","End":"04:37.625","Text":"Let\u0027s say we did have a matrix that was diagonalizable, A,"},{"Start":"04:37.625 ","End":"04:39.935","Text":"which was PDP inverse,"},{"Start":"04:39.935 ","End":"04:46.950","Text":"then A^2, well that\u0027s just going to be this PDP 
inverse^2."},{"Start":"04:47.710 ","End":"04:55.235","Text":"PDP inverse multiplied by PDP inverse."},{"Start":"04:55.235 ","End":"04:59.024","Text":"Now, P inverse multiplied by P,"},{"Start":"04:59.024 ","End":"05:02.090","Text":"well that\u0027s just equal to the identity matrix."},{"Start":"05:02.090 ","End":"05:03.558","Text":"Then what are we left with,"},{"Start":"05:03.558 ","End":"05:05.960","Text":"well, we\u0027ve got PD,"},{"Start":"05:05.960 ","End":"05:12.615","Text":"the identity matrix, DP inverse."},{"Start":"05:12.615 ","End":"05:16.280","Text":"Any matrix multiplied by the identity matrix,"},{"Start":"05:16.280 ","End":"05:18.830","Text":"given that the dimensions are consistent,"},{"Start":"05:18.830 ","End":"05:20.690","Text":"is just equal to itself."},{"Start":"05:20.690 ","End":"05:29.715","Text":"We can actually just remove this I and then we\u0027ve just got PD^2 multiplied by P inverse."},{"Start":"05:29.715 ","End":"05:34.035","Text":"Then if we just extend this to A^n,"},{"Start":"05:34.035 ","End":"05:40.460","Text":"then what we get is we get PDP inverse,"},{"Start":"05:40.460 ","End":"05:44.585","Text":"and then we\u0027ve got another PDP inverse."},{"Start":"05:44.585 ","End":"05:49.880","Text":"Then we keep going and then we\u0027ve got another PDP inverse."},{"Start":"05:49.880 ","End":"05:51.890","Text":"This is done n times."},{"Start":"05:51.890 ","End":"05:58.135","Text":"Then what we find is all these P inverse multiplied by P,"},{"Start":"05:58.135 ","End":"06:00.410","Text":"that will just give us a bunch of I\u0027s."},{"Start":"06:00.410 ","End":"06:08.400","Text":"Then essentially it just collapses down to PD^n P inverse."},{"Start":"06:08.400 ","End":"06:11.660","Text":"Essentially what our goal is,"},{"Start":"06:11.660 ","End":"06:16.010","Text":"is if we know that a matrix A is diagonalizable to"},{"Start":"06:16.010 ","End":"06:20.810","Text":"find these matrices P and D. 
That will make our life so"},{"Start":"06:20.810 ","End":"06:25.340","Text":"much easier when we want to raise A to the power of a large"},{"Start":"06:25.340 ","End":"06:30.225","Text":"number n. We have our relation."},{"Start":"06:30.225 ","End":"06:35.295","Text":"A^n is equal to PD^n P inverse."},{"Start":"06:35.295 ","End":"06:37.605","Text":"How do we find P and D?"},{"Start":"06:37.605 ","End":"06:44.810","Text":"Well, we have discussed this in a previous video on matrix diagonalization."},{"Start":"06:44.810 ","End":"06:47.195","Text":"We will just recap."},{"Start":"06:47.195 ","End":"06:52.085","Text":"The diagonal entries of this matrix D,"},{"Start":"06:52.085 ","End":"06:55.510","Text":"they\u0027re just the eigenvalues of A."},{"Start":"06:55.510 ","End":"07:01.160","Text":"These can be solved or found by solving the characteristic polynomial,"},{"Start":"07:01.160 ","End":"07:08.315","Text":"which is the determinant of A minus Lambda I is equal to 0,"},{"Start":"07:08.315 ","End":"07:13.965","Text":"where Lambda are the eigenvalues."},{"Start":"07:13.965 ","End":"07:19.380","Text":"If we just had a simple maybe 3^3 matrix,"},{"Start":"07:19.380 ","End":"07:21.990","Text":"which gave us eigenvalues,"},{"Start":"07:21.990 ","End":"07:26.190","Text":"say Lambda_1 is equal to a,"},{"Start":"07:26.190 ","End":"07:28.815","Text":"Lambda_2 is equal to b,"},{"Start":"07:28.815 ","End":"07:32.180","Text":"and Lambda_3 is equal to c,"},{"Start":"07:32.180 ","End":"07:33.680","Text":"where we\u0027ll just say that a, b,"},{"Start":"07:33.680 ","End":"07:35.360","Text":"and c are distinct."},{"Start":"07:35.360 ","End":"07:43.265","Text":"Then our diagonal matrix D would just be the eigenvalues a,"},{"Start":"07:43.265 ","End":"07:47.495","Text":"b, c, and then zeros everywhere else,"},{"Start":"07:47.495 ","End":"07:50.975","Text":"as we said at the beginning of the video."},{"Start":"07:50.975 ","End":"07:54.210","Text":"Now, how do we find 
P?"},{"Start":"07:55.480 ","End":"08:02.620","Text":"The columns of P are formed using the corresponding eigenvectors of A."},{"Start":"08:02.620 ","End":"08:06.035","Text":"Now, remember how we find the eigenvectors."},{"Start":"08:06.035 ","End":"08:11.555","Text":"We just substitute in our eigenvalues and we see"},{"Start":"08:11.555 ","End":"08:17.990","Text":"what eigenvectors we get from them using the typical equation which you may have seen,"},{"Start":"08:17.990 ","End":"08:20.960","Text":"Av is equal to Lambda_v,"},{"Start":"08:20.960 ","End":"08:24.770","Text":"where these v\u0027s here are the eigenvectors."},{"Start":"08:24.770 ","End":"08:29.330","Text":"We would have done a lot of examples of those in previous videos."},{"Start":"08:29.330 ","End":"08:33.050","Text":"Now, there\u0027s a really important thing to note here."},{"Start":"08:33.050 ","End":"08:39.290","Text":"That\u0027s the order of the eigenvector columns in P must correspond to"},{"Start":"08:39.290 ","End":"08:42.440","Text":"the order of the eigenvalues that are placed in"},{"Start":"08:42.440 ","End":"08:46.910","Text":"this matrix D. 
What you couldn\u0027t do is have,"},{"Start":"08:46.910 ","End":"08:50.000","Text":"say, the eigenvalue Lambda_1,"},{"Start":"08:50.000 ","End":"08:55.560","Text":"which was in the first column of D or in this first entry in the diagonal."},{"Start":"08:55.560 ","End":"08:58.205","Text":"Then in your matrix for P,"},{"Start":"08:58.205 ","End":"09:04.190","Text":"have the eigenvalue in the first column that corresponded to the eigenvector from b."},{"Start":"09:04.190 ","End":"09:10.190","Text":"Now, it\u0027s just good practice to make sure that when you work out your eigenvectors,"},{"Start":"09:10.190 ","End":"09:14.300","Text":"you just do them in the order that you work them out here."},{"Start":"09:14.300 ","End":"09:16.295","Text":"As long as it\u0027s consistent,"},{"Start":"09:16.295 ","End":"09:24.650","Text":"then you should arrive at the correct values of P and the diagonal matrix"},{"Start":"09:24.650 ","End":"09:34.675","Text":"D. Now we\u0027re going to suppose that we have a matrix A that is real and symmetric."},{"Start":"09:34.675 ","End":"09:39.940","Text":"In other words, the matrix A is equal to its transpose,"},{"Start":"09:39.940 ","End":"09:42.320","Text":"which will denote A^T."},{"Start":"09:43.380 ","End":"09:48.580","Text":"Now we can make use of the spectral theorem to determine P and"},{"Start":"09:48.580 ","End":"09:53.350","Text":"D. 
Although this time the process is slightly different."},{"Start":"09:53.350 ","End":"09:56.500","Text":"Now we\u0027re going to look at what we would do to determine P and"},{"Start":"09:56.500 ","End":"09:59.815","Text":"D if we have a real symmetric matrix."},{"Start":"09:59.815 ","End":"10:04.270","Text":"Let me just give an example of just 1 real symmetric matrix so you\u0027d fully"},{"Start":"10:04.270 ","End":"10:09.475","Text":"understand what we mean when we say the matrix A is equal to its transpose."},{"Start":"10:09.475 ","End":"10:11.610","Text":"If we had a matrix A,"},{"Start":"10:11.610 ","End":"10:13.595","Text":"which was equal to say,"},{"Start":"10:13.595 ","End":"10:17.315","Text":"and we\u0027re just going to do a 2^2 just so it\u0027s easy."},{"Start":"10:17.315 ","End":"10:22.310","Text":"We had say, 3 and this could be a 6,"},{"Start":"10:22.310 ","End":"10:25.820","Text":"and this could be a 6 and this could be say, minus 2."},{"Start":"10:25.820 ","End":"10:28.049","Text":"Well, A transpose,"},{"Start":"10:28.049 ","End":"10:32.120","Text":"remember what we do when we work out the transpose of a matrix,"},{"Start":"10:32.120 ","End":"10:36.910","Text":"we just swap the order of the i, j entry."},{"Start":"10:37.010 ","End":"10:41.615","Text":"For example, if we take this 3 here, well,"},{"Start":"10:41.615 ","End":"10:45.675","Text":"that\u0027s just going to be the first row first column."},{"Start":"10:45.675 ","End":"10:48.555","Text":"That maps to the first row, first column."},{"Start":"10:48.555 ","End":"10:50.220","Text":"This would just be a 3."},{"Start":"10:50.220 ","End":"10:52.310","Text":"Now, similarly,"},{"Start":"10:52.310 ","End":"10:56.090","Text":"the minus 2, that\u0027s the second row second column."},{"Start":"10:56.090 ","End":"10:57.950","Text":"That would just map to itself again."},{"Start":"10:57.950 ","End":"10:59.855","Text":"We\u0027ve just got a minus 2 here."},{"Start":"10:59.855 
","End":"11:01.565","Text":"Now this 6 here,"},{"Start":"11:01.565 ","End":"11:05.130","Text":"this is the first row second column."},{"Start":"11:05.130 ","End":"11:09.725","Text":"That would map to the second row first column,"},{"Start":"11:09.725 ","End":"11:11.795","Text":"which would go here."},{"Start":"11:11.795 ","End":"11:13.190","Text":"For the same reason,"},{"Start":"11:13.190 ","End":"11:15.900","Text":"this 6 would map here."},{"Start":"11:15.900 ","End":"11:18.645","Text":"These are real symmetric matrices."},{"Start":"11:18.645 ","End":"11:21.185","Text":"Now we\u0027re going to see how the process"},{"Start":"11:21.185 ","End":"11:26.350","Text":"differs when we want to determine P and D in this case."},{"Start":"11:26.350 ","End":"11:32.615","Text":"There are some similarities between the general case that we looked up before."},{"Start":"11:32.615 ","End":"11:37.940","Text":"That\u0027s the diagonal entries of D are the eigenvalues of A,"},{"Start":"11:37.940 ","End":"11:41.120","Text":"which was the same thing that we had before."},{"Start":"11:41.120 ","End":"11:46.295","Text":"We will also calculate the eigenvectors in the same way."},{"Start":"11:46.295 ","End":"11:51.770","Text":"Now, here\u0027s where the difference comes for each eigenspace,"},{"Start":"11:51.770 ","End":"11:55.850","Text":"which really is just a collection of the eigenvectors."},{"Start":"11:55.850 ","End":"11:59.195","Text":"We need to use the Gram-Schmidt,"},{"Start":"11:59.195 ","End":"12:00.850","Text":"which is an algorithm,"},{"Start":"12:00.850 ","End":"12:03.245","Text":"to find an orthonormal basis."},{"Start":"12:03.245 ","End":"12:08.300","Text":"Then the corresponding eigenvectors or vectors will form"},{"Start":"12:08.300 ","End":"12:14.915","Text":"an orthogonal matrix P. Now since P is an orthogonal matrix,"},{"Start":"12:14.915 ","End":"12:20.370","Text":"P inverse is equal to the transpose of P. 
Remember"},{"Start":"12:20.370 ","End":"12:26.270","Text":"before where we had A was equal to PDP inverse."},{"Start":"12:26.270 ","End":"12:29.810","Text":"Now, because P inverse is equal to the transpose of P,"},{"Start":"12:29.810 ","End":"12:33.035","Text":"we can replace that with P transpose,"},{"Start":"12:33.035 ","End":"12:40.040","Text":"in which case A^n now is just PD^n P transpose."},{"Start":"12:40.040 ","End":"12:43.220","Text":"Now, this might seem like a lot to take in,"},{"Start":"12:43.220 ","End":"12:47.180","Text":"but don\u0027t worry at this point because we\u0027re going to do plenty of"},{"Start":"12:47.180 ","End":"12:53.120","Text":"examples of seeing how we can use this whole process to"},{"Start":"12:53.120 ","End":"12:56.240","Text":"work out the matrices P and D."},{"Start":"12:56.240 ","End":"12:59.750","Text":"The Gram-Schmidt algorithm really is"},{"Start":"12:59.750 ","End":"13:03.890","Text":"just a way to find this orthonormal basis that we\u0027re looking for."},{"Start":"13:03.890 ","End":"13:08.555","Text":"We will see exactly what these things mean in the later videos."},{"Start":"13:08.555 ","End":"13:14.135","Text":"But hopefully that\u0027s a good introduction to the spectral theorem for"},{"Start":"13:14.135 ","End":"13:18.500","Text":"real symmetric matrices and covers some of the key things"},{"Start":"13:18.500 ","End":"13:23.840","Text":"that we\u0027ll need to do to work out these matrices P and D,"},{"Start":"13:23.840 ","End":"13:27.560","Text":"which will make our computation a lot easier for some of"},{"Start":"13:27.560 ","End":"13:32.520","Text":"the fields that we mentioned right at the start of the video."}],"ID":30947},{"Watched":false,"Name":"Example","Duration":"12m 
38s","ChapterTopicVideoID":29342,"CourseChapterTopicPlaylistID":294460,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.170 ","End":"00:03.270","Text":"As promised in the previous video,"},{"Start":"00:03.270 ","End":"00:09.630","Text":"we\u0027re now going to be finding the matrices P and D that were necessary"},{"Start":"00:09.630 ","End":"00:16.755","Text":"for us to calculate or re-express A in the following form."},{"Start":"00:16.755 ","End":"00:19.965","Text":"Let A be the following real,"},{"Start":"00:19.965 ","End":"00:22.440","Text":"symmetric matrix, 5,"},{"Start":"00:22.440 ","End":"00:24.285","Text":"2, 2, 8."},{"Start":"00:24.285 ","End":"00:31.530","Text":"We need to find a suitable invertible matrix P and diagonal matrix D,"},{"Start":"00:31.530 ","End":"00:36.915","Text":"such that A can be expressed in this form PDP inverse."},{"Start":"00:36.915 ","End":"00:40.760","Text":"Remember what we needed for P and D,"},{"Start":"00:40.760 ","End":"00:47.095","Text":"we needed to work out the eigenvalues and the associated eigenvectors of A."},{"Start":"00:47.095 ","End":"00:52.685","Text":"Let\u0027s start by working out the eigenvalues. How do we do that?"},{"Start":"00:52.685 ","End":"00:57.125","Text":"Remember, we just use the characteristic polynomial."},{"Start":"00:57.125 ","End":"01:05.565","Text":"That\u0027s just the determinant of A minus Lambda I."},{"Start":"01:05.565 ","End":"01:09.470","Text":"This is the characteristic polynomial and then we just turn this into"},{"Start":"01:09.470 ","End":"01:14.080","Text":"the characteristic equation by setting this = 0."},{"Start":"01:14.080 ","End":"01:18.320","Text":"Now, what is A minus Lambda I?"},{"Start":"01:18.320 ","End":"01:26.480","Text":"Well, that\u0027s just going to be this matrix A but we subtract Lambda on the 
diagonal entries."},{"Start":"01:26.480 ","End":"01:32.475","Text":"We just get, 5 minus Lambda 2,"},{"Start":"01:32.475 ","End":"01:36.255","Text":"2 and then 8 minus Lambda."},{"Start":"01:36.255 ","End":"01:41.500","Text":"This is our matrix A minus Lambda I."},{"Start":"01:41.500 ","End":"01:44.945","Text":"Now we need to work out what the determinant of this is."},{"Start":"01:44.945 ","End":"01:51.695","Text":"Remember for a 2 by 2 matrix that\u0027s just AD minus BC."},{"Start":"01:51.695 ","End":"01:55.205","Text":"If we had a general matrix a,"},{"Start":"01:55.205 ","End":"01:58.605","Text":"b, c, d like that."},{"Start":"01:58.605 ","End":"02:01.160","Text":"If we work out the determinant of this,"},{"Start":"02:01.160 ","End":"02:09.385","Text":"then we\u0027re just going to get 5 minus Lambda multiplied by 8 minus Lambda."},{"Start":"02:09.385 ","End":"02:12.735","Text":"Then we\u0027re just taking away 2 times 2."},{"Start":"02:12.735 ","End":"02:14.414","Text":"We get minus 4,"},{"Start":"02:14.414 ","End":"02:18.245","Text":"then we set that = 0."},{"Start":"02:18.245 ","End":"02:20.870","Text":"Now, if we expand this out,"},{"Start":"02:20.870 ","End":"02:27.800","Text":"then what we get is the following equation for our eigenvalues Lambda."},{"Start":"02:27.800 ","End":"02:31.895","Text":"Just to skip maybe one or two lines of the algebra,"},{"Start":"02:31.895 ","End":"02:36.485","Text":"you should arrive at Lambda^2 minus"},{"Start":"02:36.485 ","End":"02:43.055","Text":"13 Lambda plus 36, which = 0."},{"Start":"02:43.055 ","End":"02:48.575","Text":"Now this is quite a nice quadratic because it actually does factorize."},{"Start":"02:48.575 ","End":"02:52.940","Text":"What we get is we get Lambda minus 9,"},{"Start":"02:52.940 ","End":"02:58.320","Text":"Lambda minus 4, which = 0."},{"Start":"02:58.320 ","End":"03:01.670","Text":"Then that tells us that we\u0027ve got 2 values of"},{"Start":"03:01.670 ","End":"03:05.255","Text":"Lambda which are 
distinct and they are lambda,"},{"Start":"03:05.255 ","End":"03:12.345","Text":"let\u0027s just say Lambda 1 which = 4 and Lambda 2 which is equal to 9."},{"Start":"03:12.345 ","End":"03:14.870","Text":"Now we\u0027ve got the eigenvalues,"},{"Start":"03:14.870 ","End":"03:20.915","Text":"we need to work out the associated or corresponding eigenvectors."},{"Start":"03:20.915 ","End":"03:23.395","Text":"Let\u0027s do that now."},{"Start":"03:23.395 ","End":"03:26.415","Text":"Remember this was our vector A,"},{"Start":"03:26.415 ","End":"03:28.395","Text":"these were our eigenvalues."},{"Start":"03:28.395 ","End":"03:30.845","Text":"How do we work out the eigenvectors?"},{"Start":"03:30.845 ","End":"03:32.795","Text":"Well, we just use the eigenvector,"},{"Start":"03:32.795 ","End":"03:35.435","Text":"eigenvalue equation, the general one,"},{"Start":"03:35.435 ","End":"03:39.590","Text":"which is just Av ="},{"Start":"03:39.590 ","End":"03:47.165","Text":"Lambda v. The Vs here are our eigenvectors that we want to work out."},{"Start":"03:47.165 ","End":"03:50.450","Text":"4 Lambda = 4,"},{"Start":"03:50.450 ","End":"03:53.620","Text":"or our first value of Lambda, Lambda 1."},{"Start":"03:53.620 ","End":"03:57.585","Text":"For Lambda 1 = 4."},{"Start":"03:57.585 ","End":"03:58.800","Text":"Well, we know what A is,"},{"Start":"03:58.800 ","End":"04:01.140","Text":"that\u0027s just 5, 2, 2, 8."},{"Start":"04:01.140 ","End":"04:02.595","Text":"We\u0027ve got 5,"},{"Start":"04:02.595 ","End":"04:05.835","Text":"2, 2, 8."},{"Start":"04:05.835 ","End":"04:08.765","Text":"Then we\u0027re multiplying that by our eigenvector,"},{"Start":"04:08.765 ","End":"04:11.985","Text":"which we\u0027ll just call V_1, V_2,"},{"Start":"04:11.985 ","End":"04:19.515","Text":"just a general vector and that\u0027s = Lambda V. 
This is going to be 4 and then once again,"},{"Start":"04:19.515 ","End":"04:22.865","Text":"we\u0027ve got V_1, V_2 here."},{"Start":"04:22.865 ","End":"04:26.510","Text":"Now, if we just expand this out, well,"},{"Start":"04:26.510 ","End":"04:35.570","Text":"then that tells us that 5V_1 plus 2V_2 = 4V_1."},{"Start":"04:35.570 ","End":"04:41.150","Text":"That comes from multiplying this first row with this vector."},{"Start":"04:41.150 ","End":"04:44.150","Text":"Then we get from the second row,"},{"Start":"04:44.150 ","End":"04:52.755","Text":"we get 2V_1 plus 8V_2 = 4V_2."},{"Start":"04:52.755 ","End":"04:55.490","Text":"Now, what do these two things tell us?"},{"Start":"04:55.490 ","End":"04:58.430","Text":"Well, if we just simplify this a little bit,"},{"Start":"04:58.430 ","End":"05:02.380","Text":"we get V_1 = minus 2V_2."},{"Start":"05:02.380 ","End":"05:10.997","Text":"The eigenvector that corresponds to this eigenvalue of 4,"},{"Start":"05:10.997 ","End":"05:13.861","Text":"let\u0027s just call it V of"},{"Start":"05:13.861 ","End":"05:18.695","Text":"Lambda 1, so we know that this is the eigenvector corresponding to the eigenvalue."},{"Start":"05:18.695 ","End":"05:23.145","Text":"Well, that\u0027s just equal to minus 2, 1."},{"Start":"05:23.145 ","End":"05:27.285","Text":"Because remember V_1 = minus 2V_2."},{"Start":"05:27.285 ","End":"05:29.680","Text":"If we set V_2 = 1,"},{"Start":"05:29.680 ","End":"05:33.710","Text":"then this first entry is just going to be minus 2."},{"Start":"05:33.710 ","End":"05:37.250","Text":"Now, we apply exactly the same procedure to work out"},{"Start":"05:37.250 ","End":"05:41.275","Text":"the eigenvector for Lambda 2, which is 9."},{"Start":"05:41.275 ","End":"05:43.675","Text":"Lambda 2 which is = 9."},{"Start":"05:43.675 ","End":"05:47.870","Text":"Well, again, we\u0027re just doing Av = Lambda v so 5,"},{"Start":"05:47.870 ","End":"05:49.040","Text":"2, 2,"},{"Start":"05:49.040 ","End":"05:52.355","Text":"8 multiplied 
by V_1,"},{"Start":"05:52.355 ","End":"05:59.250","Text":"V_2 = 9, V_1, V_2."},{"Start":"05:59.250 ","End":"06:02.520","Text":"Then again exactly the same,"},{"Start":"06:02.520 ","End":"06:06.640","Text":"we get 5V_1"},{"Start":"06:06.770 ","End":"06:14.175","Text":"plus 2V_2 = 9V_1."},{"Start":"06:14.175 ","End":"06:22.330","Text":"We get 2V_1 plus 8V_2 is = 9V_2."},{"Start":"06:22.330 ","End":"06:24.500","Text":"What do these two things tell us?"},{"Start":"06:24.500 ","End":"06:32.340","Text":"Well, then this tells us that V_1 = 1/2 V_2."},{"Start":"06:33.140 ","End":"06:41.950","Text":"This gives us our second eigenvector and we will just denote that as V Lambda 2."},{"Start":"06:42.560 ","End":"06:48.520","Text":"Remember, if we say V_2 = 1,"},{"Start":"06:48.520 ","End":"06:53.135","Text":"then this is just going to be 1/2 because V_1 is a 1/2 V_2."},{"Start":"06:53.135 ","End":"06:55.085","Text":"Or if you like,"},{"Start":"06:55.085 ","End":"06:58.505","Text":"we can just multiply this by 2 and then we get 1 here,"},{"Start":"06:58.505 ","End":"07:00.500","Text":"or we\u0027ll keep the same color."},{"Start":"07:00.500 ","End":"07:04.065","Text":"We get a 1 here and we\u0027ll get a 2 here."},{"Start":"07:04.065 ","End":"07:10.280","Text":"These are our eigenvectors associated with their corresponding eigenvalues."},{"Start":"07:10.280 ","End":"07:18.825","Text":"But what we need to do now is obtain an orthogonal or an orthonormal basis."},{"Start":"07:18.825 ","End":"07:26.390","Text":"What we have here is we have the orthogonal basis, let\u0027s just write that."},{"Start":"07:26.390 ","End":"07:29.930","Text":"Orthogonal basis, and this is"},{"Start":"07:29.930 ","End":"07:34.910","Text":"just the set of the two eigenvectors that we just worked out."},{"Start":"07:34.910 ","End":"07:40.365","Text":"Minus 2, 1 and 1, 2."},{"Start":"07:40.365 ","End":"07:43.130","Text":"These are orthogonal, because remember,"},{"Start":"07:43.130 
","End":"07:46.310","Text":"orthogonal vectors, if you take the top product of them,"},{"Start":"07:46.310 ","End":"07:47.915","Text":"that gives you 0."},{"Start":"07:47.915 ","End":"07:51.695","Text":"We\u0027ve got minus 2 plus 2=0."},{"Start":"07:51.695 ","End":"07:54.215","Text":"We\u0027re happy with that. It\u0027s an orthogonal basis."},{"Start":"07:54.215 ","End":"07:59.910","Text":"What we need to do now is turn this into an orthonormal basis."},{"Start":"07:59.910 ","End":"08:05.915","Text":"What we mean by orthonormal is that when we take the magnitude of these two vectors,"},{"Start":"08:05.915 ","End":"08:07.610","Text":"it gives us 1."},{"Start":"08:07.610 ","End":"08:08.911","Text":"At the moment,"},{"Start":"08:08.911 ","End":"08:11.375","Text":"if we take the magnitude of these vectors,"},{"Start":"08:11.375 ","End":"08:12.965","Text":"of both of them, actually,"},{"Start":"08:12.965 ","End":"08:15.860","Text":"it will give us the square root of 5."},{"Start":"08:15.860 ","End":"08:22.570","Text":"This can be seen by just doing the magnitude of minus 2, 1."},{"Start":"08:22.570 ","End":"08:25.505","Text":"Maybe I should just use different notation because"},{"Start":"08:25.505 ","End":"08:28.050","Text":"that might confuse it. 
No, that\u0027s fine."},{"Start":"08:28.050 ","End":"08:31.185","Text":"The magnitude of minus 2, 1,"},{"Start":"08:31.185 ","End":"08:38.480","Text":"well that\u0027s just equal to minus 2^2 plus 1^2 square rooted,"},{"Start":"08:38.480 ","End":"08:42.270","Text":"which is equal to the square root of 5."},{"Start":"08:42.270 ","End":"08:44.010","Text":"How do we normalize this?"},{"Start":"08:44.010 ","End":"08:48.305","Text":"Well, we just divide both components by the square root of 5."},{"Start":"08:48.305 ","End":"08:51.290","Text":"We\u0027ll see what happens when we do that now."},{"Start":"08:51.290 ","End":"08:56.270","Text":"If we divide these two vectors by the square root of 5,"},{"Start":"08:56.270 ","End":"08:59.345","Text":"then what we get is we get this new set,"},{"Start":"08:59.345 ","End":"09:03.265","Text":"which is minus 2 over root 5,"},{"Start":"09:03.265 ","End":"09:06.045","Text":"1 over root 5,"},{"Start":"09:06.045 ","End":"09:08.700","Text":"and that\u0027s our first vector."},{"Start":"09:08.700 ","End":"09:16.740","Text":"Then our second vector is going to be 1 over root 5 and 2 over root 5,"},{"Start":"09:16.740 ","End":"09:18.550","Text":"again over the square root of 5."},{"Start":"09:18.550 ","End":"09:21.755","Text":"Now, this isn\u0027t just an orthogonal basis."},{"Start":"09:21.755 ","End":"09:28.736","Text":"This is what we call an orthonormal basis."},{"Start":"09:28.736 ","End":"09:30.185","Text":"When we take the magnitude of them,"},{"Start":"09:30.185 ","End":"09:31.865","Text":"it gives us 1."},{"Start":"09:31.865 ","End":"09:41.720","Text":"Our matrix P, which we were trying to find, is just the columns of these 2 eigenvectors."},{"Start":"09:41.720 ","End":"09:44.780","Text":"And remember what we said in the previous video,"},{"Start":"09:44.780 ","End":"09:48.440","Text":"it\u0027s very important that we match the order of"},{"Start":"09:48.440 ","End":"09:52.640","Text":"the columns that correspond to the eigenvalue order 
for"},{"Start":"09:52.640 ","End":"10:01.680","Text":"our matrix D. So P is just = minus 2 over the square root of 5,"},{"Start":"10:01.680 ","End":"10:08.145","Text":"1 over the square root of 5 and then we\u0027ve got a 1 over the square root of 5 here,"},{"Start":"10:08.145 ","End":"10:13.430","Text":"and then 2 over the square root of 5."},{"Start":"10:13.430 ","End":"10:17.190","Text":"Now remember our matrix A,"},{"Start":"10:17.200 ","End":"10:22.020","Text":"so A which = 5,"},{"Start":"10:22.020 ","End":"10:24.120","Text":"2, 2, 8."},{"Start":"10:24.120 ","End":"10:27.300","Text":"Remember, this is real and symmetric."},{"Start":"10:27.300 ","End":"10:33.050","Text":"Recall from the previous video that when we have an orthonormal basis for P,"},{"Start":"10:33.050 ","End":"10:39.665","Text":"well, that means that P inverse is just = P transpose."},{"Start":"10:39.665 ","End":"10:42.890","Text":"We can actually work out the inverse of P very easily,"},{"Start":"10:42.890 ","End":"10:46.500","Text":"given that P is an orthonormal basis."},{"Start":"10:46.500 ","End":"10:49.295","Text":"What are the components of the transpose?"},{"Start":"10:49.295 ","End":"10:54.485","Text":"Well, this top left and bottom right one remain the same because remember,"},{"Start":"10:54.485 ","End":"10:57.900","Text":"this was just the 1,"},{"Start":"10:57.900 ","End":"11:01.095","Text":"1 component and this was the 2, 2 component."},{"Start":"11:01.095 ","End":"11:02.730","Text":"These two stay the same."},{"Start":"11:02.730 ","End":"11:05.850","Text":"We\u0027ve got minus 2 over root 5."},{"Start":"11:05.850 ","End":"11:10.885","Text":"We\u0027ve got a 2 over root 5 here and"},{"Start":"11:10.885 ","End":"11:16.805","Text":"the only components that change are the 2 that we have here."},{"Start":"11:16.805 ","End":"11:18.710","Text":"But because they\u0027re actually the same, well,"},{"Start":"11:18.710 ","End":"11:21.860","Text":"this matrix actually just remains the 
same."},{"Start":"11:21.860 ","End":"11:24.955","Text":"We\u0027ve got a 1 over root 5 here,"},{"Start":"11:24.955 ","End":"11:29.190","Text":"and we\u0027ve got a 1 over root 5 here as well."},{"Start":"11:29.190 ","End":"11:32.539","Text":"Here\u0027s our P, here\u0027s our P inverse."},{"Start":"11:32.539 ","End":"11:36.335","Text":"The only other thing we need is our matrix D. But remember,"},{"Start":"11:36.335 ","End":"11:41.880","Text":"D is simply the eigenvalues along the leading diagonal."},{"Start":"11:41.880 ","End":"11:45.375","Text":"If we just recall our Lambda 1 was = 4,"},{"Start":"11:45.375 ","End":"11:47.925","Text":"our Lambda 2 = 9."},{"Start":"11:47.925 ","End":"11:50.775","Text":"Our matrix D is just 4,"},{"Start":"11:50.775 ","End":"11:53.620","Text":"0, 0, 9."},{"Start":"11:53.620 ","End":"11:56.075","Text":"Now, if you\u0027d like, you can check,"},{"Start":"11:56.075 ","End":"12:06.860","Text":"if you want to do A is = P D P inverse and sub in these values for P,"},{"Start":"12:06.860 ","End":"12:08.180","Text":"D, and P inverse."},{"Start":"12:08.180 ","End":"12:13.880","Text":"But remember, P inverse now is actually P transpose."},{"Start":"12:13.880 ","End":"12:15.245","Text":"If you sub that in,"},{"Start":"12:15.245 ","End":"12:19.265","Text":"then you can see that we actually do get the same value for A."},{"Start":"12:19.265 ","End":"12:21.530","Text":"Then if we want to do A^n,"},{"Start":"12:21.530 ","End":"12:24.380","Text":"which was our big goal, well,"},{"Start":"12:24.380 ","End":"12:29.805","Text":"that\u0027s just going to be PD^n, P transpose."},{"Start":"12:29.805 ","End":"12:32.660","Text":"We\u0027ll look at a couple more examples in"},{"Start":"12:32.660 ","End":"12:38.580","Text":"the later videos and we\u0027ll just build up to some more complex ones as well."}],"ID":30948},{"Watched":false,"Name":"Motivation","Duration":"8m 
2s","ChapterTopicVideoID":29343,"CourseChapterTopicPlaylistID":294460,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:07.155","Text":"Hi. Welcome back to the third video on our spectral theorem playlist."},{"Start":"00:07.155 ","End":"00:12.240","Text":"Just to recap, what we\u0027ve seen so far is we\u0027ve briefly"},{"Start":"00:12.240 ","End":"00:17.850","Text":"mentioned maybe a couple uses or where the spectral theorem may be useful,"},{"Start":"00:17.850 ","End":"00:20.250","Text":"and in the previous video,"},{"Start":"00:20.250 ","End":"00:25.980","Text":"we saw how to determine the matrices P and D,"},{"Start":"00:25.980 ","End":"00:29.400","Text":"if a matrix A is diagonalizable."},{"Start":"00:29.400 ","End":"00:32.460","Text":"Remember we said if A is diagonalizable,"},{"Start":"00:32.460 ","End":"00:34.980","Text":"then we can express it in this form,"},{"Start":"00:34.980 ","End":"00:44.130","Text":"PDP inverse, where P is an invertible matrix and D is a diagonal matrix."},{"Start":"00:44.130 ","End":"00:47.645","Text":"Recall to find what P and D were,"},{"Start":"00:47.645 ","End":"00:54.425","Text":"we had to use the eigenvalues and eigenvectors from this matrix A."},{"Start":"00:54.425 ","End":"00:58.280","Text":"What we\u0027re going to do in this video is just maybe"},{"Start":"00:58.280 ","End":"01:02.300","Text":"further motivate why we might actually use the spectral theorem."},{"Start":"01:02.300 ","End":"01:04.385","Text":"We did touch on this briefly,"},{"Start":"01:04.385 ","End":"01:07.580","Text":"but we\u0027re just going to go into slightly more depth,"},{"Start":"01:07.580 ","End":"01:13.510","Text":"or just some areas of mathematics where the spectral theorem is useful."},{"Start":"01:13.510 ","End":"01:19.385","Text":"The first of these is to determine 
if a multivariate function,"},{"Start":"01:19.385 ","End":"01:23.580","Text":"say f(x, y), so that\u0027s 2 variables,"},{"Start":"01:23.580 ","End":"01:28.985","Text":"has a local maximum or minimum at a critical point."},{"Start":"01:28.985 ","End":"01:31.899","Text":"How do we do this?"},{"Start":"01:31.899 ","End":"01:37.050","Text":"Well, if the Taylor Series of a 2 variable function f x,"},{"Start":"01:37.050 ","End":"01:38.355","Text":"y is this,"},{"Start":"01:38.355 ","End":"01:44.910","Text":"so this is evaluated at a point a, b."},{"Start":"01:44.910 ","End":"01:47.735","Text":"Well, then if we Taylor expand this function,"},{"Start":"01:47.735 ","End":"01:49.790","Text":"then we get f(x,"},{"Start":"01:49.790 ","End":"01:52.340","Text":"y) is equal to f(a,"},{"Start":"01:52.340 ","End":"01:56.900","Text":"b) plus, and this is the gradient function f(a,"},{"Start":"01:56.900 ","End":"02:03.655","Text":"b) multiplied by this coordinate x minus a, y minus b."},{"Start":"02:03.655 ","End":"02:06.290","Text":"We won\u0027t go into this in too much depth."},{"Start":"02:06.290 ","End":"02:08.765","Text":"But the gradient function is essentially just,"},{"Start":"02:08.765 ","End":"02:12.620","Text":"you take the partial derivative with respect to x and the partial"},{"Start":"02:12.620 ","End":"02:16.700","Text":"derivative with respect to y in the correct order,"},{"Start":"02:16.700 ","End":"02:20.435","Text":"and that creates a vector with 2 inputs."},{"Start":"02:20.435 ","End":"02:22.615","Text":"Let\u0027s just say this input is something,"},{"Start":"02:22.615 ","End":"02:25.435","Text":"we\u0027ll just call it dot and this plus."},{"Start":"02:25.435 ","End":"02:32.135","Text":"Then you would multiply this by this other vector which was x minus a, y minus b."},{"Start":"02:32.135 ","End":"02:35.975","Text":"So you can see it does work dimensionally."},{"Start":"02:35.975 ","End":"02:38.810","Text":"Then we add this q,"},{"Start":"02:38.810 
","End":"02:43.010","Text":"which is the quadratic form associated to the Hessian of f."},{"Start":"02:43.010 ","End":"02:48.485","Text":"Then the spectral theorem says that this quadratic form,"},{"Start":"02:48.485 ","End":"02:49.940","Text":"which is a matrix,"},{"Start":"02:49.940 ","End":"02:51.830","Text":"can be diagonalized,"},{"Start":"02:51.830 ","End":"02:58.300","Text":"and this is how one decides if f has a local max or min at a critical point."},{"Start":"02:58.300 ","End":"03:01.665","Text":"We\u0027re not going to go into this in too much detail,"},{"Start":"03:01.665 ","End":"03:04.370","Text":"but if you haven\u0027t seen what this Hessian is"},{"Start":"03:04.370 ","End":"03:08.015","Text":"before then will just briefly show what this is."},{"Start":"03:08.015 ","End":"03:11.490","Text":"The Hessian is really just a matrix,"},{"Start":"03:11.490 ","End":"03:13.535","Text":"and if you\u0027ve seen the Jacobian before,"},{"Start":"03:13.535 ","End":"03:15.155","Text":"well, it\u0027s quite similar to that."},{"Start":"03:15.155 ","End":"03:18.020","Text":"But rather than taking the first partial derivatives,"},{"Start":"03:18.020 ","End":"03:21.715","Text":"we actually take the second partial derivatives."},{"Start":"03:21.715 ","End":"03:27.110","Text":"If we had a function f(x1,"},{"Start":"03:27.110 ","End":"03:31.140","Text":"x2) and we may have n variables."},{"Start":"03:31.140 ","End":"03:37.580","Text":"The Hessian of this is just equal to the matrix where in"},{"Start":"03:37.580 ","End":"03:45.460","Text":"this first entry we have the d^2 f over dx1^2."},{"Start":"03:45.460 ","End":"03:52.905","Text":"The second entry is d^2 f over dx1, dx2."},{"Start":"03:52.905 ","End":"03:58.250","Text":"How we build up this Hessian is basically you look at the i, j,"},{"Start":"03:58.250 ","End":"04:04.515","Text":"f entry of the matrix and that\u0027s what your partial derivatives will be."},{"Start":"04:04.515 ","End":"04:06.500","Text":"If we go 
all the way to the end,"},{"Start":"04:06.500 ","End":"04:12.505","Text":"then we\u0027ll say this is going to be d^2 f over dx1,"},{"Start":"04:12.505 ","End":"04:20.385","Text":"dxn, because this component right here is the 1 nth component."},{"Start":"04:20.385 ","End":"04:23.885","Text":"What we mean by that, it\u0027s the first row, the nth column."},{"Start":"04:23.885 ","End":"04:25.350","Text":"You can build this up,"},{"Start":"04:25.350 ","End":"04:32.370","Text":"and then this final entry in this bottom right corner would be d^2 f and then dxn^2."},{"Start":"04:32.650 ","End":"04:36.455","Text":"So that\u0027s what we mean by the Hessian,"},{"Start":"04:36.455 ","End":"04:40.204","Text":"and this is how the spectral theorem can be useful here."},{"Start":"04:40.204 ","End":"04:44.615","Text":"We\u0027re going to look at another application of the spectral theorem."},{"Start":"04:44.615 ","End":"04:50.720","Text":"The next thing we\u0027ll look at is covariance matrices."},{"Start":"04:50.720 ","End":"04:55.115","Text":"If X is a vector valued random variable with"},{"Start":"04:55.115 ","End":"05:00.305","Text":"identically distributed but not necessarily independent components,"},{"Start":"05:00.305 ","End":"05:06.465","Text":"then the covariance of X_i and X_j is symmetric,"},{"Start":"05:06.465 ","End":"05:10.190","Text":"and the fact it can be diagonalized says that there is a change of"},{"Start":"05:10.190 ","End":"05:15.590","Text":"coordinates which makes the components of x uncorrelated."},{"Start":"05:15.590 ","End":"05:24.880","Text":"Now this is useful for normal random variables where uncorrelated implies independent,"},{"Start":"05:24.880 ","End":"05:31.300","Text":"and the conclusion is that a vector valued random variable whose components are Gaussian,"},{"Start":"05:31.300 ","End":"05:32.890","Text":"which just means normal,"},{"Start":"05:32.890 ","End":"05:36.310","Text":"can be transformed into a vector valued 
random"},{"Start":"05:36.310 ","End":"05:41.760","Text":"variable whose components are now iid Gaussian."},{"Start":"05:41.760 ","End":"05:51.775","Text":"They\u0027re actually now independent and they\u0027re identically distributed."},{"Start":"05:51.775 ","End":"05:58.240","Text":"Why this is useful is it just makes some of the calculations a lot easier."},{"Start":"05:58.240 ","End":"06:06.955","Text":"This is useful in areas of maybe spatial statistics, spatial-temporal."},{"Start":"06:06.955 ","End":"06:12.480","Text":"This just means we\u0027re considering time as well."},{"Start":"06:12.480 ","End":"06:17.200","Text":"These are things that you\u0027d be looking at if you were"},{"Start":"06:17.200 ","End":"06:23.510","Text":"assessing some data points on a grid or within a physical area."},{"Start":"06:23.510 ","End":"06:24.950","Text":"But as I said,"},{"Start":"06:24.950 ","End":"06:26.930","Text":"we\u0027re not going to go into too much detail."},{"Start":"06:26.930 ","End":"06:32.585","Text":"We just want to motivate why we actually use the spectral theorem."},{"Start":"06:32.585 ","End":"06:38.485","Text":"Just as a final thing that we may use the spectral theorem for,"},{"Start":"06:38.485 ","End":"06:44.420","Text":"is in this thing called principal components analysis."},{"Start":"06:44.420 ","End":"06:51.520","Text":"This says that if we can sort the eigenvalues in a decreasing order,"},{"Start":"06:51.520 ","End":"06:55.500","Text":"Lambda 1 is greater than or equal to Lambda 2,"},{"Start":"06:55.500 ","End":"06:58.910","Text":"so we\u0027ve just got this ordering process."},{"Start":"06:58.910 ","End":"07:02.480","Text":"Then the spectral theorem can be applied to"},{"Start":"07:02.480 ","End":"07:06.200","Text":"reduce the dimension of the data set so as to choose the"},{"Start":"07:06.200 ","End":"07:13.490","Text":"most important d data points out of the n data points where d is less than n. 
Again,"},{"Start":"07:13.490 ","End":"07:15.620","Text":"this is quite a lot to take in."},{"Start":"07:15.620 ","End":"07:20.230","Text":"But really what we\u0027re saying is if we have a lot of data points,"},{"Start":"07:20.230 ","End":"07:26.720","Text":"then we can reduce the computation time or increase the computation efficiency by"},{"Start":"07:26.720 ","End":"07:33.970","Text":"just picking out the data that aligns most with the conclusions we\u0027re trying to draw."},{"Start":"07:33.970 ","End":"07:41.825","Text":"Again, this is useful in all sorts of statistics or analysis procedures."},{"Start":"07:41.825 ","End":"07:43.850","Text":"Now, in the following video,"},{"Start":"07:43.850 ","End":"07:49.220","Text":"we\u0027re actually going to formally state what the spectral theorem is,"},{"Start":"07:49.220 ","End":"07:51.080","Text":"and then we\u0027re going to prove it."},{"Start":"07:51.080 ","End":"07:55.775","Text":"Then from there, it will be a bit more obvious to see"},{"Start":"07:55.775 ","End":"07:59.090","Text":"why these applications are useful and"},{"Start":"07:59.090 ","End":"08:03.150","Text":"how they can make our lives just a little bit easier."}],"ID":30949},{"Watched":false,"Name":"Exercise 1","Duration":"5m 28s","ChapterTopicVideoID":29344,"CourseChapterTopicPlaylistID":294460,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.440","Text":"In this video we\u0027re going to be looking at an example question;"},{"Start":"00:04.440 ","End":"00:06.780","Text":"it\u0027s quite similar to what we did before."},{"Start":"00:06.780 ","End":"00:10.020","Text":"Just to recap some of the ideas that we\u0027re going to be"},{"Start":"00:10.020 ","End":"00:14.625","Text":"using for maybe some of the harder questions to come."},{"Start":"00:14.625 
","End":"00:19.200","Text":"Here we have a matrix A and we need to use"},{"Start":"00:19.200 ","End":"00:26.310","Text":"the spectral theorem to diagonalize A and to find an orthonormal basis of R^2."},{"Start":"00:26.310 ","End":"00:30.180","Text":"Remember what was it that we needed to find to form"},{"Start":"00:30.180 ","End":"00:34.125","Text":"this orthonormal basis and to diagonalize A."},{"Start":"00:34.125 ","End":"00:40.230","Text":"Well, the first thing is we needed to solve the characteristic equation,"},{"Start":"00:40.230 ","End":"00:42.270","Text":"which is this thing here,"},{"Start":"00:42.270 ","End":"00:49.410","Text":"which is the determinant of A - Lambda I = 0."},{"Start":"00:49.410 ","End":"00:51.765","Text":"Now A - Lambda I,"},{"Start":"00:51.765 ","End":"00:53.430","Text":"that\u0027s just this matrix,"},{"Start":"00:53.430 ","End":"00:59.330","Text":"and then we\u0027re taking away Lambda multiplied by the identity matrix."},{"Start":"00:59.330 ","End":"01:04.226","Text":"Remember the identity matrix for 2 by 2 is just 1,"},{"Start":"01:04.226 ","End":"01:05.970","Text":"0, 0, 1."},{"Start":"01:05.970 ","End":"01:09.255","Text":"If we\u0027re taking away Lambda, lots of that,"},{"Start":"01:09.255 ","End":"01:13.940","Text":"then our matrix A - Lambda I just becomes this thing here."},{"Start":"01:13.940 ","End":"01:20.355","Text":"We\u0027ve got 4 - Lambda - 3 and 4 - Lambda."},{"Start":"01:20.355 ","End":"01:23.225","Text":"If we do the determinant of that,"},{"Start":"01:23.225 ","End":"01:26.225","Text":"then we\u0027re doing AD- BC."},{"Start":"01:26.225 ","End":"01:28.805","Text":"This remember is our AD,"},{"Start":"01:28.805 ","End":"01:31.390","Text":"this is our BC,"},{"Start":"01:31.390 ","End":"01:34.800","Text":"and then we just set that equal to 0."},{"Start":"01:34.800 ","End":"01:37.530","Text":"We have a couple lines of algebra,"},{"Start":"01:37.530 ","End":"01:43.340","Text":"we will arrive at this polynomial here for 
our eigenvalues of A."},{"Start":"01:43.340 ","End":"01:46.535","Text":"Now this is a nice one because it does factorize."},{"Start":"01:46.535 ","End":"01:48.095","Text":"When we factorize it,"},{"Start":"01:48.095 ","End":"01:52.130","Text":"we get the Lambda values as 1 and 7."},{"Start":"01:52.130 ","End":"01:57.140","Text":"Now, we also need the eigenvectors because remember,"},{"Start":"01:57.140 ","End":"02:00.230","Text":"we\u0027re trying to find this matrix P,"},{"Start":"02:00.230 ","End":"02:06.330","Text":"which was formed from the normalized eigenvectors,"},{"Start":"02:06.330 ","End":"02:09.900","Text":"so we have v_1 and v_2 here."},{"Start":"02:09.900 ","End":"02:12.510","Text":"For Lambda = 1,"},{"Start":"02:12.510 ","End":"02:16.740","Text":"we get an eigenvector of 1, 1."},{"Start":"02:16.740 ","End":"02:21.380","Text":"Now, if you don\u0027t remember how we get the eigenvectors, well,"},{"Start":"02:21.380 ","End":"02:27.155","Text":"we just substitute our values for Lambda and our known value of A,"},{"Start":"02:27.155 ","End":"02:29.645","Text":"and then we solve that for v,"},{"Start":"02:29.645 ","End":"02:33.170","Text":"then that will give us the eigenvector 1,1."},{"Start":"02:33.170 ","End":"02:38.300","Text":"Then we can work out the magnitude of this vector just by"},{"Start":"02:38.300 ","End":"02:44.290","Text":"taking the sum of the squares of the components and then square rooting it."},{"Start":"02:44.290 ","End":"02:47.555","Text":"For this one, we get square root of 2,"},{"Start":"02:47.555 ","End":"02:51.440","Text":"and for our eigenvalue of Lambda = 7,"},{"Start":"02:51.440 ","End":"02:54.740","Text":"then we get the same magnitude,"},{"Start":"02:54.740 ","End":"02:57.110","Text":"but we get a different eigenvector,"},{"Start":"02:57.110 ","End":"03:00.370","Text":"which is -1 and 1."},{"Start":"03:00.370 ","End":"03:05.600","Text":"Now what we need to do is we just need to normalize this and then put"},{"Start":"03:05.600 
","End":"03:10.805","Text":"the normalized eigenvectors into the columns of P,"},{"Start":"03:10.805 ","End":"03:15.630","Text":"and then that will give us our orthonormal basis."},{"Start":"03:16.220 ","End":"03:19.115","Text":"These are our eigenvectors,"},{"Start":"03:19.115 ","End":"03:21.500","Text":"and these are our magnitudes."},{"Start":"03:21.500 ","End":"03:25.710","Text":"P is simply the normalized eigenvectors,"},{"Start":"03:25.710 ","End":"03:27.465","Text":"which we\u0027ll put in now."},{"Start":"03:27.465 ","End":"03:30.410","Text":"We\u0027ve basically just put in 1,"},{"Start":"03:30.410 ","End":"03:35.930","Text":"1 into this column and then divided through by the magnitude."},{"Start":"03:35.930 ","End":"03:42.575","Text":"Because then what you\u0027ll notice is if we do the actual sum of the squares of this column,"},{"Start":"03:42.575 ","End":"03:52.330","Text":"then we\u0027re going to get 1/(root 2)^2 +1/(root 2)^2,"},{"Start":"03:52.330 ","End":"03:55.560","Text":"which is just 1/2 + 1/2,"},{"Start":"03:55.560 ","End":"04:00.165","Text":"which is 1, is indeed normalized as was required."},{"Start":"04:00.165 ","End":"04:04.040","Text":"We do the same thing for the second column,"},{"Start":"04:04.040 ","End":"04:07.225","Text":"but instead we put in V_2 here."},{"Start":"04:07.225 ","End":"04:10.010","Text":"Now we need to find out what P inverse is,"},{"Start":"04:10.010 ","End":"04:17.015","Text":"because remember we want to diagonalize A and that is to put it in this form, PDP^-1."},{"Start":"04:17.015 ","End":"04:21.630","Text":"But because our matrix A was real and symmetric,"},{"Start":"04:21.630 ","End":"04:24.945","Text":"that just means A is the same as a transpose,"},{"Start":"04:24.945 ","End":"04:29.600","Text":"well that means that P^-1 is just equal to"},{"Start":"04:29.600 ","End":"04:36.285","Text":"the transpose of P. 
These 2 entries remain the same,"},{"Start":"04:36.285 ","End":"04:40.187","Text":"and the only ones that swap are the 1,"},{"Start":"04:40.187 ","End":"04:43.830","Text":"2^th or the 1^st row,"},{"Start":"04:43.830 ","End":"04:48.705","Text":"2^nd column entry, and the 2^nd row, 1^st column entry."},{"Start":"04:48.705 ","End":"04:51.170","Text":"These are the only ones that will swap."},{"Start":"04:51.170 ","End":"04:54.720","Text":"This is our P^-1 now."},{"Start":"04:54.720 ","End":"04:58.385","Text":"Remember for D, all we had to do for that was put"},{"Start":"04:58.385 ","End":"05:02.300","Text":"the eigenvalues along the leading diagonal,"},{"Start":"05:02.300 ","End":"05:05.660","Text":"and then all the other values are 0."},{"Start":"05:05.660 ","End":"05:09.485","Text":"This is our P, this is our P^-1,"},{"Start":"05:09.485 ","End":"05:15.255","Text":"and this is our D. A is just PDP^-1,"},{"Start":"05:15.255 ","End":"05:19.130","Text":"but because we know that P^-1 is the same as P transpose,"},{"Start":"05:19.130 ","End":"05:23.090","Text":"because we have this orthonormal basis property,"},{"Start":"05:23.090 ","End":"05:28.650","Text":"then we can express it as PDP transpose."}],"ID":30951},{"Watched":false,"Name":"Exercise 2","Duration":"16m 51s","ChapterTopicVideoID":29345,"CourseChapterTopicPlaylistID":294460,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:02.175","Text":"As said in the last video,"},{"Start":"00:02.175 ","End":"00:06.720","Text":"we\u0027re going to now be looking at matrices of higher dimension."},{"Start":"00:06.720 ","End":"00:10.665","Text":"We\u0027re going to use the spectral theorem to diagonalize"},{"Start":"00:10.665 ","End":"00:14.940","Text":"these matrices and to find an orthonormal basis."},{"Start":"00:14.940 
","End":"00:18.660","Text":"We have this matrix A, which is 3,"},{"Start":"00:18.660 ","End":"00:20.100","Text":"2, 4, 2, 5,"},{"Start":"00:20.100 ","End":"00:22.305","Text":"2, 4, 2, 3."},{"Start":"00:22.305 ","End":"00:26.885","Text":"We\u0027re going to diagonalize it and find this orthonormal basis."},{"Start":"00:26.885 ","End":"00:33.485","Text":"You\u0027ll notice in this video that the difficulty also of the process that we need to"},{"Start":"00:33.485 ","End":"00:36.860","Text":"take to do these 2 things is a bit more"},{"Start":"00:36.860 ","End":"00:40.490","Text":"involved than in the 2 by 2 case that we\u0027ve already seen."},{"Start":"00:40.490 ","End":"00:46.385","Text":"But don\u0027t worry, and just hang in there because once the method has been established,"},{"Start":"00:46.385 ","End":"00:51.680","Text":"then it\u0027s quiet algorithmic is how you would tackle a question like this in the future."},{"Start":"00:51.680 ","End":"00:54.680","Text":"The first thing we\u0027ll notice is that"},{"Start":"00:54.680 ","End":"01:00.755","Text":"this matrix A has only real entries and it\u0027s also symmetric."},{"Start":"01:00.755 ","End":"01:04.600","Text":"Now it\u0027s easy to see that if we consider say, well,"},{"Start":"01:04.600 ","End":"01:08.420","Text":"we know that these entries here would"},{"Start":"01:08.420 ","End":"01:12.980","Text":"remain in the same position if we took the transpose because they\u0027re the 1,1,"},{"Start":"01:12.980 ","End":"01:15.500","Text":"2,2, 3,3 entries."},{"Start":"01:15.500 ","End":"01:18.075","Text":"Then if we swapped the other ones,"},{"Start":"01:18.075 ","End":"01:22.055","Text":"this is going to be the first row second column."},{"Start":"01:22.055 ","End":"01:24.545","Text":"So that maps to the second row first column,"},{"Start":"01:24.545 ","End":"01:30.320","Text":"which goes here, which is again the same number and that would go back to there."},{"Start":"01:30.320 ","End":"01:37.290","Text":"The 
same can be said for these 2 here and these 2 here."},{"Start":"01:37.290 ","End":"01:41.225","Text":"This matrix is indeed symmetric and real."},{"Start":"01:41.225 ","End":"01:45.260","Text":"We\u0027re just going to use basically the same approach as before,"},{"Start":"01:45.260 ","End":"01:51.975","Text":"but there\u0027s just a few more steps because of the nature of the dimension of this matrix."},{"Start":"01:51.975 ","End":"01:54.965","Text":"Remember, what\u0027s the first thing that we need to do?"},{"Start":"01:54.965 ","End":"01:57.650","Text":"We need to find the eigenvalues and"},{"Start":"01:57.650 ","End":"02:02.225","Text":"corresponding eigenvectors of this real symmetric matrix."},{"Start":"02:02.225 ","End":"02:04.669","Text":"Now, as we\u0027ve seen many times,"},{"Start":"02:04.669 ","End":"02:09.770","Text":"how do we find the eigenvalues? Well, we use the characteristic polynomial."},{"Start":"02:09.770 ","End":"02:15.520","Text":"That is the determinant of A minus Lambda I,"},{"Start":"02:15.520 ","End":"02:18.905","Text":"and then we set that equal to 0."},{"Start":"02:18.905 ","End":"02:24.260","Text":"Now, because this is a 3 by 3, the determinant is a little bit more involved."},{"Start":"02:24.260 ","End":"02:28.440","Text":"But remember what A minus Lambda I is."},{"Start":"02:28.440 ","End":"02:30.210","Text":"Well, that\u0027s just this matrix,"},{"Start":"02:30.210 ","End":"02:33.660","Text":"and then we minus Lambda on the diagonal entries."},{"Start":"02:33.660 ","End":"02:37.935","Text":"Let\u0027s just write what A minus Lambda I actually is."},{"Start":"02:37.935 ","End":"02:41.465","Text":"That\u0027s equal to this matrix that we have here,"},{"Start":"02:41.465 ","End":"02:43.655","Text":"but we minus Lambda here,"},{"Start":"02:43.655 ","End":"02:46.000","Text":"and then this is 2, 4,"},{"Start":"02:46.000 ","End":"02:49.665","Text":"this is 2, this is 5 minus Lambda,"},{"Start":"02:49.665 ","End":"02:51.795","Text":"this is 2, 
this is 4,"},{"Start":"02:51.795 ","End":"02:55.110","Text":"this is 2, and this is 3 minus Lambda."},{"Start":"02:55.110 ","End":"03:00.290","Text":"We need to find what the determinant of this matrix is."},{"Start":"03:00.290 ","End":"03:07.115","Text":"Now, we\u0027ve done a lot of videos that will show how to find the determinant of a 3 by 3."},{"Start":"03:07.115 ","End":"03:10.295","Text":"What we\u0027re going to do is we\u0027ll just write down the results"},{"Start":"03:10.295 ","End":"03:13.945","Text":"now and see what that gives us."},{"Start":"03:13.945 ","End":"03:18.885","Text":"This is our determinant of A minus Lambda I."},{"Start":"03:18.885 ","End":"03:21.330","Text":"We\u0027ve got this long polynomial,"},{"Start":"03:21.330 ","End":"03:25.805","Text":"and you\u0027ll notice that the highest power that features in this polynomial is"},{"Start":"03:25.805 ","End":"03:32.320","Text":"a cubic term that comes from multiplying this Lambda and this Lambda and this Lambda."},{"Start":"03:32.320 ","End":"03:36.770","Text":"It is fairly involved to tidy up this algebra."},{"Start":"03:36.770 ","End":"03:38.480","Text":"Just for the sake of time,"},{"Start":"03:38.480 ","End":"03:43.365","Text":"we\u0027ll just go to what the simplified version of this polynomial is."},{"Start":"03:43.365 ","End":"03:47.165","Text":"What you should arrive at is this polynomial here,"},{"Start":"03:47.165 ","End":"03:49.775","Text":"which does factorize quite nicely."},{"Start":"03:49.775 ","End":"03:52.925","Text":"We\u0027ve got minus Lambda minus 1,"},{"Start":"03:52.925 ","End":"03:59.475","Text":"and then we\u0027ve got Lambda minus 3 and Lambda minus 9,"},{"Start":"03:59.475 ","End":"04:02.190","Text":"and this is equal to 0."},{"Start":"04:02.190 ","End":"04:03.980","Text":"What does this tell us?"},{"Start":"04:03.980 ","End":"04:09.035","Text":"Well, this tells us that we have 3 real and distinct eigenvalues."},{"Start":"04:09.035 ","End":"04:11.070","Text":"They 
are Lambda 1,"},{"Start":"04:11.070 ","End":"04:13.420","Text":"which is equal to minus 1,"},{"Start":"04:13.420 ","End":"04:16.280","Text":"which comes from setting this bracket equal to 0."},{"Start":"04:16.280 ","End":"04:17.750","Text":"We\u0027ve got Lambda 2,"},{"Start":"04:17.750 ","End":"04:19.415","Text":"which is equal to 3,"},{"Start":"04:19.415 ","End":"04:21.545","Text":"and we\u0027ve got Lambda 3,"},{"Start":"04:21.545 ","End":"04:24.075","Text":"which is equal to 9."},{"Start":"04:24.075 ","End":"04:27.485","Text":"Now what we need to do is we need to find"},{"Start":"04:27.485 ","End":"04:32.945","Text":"the eigenvectors and then orthonormalize them to form an orthonormal basis."},{"Start":"04:32.945 ","End":"04:34.400","Text":"Remember what we\u0027re trying to find,"},{"Start":"04:34.400 ","End":"04:43.790","Text":"we\u0027re trying to find the matrices P and D so that A is equal to PDP inverse."},{"Start":"04:43.790 ","End":"04:47.495","Text":"Or because we\u0027re looking for an orthonormal basis,"},{"Start":"04:47.495 ","End":"04:52.345","Text":"then it\u0027s going to be PDP transpose."},{"Start":"04:52.345 ","End":"04:55.970","Text":"How do we work out the eigenvectors?"},{"Start":"04:55.970 ","End":"04:58.940","Text":"Well, it\u0027s very similar to what we did before,"},{"Start":"04:58.940 ","End":"05:02.300","Text":"but the process is just slightly more involved."},{"Start":"05:02.300 ","End":"05:04.240","Text":"Let\u0027s have a look at that now."},{"Start":"05:04.240 ","End":"05:08.380","Text":"We\u0027ve got our eigenvector eigenvalue equation,"},{"Start":"05:08.380 ","End":"05:10.315","Text":"which we\u0027ve used before,"},{"Start":"05:10.315 ","End":"05:14.825","Text":"and we just start by saying that our eigenvector has this general form,"},{"Start":"05:14.825 ","End":"05:17.360","Text":"v_1, v_2, and v_3."},{"Start":"05:17.360 ","End":"05:21.130","Text":"What we\u0027re going to do is we\u0027re going to work out what one of"},{"Start":"05:21.130 
","End":"05:25.395","Text":"these eigenvectors is for Lambda 1 is equal to minus 1,"},{"Start":"05:25.395 ","End":"05:30.310","Text":"and then you can work out yourself using the same method how we arrive at"},{"Start":"05:30.310 ","End":"05:36.980","Text":"the 2 eigenvectors that correspond to Lambda 2 is equal to 3 and Lambda 3 is equal to 9."},{"Start":"05:36.980 ","End":"05:41.575","Text":"Essentially what we\u0027re doing is we\u0027re going to replace"},{"Start":"05:41.575 ","End":"05:46.780","Text":"this Lambda with minus 1 because that\u0027s the one that we\u0027re going to do here,"},{"Start":"05:46.780 ","End":"05:50.750","Text":"and then we\u0027re going to expand out this matrix on"},{"Start":"05:50.750 ","End":"05:55.580","Text":"the left-hand side and compare it with the right-hand side."},{"Start":"05:55.580 ","End":"05:57.575","Text":"On the left-hand side,"},{"Start":"05:57.575 ","End":"06:02.720","Text":"we get 3v_1 plus 2v_2"},{"Start":"06:02.720 ","End":"06:08.595","Text":"plus 4v_3 is equal to minus v_1."},{"Start":"06:08.595 ","End":"06:10.875","Text":"That\u0027s this multiplied by that,"},{"Start":"06:10.875 ","End":"06:12.540","Text":"2 multiplied by the v_2,"},{"Start":"06:12.540 ","End":"06:14.295","Text":"and the 4 multiplied by the v_3."},{"Start":"06:14.295 ","End":"06:16.395","Text":"We do the same for the next 2 rows,"},{"Start":"06:16.395 ","End":"06:23.925","Text":"then we get 2v_1 plus 5v_2 plus 2v_3,"},{"Start":"06:23.925 ","End":"06:27.615","Text":"which is equal to minus v_2."},{"Start":"06:27.615 ","End":"06:32.850","Text":"Finally, we get 4v_1 plus 2v_2"},{"Start":"06:32.850 ","End":"06:38.605","Text":"plus 3v_3 is equal to minus v_3."},{"Start":"06:38.605 ","End":"06:42.770","Text":"Now we can tidy this up just by bringing everything to the left-hand side,"},{"Start":"06:42.770 ","End":"06:45.890","Text":"I setting the right-hand side equal to 0."},{"Start":"06:45.890 ","End":"06:49.710","Text":"What that then tells us is 
that we get these equations."},{"Start":"06:49.710 ","End":"06:58.725","Text":"We get 4v_1 plus 2v_2 plus 4v_3 is equal to 0."},{"Start":"06:58.725 ","End":"07:08.580","Text":"Then we get 2v_1 plus 6v_2 plus 2v_3 is equal to 0."},{"Start":"07:08.580 ","End":"07:18.575","Text":"Then we get 4v_1 plus 2v_2 plus 4v_3 is equal to 0 as well."},{"Start":"07:18.575 ","End":"07:24.165","Text":"Now you\u0027ll notice that these 2 equations are exactly the same."},{"Start":"07:24.165 ","End":"07:29.000","Text":"What we have here is we have 2 equations but 3 unknowns."},{"Start":"07:29.000 ","End":"07:30.245","Text":"So at some point,"},{"Start":"07:30.245 ","End":"07:35.585","Text":"we\u0027re going to have to arbitrarily set one of these values to a particular value."},{"Start":"07:35.585 ","End":"07:39.725","Text":"How do we solve this system of equations?"},{"Start":"07:39.725 ","End":"07:46.155","Text":"Well, we can solve it using Gaussian elimination."},{"Start":"07:46.155 ","End":"07:48.590","Text":"What we mean by that is we just use"},{"Start":"07:48.590 ","End":"07:53.885","Text":"the standard row operation procedure to work out what these v_1,"},{"Start":"07:53.885 ","End":"07:57.430","Text":"v_2, and v_3 are."},{"Start":"07:57.430 ","End":"08:00.740","Text":"Here is our system of equations in"},{"Start":"08:00.740 ","End":"08:05.630","Text":"the matrix form we need in order to apply Gaussian elimination."},{"Start":"08:05.630 ","End":"08:11.440","Text":"This is just saying that 4v_1 plus 2v_2 plus 4v_3 is equal to 0, etc."},{"Start":"08:11.440 ","End":"08:14.845","Text":"It corresponds to the system of equations we have here."},{"Start":"08:14.845 ","End":"08:16.805","Text":"How do we solve this?"},{"Start":"08:16.805 ","End":"08:19.550","Text":"Well, the first thing that we can do is we can"},{"Start":"08:19.550 ","End":"08:23.150","Text":"notice that all of these rows are divisible by 2."},{"Start":"08:23.150 ","End":"08:25.670","Text":"That\u0027s just going to 
make our life a bit easier."},{"Start":"08:25.670 ","End":"08:27.680","Text":"Let\u0027s just do that operation."},{"Start":"08:27.680 ","End":"08:31.050","Text":"We\u0027ll say we\u0027re doing row 1, row 2,"},{"Start":"08:31.050 ","End":"08:37.920","Text":"and row 3 and we\u0027re dividing all of them by 2."},{"Start":"08:37.920 ","End":"08:39.725","Text":"What\u0027s that going to give us?"},{"Start":"08:39.725 ","End":"08:41.405","Text":"Well, we\u0027ve got 2,"},{"Start":"08:41.405 ","End":"08:43.185","Text":"1, 2,"},{"Start":"08:43.185 ","End":"08:46.095","Text":"we got 1, 3, 1,"},{"Start":"08:46.095 ","End":"08:47.430","Text":"and then we\u0027ve got 2,"},{"Start":"08:47.430 ","End":"08:49.320","Text":"1, 2 again."},{"Start":"08:49.320 ","End":"08:54.015","Text":"Then these are of course are still equal to 0."},{"Start":"08:54.015 ","End":"08:56.680","Text":"What can we do from here?"},{"Start":"08:56.680 ","End":"09:02.755","Text":"Well, we notice that row 3 is exactly the same as row 1."},{"Start":"09:02.755 ","End":"09:06.415","Text":"We can do row 3 minus row 1,"},{"Start":"09:06.415 ","End":"09:10.015","Text":"and then this row will give us all zeros. 
Let\u0027s do that."},{"Start":"09:10.015 ","End":"09:14.545","Text":"We\u0027re doing row 3 minus row 1."},{"Start":"09:14.545 ","End":"09:17.455","Text":"Then when we do that, well row 1 stays the same,"},{"Start":"09:17.455 ","End":"09:18.850","Text":"row 2 stays the same."},{"Start":"09:18.850 ","End":"09:20.230","Text":"We\u0027re leaving that alone."},{"Start":"09:20.230 ","End":"09:24.475","Text":"But then the final row now just becomes all zeros."},{"Start":"09:24.475 ","End":"09:29.935","Text":"This is nice because it means that we don\u0027t have to do anything with this row now."},{"Start":"09:29.935 ","End":"09:32.290","Text":"What\u0027s the next thing that we can do?"},{"Start":"09:32.290 ","End":"09:34.735","Text":"Well, next we can work"},{"Start":"09:34.735 ","End":"09:40.810","Text":"on row 2 and clear"},{"Start":"09:40.810 ","End":"09:42.430","Text":"its first entry."},{"Start":"09:42.430 ","End":"09:48.190","Text":"What we\u0027re going to"},{"Start":"09:48.190 ","End":"09:53.515","Text":"do is we\u0027re going to do row 2 minus 1/2 row 1."},{"Start":"09:53.515 ","End":"09:57.565","Text":"Because then what we\u0027ll get is we\u0027ll get 1 minus a 1/2 of 2 here,"},{"Start":"09:57.565 ","End":"09:59.830","Text":"which is 1 minus 1, which is 0."},{"Start":"09:59.830 ","End":"10:02.215","Text":"Then similarly we\u0027ll get 0 here."},{"Start":"10:02.215 ","End":"10:09.775","Text":"We\u0027re doing row 2 minus 1/2 row 1."},{"Start":"10:09.775 ","End":"10:14.830","Text":"Well, the top row stays the same remember,"},{"Start":"10:14.830 ","End":"10:17.005","Text":"so we still got 2, 1, 2 here."},{"Start":"10:17.005 ","End":"10:20.185","Text":"Now this entry is going to be 0."},{"Start":"10:20.185 ","End":"10:23.560","Text":"Then we\u0027ve got 3 minus a 1/2 of 1."},{"Start":"10:23.560 ","End":"10:26.305","Text":"That\u0027s just going to be 5/2."},{"Start":"10:26.305 ","End":"10:28.525","Text":"This is going to be 0 as 
well."},{"Start":"10:28.525 ","End":"10:32.050","Text":"Then the bottom row, of course just remains all zeros."},{"Start":"10:32.050 ","End":"10:36.250","Text":"Then all of these are equal to 0 as before."},{"Start":"10:36.250 ","End":"10:38.200","Text":"What can we do now?"},{"Start":"10:38.200 ","End":"10:40.960","Text":"Well, let\u0027s just make our lives a little bit easier."},{"Start":"10:40.960 ","End":"10:49.030","Text":"Let\u0027s do row 2 multiplied by 2/5 so that we can make this entry here of one."},{"Start":"10:49.030 ","End":"10:54.505","Text":"We\u0027re going to do 2/5 multiplied by row 2."},{"Start":"10:54.505 ","End":"10:56.710","Text":"Then that will give us 2, 1,"},{"Start":"10:56.710 ","End":"11:01.975","Text":"2 on the top again and we\u0027re going to get a 0,"},{"Start":"11:01.975 ","End":"11:05.950","Text":"1, 0 and then 0, 0, 0."},{"Start":"11:05.950 ","End":"11:08.815","Text":"Then these are all equal to 0 again."},{"Start":"11:08.815 ","End":"11:11.320","Text":"Then what can we do here?"},{"Start":"11:11.320 ","End":"11:15.310","Text":"Well, we\u0027re just going to do two steps in one in this go."},{"Start":"11:15.310 ","End":"11:20.200","Text":"We\u0027re going to do row 1 minus row 2."},{"Start":"11:20.200 ","End":"11:23.170","Text":"Because then that will give us a 0 here."},{"Start":"11:23.170 ","End":"11:26.665","Text":"Then we\u0027re going to divide that by 2."},{"Start":"11:26.665 ","End":"11:28.495","Text":"What does this give us?"},{"Start":"11:28.495 ","End":"11:30.940","Text":"Row 1 minus row 2 over 2."},{"Start":"11:30.940 ","End":"11:35.335","Text":"Well, that\u0027s going to give us our most simple form of this matrix."},{"Start":"11:35.335 ","End":"11:37.660","Text":"That\u0027s going to give us a 1, 0, 1."},{"Start":"11:37.660 ","End":"11:39.490","Text":"Then we\u0027ve got 0, 1,"},{"Start":"11:39.490 ","End":"11:43.180","Text":"0 and then 0, 0, 0."},{"Start":"11:43.180 ","End":"11:46.000","Text":"That is all 
equal."},{"Start":"11:46.000 ","End":"11:49.720","Text":"All of these equations are equal to 0."},{"Start":"11:49.720 ","End":"11:57.025","Text":"This is our final version once having applied the Gaussian elimination."},{"Start":"11:57.025 ","End":"11:58.330","Text":"What does this tell us?"},{"Start":"11:58.330 ","End":"12:06.175","Text":"Well, this tells us that x_1 plus x_3 is equal to 0."},{"Start":"12:06.175 ","End":"12:12.400","Text":"This just comes from this first row here."},{"Start":"12:12.400 ","End":"12:17.155","Text":"Then the second row tells us this row here."},{"Start":"12:17.155 ","End":"12:22.640","Text":"This tells us that x_2 is equal to 0."},{"Start":"12:22.710 ","End":"12:26.815","Text":"If we use that information we know what x_2 is."},{"Start":"12:26.815 ","End":"12:32.800","Text":"This tells us that x_1 is equal to minus x_3."},{"Start":"12:32.800 ","End":"12:40.675","Text":"Our eigenvector that corresponds to this eigenvalue Lambda 1 is just equal to,"},{"Start":"12:40.675 ","End":"12:43.690","Text":"well, let\u0027s just say this is x_1,"},{"Start":"12:43.690 ","End":"12:45.835","Text":"this is x_2 is 0,"},{"Start":"12:45.835 ","End":"12:49.105","Text":"and x_3 is minus x_1."},{"Start":"12:49.105 ","End":"12:52.375","Text":"This is going to be minus x_1."},{"Start":"12:52.375 ","End":"13:01.435","Text":"Well, if we then take x_1 out then we get x_1 multiplied by 1, 0 minus 1."},{"Start":"13:01.435 ","End":"13:03.310","Text":"Without loss of generality,"},{"Start":"13:03.310 ","End":"13:12.760","Text":"we can actually just say that the eigenvector v Lambda 1 is equal to minus 1, 0, 1."},{"Start":"13:12.760 ","End":"13:14.530","Text":"What we\u0027ve done here is actually,"},{"Start":"13:14.530 ","End":"13:23.030","Text":"we\u0027ve just said let x_1 equal minus 1 and then minus x_1 would just be equal to 1."},{"Start":"13:23.460 ","End":"13:31.330","Text":"That\u0027s how we arrive at the eigenvector for a 3 by 3."},{"Start":"13:31.330 
","End":"13:35.695","Text":"Let\u0027s just actually move this over here."},{"Start":"13:35.695 ","End":"13:38.245","Text":"This is our first eigenvector."},{"Start":"13:38.245 ","End":"13:43.735","Text":"But remember we need to find the eigenvectors that correspond"},{"Start":"13:43.735 ","End":"13:49.900","Text":"to Lambda 2 and the eigenvector that corresponds to Lambda 3."},{"Start":"13:49.900 ","End":"13:54.580","Text":"Now because the process is quite involved we\u0027re just going to give the results here."},{"Start":"13:54.580 ","End":"13:59.215","Text":"Otherwise, we would have to do that Gaussian elimination process again."},{"Start":"13:59.215 ","End":"14:02.785","Text":"But you can do this if you want but"},{"Start":"14:02.785 ","End":"14:08.310","Text":"the eigenvector corresponding to Lambda 2 is just 1 minus 2,"},{"Start":"14:08.310 ","End":"14:16.235","Text":"1, and the eigenvector corresponding to Lambda 3 is just 1, 1, 1."},{"Start":"14:16.235 ","End":"14:22.240","Text":"We need to normalize these eigenvectors and remember how we do that."},{"Start":"14:22.240 ","End":"14:26.890","Text":"We make it so that the magnitude of the components is equal to 1."},{"Start":"14:26.890 ","End":"14:31.540","Text":"What\u0027s the magnitude of the eigenvector corresponding to Lambda 1?"},{"Start":"14:31.540 ","End":"14:39.265","Text":"Well, this one is just going to be minus 1^2 plus 1^2 which is equal to 2."},{"Start":"14:39.265 ","End":"14:41.470","Text":"Then we have to square root it."},{"Start":"14:41.470 ","End":"14:50.229","Text":"Square roots, the magnitude of this 1 is just going to be 1^2 plus minus 2^2 plus 1^2,"},{"Start":"14:50.229 ","End":"14:54.430","Text":"so then that\u0027s just going to be the square root of 6."},{"Start":"14:54.430 ","End":"15:00.100","Text":"Then this final one is just going to be the square root of 3."},{"Start":"15:00.100 ","End":"15:08.170","Text":"Now that we have our orthonormalized vectors then we can actually 
construct P now."},{"Start":"15:08.170 ","End":"15:10.090","Text":"Remember what P was."},{"Start":"15:10.090 ","End":"15:13.810","Text":"P was the columns of the eigenvectors in"},{"Start":"15:13.810 ","End":"15:18.040","Text":"the order of the eigenvalues and then we normalize them."},{"Start":"15:18.040 ","End":"15:21.070","Text":"We just divide through by these things here."},{"Start":"15:21.070 ","End":"15:25.855","Text":"P is just going to be minus 1 over root 2,"},{"Start":"15:25.855 ","End":"15:29.740","Text":"0, 1 over root 2."},{"Start":"15:29.740 ","End":"15:31.765","Text":"Then this one here, remember,"},{"Start":"15:31.765 ","End":"15:33.760","Text":"we normalized with root 6."},{"Start":"15:33.760 ","End":"15:36.805","Text":"This is going to be 1 over root 6."},{"Start":"15:36.805 ","End":"15:39.895","Text":"Then we\u0027ve got minus 2 over root 6,"},{"Start":"15:39.895 ","End":"15:41.845","Text":"1 over root 6."},{"Start":"15:41.845 ","End":"15:44.530","Text":"All of these are going to be over root 3."},{"Start":"15:44.530 ","End":"15:46.675","Text":"We\u0027ve got 1 over root 3,"},{"Start":"15:46.675 ","End":"15:48.520","Text":"1 over root 3,"},{"Start":"15:48.520 ","End":"15:51.250","Text":"1 over root 3."},{"Start":"15:51.250 ","End":"15:54.940","Text":"This is our matrix P. 
Now,"},{"Start":"15:54.940 ","End":"15:58.150","Text":"remember because p is orthonormal,"},{"Start":"15:58.150 ","End":"16:06.355","Text":"then p inverse is just equal to p transpose and we\u0027ll just give the results here."},{"Start":"16:06.355 ","End":"16:11.410","Text":"This is our P transpose and D,"},{"Start":"16:11.410 ","End":"16:13.150","Text":"remember this is the easy one."},{"Start":"16:13.150 ","End":"16:19.690","Text":"This is just the eigenvalues in the diagonal order."},{"Start":"16:19.690 ","End":"16:21.385","Text":"We\u0027ve got minus 1,"},{"Start":"16:21.385 ","End":"16:24.625","Text":"0, 0, we\u0027ve got 0,"},{"Start":"16:24.625 ","End":"16:30.175","Text":"3, 0 and then we\u0027ve got 0, 0, 9."},{"Start":"16:30.175 ","End":"16:35.245","Text":"The eigenvalues as per usual are just the diagonal entries here."},{"Start":"16:35.245 ","End":"16:40.390","Text":"Then our matrix A has therefore been diagonalized which is"},{"Start":"16:40.390 ","End":"16:45.415","Text":"just PDP inverse but because P is orthonormal,"},{"Start":"16:45.415 ","End":"16:50.660","Text":"well, this is the same as saying PDP transpose."}],"ID":30952},{"Watched":false,"Name":"Exercise 3","Duration":"12m 29s","ChapterTopicVideoID":29346,"CourseChapterTopicPlaylistID":294460,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:03.015","Text":"In this video, we\u0027re going to be looking at"},{"Start":"00:03.015 ","End":"00:06.855","Text":"a matrix that\u0027s similar in nature to the ones we\u0027ve seen before,"},{"Start":"00:06.855 ","End":"00:12.345","Text":"but dissimilar in terms of the eigenvectors that can be generated from it."},{"Start":"00:12.345 ","End":"00:17.070","Text":"Let A=3,0,0,3."},{"Start":"00:17.070 ","End":"00:21.030","Text":"We\u0027re going to use the Spectral Theorem again to 
diagonalise"},{"Start":"00:21.030 ","End":"00:25.515","Text":"A and to find an orthonormal basis of R^2."},{"Start":"00:25.515 ","End":"00:32.895","Text":"It\u0027s easy to see that the eigenvalues are Lambda_1= Lambda_2= 3."},{"Start":"00:32.895 ","End":"00:39.665","Text":"However, this time, rather than having an orthogonal basis of eigenvectors,"},{"Start":"00:39.665 ","End":"00:41.105","Text":"as we\u0027ve seen before,"},{"Start":"00:41.105 ","End":"00:44.975","Text":"we now have infinitely many eigenvectors."},{"Start":"00:44.975 ","End":"00:48.170","Text":"Now if you\u0027re wondering how we see that,"},{"Start":"00:48.170 ","End":"00:54.960","Text":"we just use the eigenvalue equation Av= Lambda v,"},{"Start":"00:54.960 ","End":"01:02.715","Text":"and then we plug in our value or eigenvalues that we can read off directly."},{"Start":"01:02.715 ","End":"01:06.710","Text":"We\u0027ve got A and then let\u0027s just call our eigenvector x_1,"},{"Start":"01:06.710 ","End":"01:10.920","Text":"x_2 and that\u0027s equals 3(x_1,x_2)."},{"Start":"01:13.640 ","End":"01:18.695","Text":"If we multiply on the left and compare against the right,"},{"Start":"01:18.695 ","End":"01:24.135","Text":"that tells us that 3x_1 ="},{"Start":"01:24.135 ","End":"01:30.330","Text":"3x_1 and 3x_2 = 3x_2."},{"Start":"01:30.330 ","End":"01:37.460","Text":"What does this mean? 
This means that we can arbitrarily choose our x_1 and x_2 that make"},{"Start":"01:37.460 ","End":"01:47.325","Text":"our eigenvectors or we could say that our eigenvector v_1 is just equal to x_1 and x_2."},{"Start":"01:47.325 ","End":"01:51.495","Text":"Where x_1 and x_2 can be determined arbitrarily."},{"Start":"01:51.495 ","End":"01:56.060","Text":"Now, in the case that the eigenvectors we"},{"Start":"01:56.060 ","End":"02:00.590","Text":"find are already orthogonal and form a basis e.g.,"},{"Start":"02:00.590 ","End":"02:03.140","Text":"2,1 and minus 1,2,"},{"Start":"02:03.140 ","End":"02:08.975","Text":"then we just need to normalize these eigenvectors and put them into the columns of P,"},{"Start":"02:08.975 ","End":"02:12.170","Text":"as we\u0027ve done before in the preceding videos."},{"Start":"02:12.170 ","End":"02:16.640","Text":"However, if we have 2 vectors that are not orthogonal,"},{"Start":"02:16.640 ","End":"02:18.980","Text":"but they do still form a basis,"},{"Start":"02:18.980 ","End":"02:21.817","Text":"e.g., 1,1 and 2,1,"},{"Start":"02:21.817 ","End":"02:27.335","Text":"then we might want to orthogonalise or even need to orthogonalise."},{"Start":"02:27.335 ","End":"02:32.990","Text":"This can be done using the Gram-Schmidt process."},{"Start":"02:32.990 ","End":"02:37.640","Text":"Now, we\u0027ve briefly mentioned what the Gram-Schmidt process is,"},{"Start":"02:37.640 ","End":"02:45.150","Text":"but essentially it\u0027s just a way of orthonormalising a set of vectors."},{"Start":"02:49.520 ","End":"02:55.370","Text":"What this essentially does is it allows us to create a set of vectors where"},{"Start":"02:55.370 ","End":"03:00.530","Text":"the vectors within that set are orthogonal to each other i.e."},{"Start":"03:00.530 ","End":"03:03.920","Text":"the dot products between those vectors is 0."},{"Start":"03:03.920 ","End":"03:07.950","Text":"Let\u0027s just state this a little bit more formally."},{"Start":"03:08.240 
","End":"03:12.630","Text":"Given an arbitrary basis, v_1,"},{"Start":"03:12.630 ","End":"03:16.380","Text":"v_2 up to v_n,"},{"Start":"03:16.380 ","End":"03:19.620","Text":"for an n-dimensional inner product space,"},{"Start":"03:19.620 ","End":"03:24.535","Text":"the Gram-Schmidt algorithm constructs an orthogonal basis,"},{"Start":"03:24.535 ","End":"03:28.585","Text":"u_1, u_2 up to u_n."},{"Start":"03:28.585 ","End":"03:33.420","Text":"In our example, we have v_1 and v_2,"},{"Start":"03:33.420 ","End":"03:36.035","Text":"which is equal to 2,1 and 1,1."},{"Start":"03:36.035 ","End":"03:39.625","Text":"Now we can see immediately that this is not"},{"Start":"03:39.625 ","End":"03:45.790","Text":"an orthogonal basis because 2 times 1 plus 1 times 1 is not 0."},{"Start":"03:45.790 ","End":"03:49.030","Text":"What the Gram-Schmidt algorithm is going to do,"},{"Start":"03:49.030 ","End":"03:52.320","Text":"is it\u0027s going to turn these vectors v_1,"},{"Start":"03:52.320 ","End":"03:56.100","Text":"v_2 into two orthogonal vectors, u_1,"},{"Start":"03:56.100 ","End":"04:01.700","Text":"u_2, such that the dot product of these two vectors is 0."},{"Start":"04:01.700 ","End":"04:04.565","Text":"Now we\u0027re only doing this for a 2 by 2."},{"Start":"04:04.565 ","End":"04:09.095","Text":"But the Gram-Schmidt process can be used for n by n matrices,"},{"Start":"04:09.095 ","End":"04:14.510","Text":"in which case we would have n vectors where they\u0027re orthogonal to each other."},{"Start":"04:14.510 ","End":"04:18.010","Text":"But we\u0027re just going to demonstrate this with a 2 by 2."},{"Start":"04:18.010 ","End":"04:20.555","Text":"Before we actually look at the algorithm,"},{"Start":"04:20.555 ","End":"04:24.020","Text":"we first need to define what\u0027s known as the projection map."},{"Start":"04:24.020 ","End":"04:27.371","Text":"The projection map is this thing here,"},{"Start":"04:27.371 ","End":"04:33.890","Text":"is the projection of v onto u is equal to 
the inner product of"},{"Start":"04:33.890 ","End":"04:41.070","Text":"v and u divided by the inner product of u with itself multiplied by u."},{"Start":"04:41.070 ","End":"04:44.720","Text":"This is the same as the inner product of v and u divided by"},{"Start":"04:44.720 ","End":"04:48.860","Text":"the norm of u multiplied by u,"},{"Start":"04:48.860 ","End":"04:52.945","Text":"or the norm squared because we\u0027ve got this squared here as well."},{"Start":"04:52.945 ","End":"04:56.510","Text":"Now that we\u0027ve defined the projection map,"},{"Start":"04:56.510 ","End":"04:59.915","Text":"we can actually use this in the Gram-Schmidt algorithm because"},{"Start":"04:59.915 ","End":"05:03.770","Text":"all we need to do is know how to use the projection."},{"Start":"05:03.770 ","End":"05:06.065","Text":"Now, the inner products,"},{"Start":"05:06.065 ","End":"05:08.015","Text":"if you\u0027ve never seen it before."},{"Start":"05:08.015 ","End":"05:12.680","Text":"This is something that\u0027s a more general case of the dot products."},{"Start":"05:12.680 ","End":"05:16.070","Text":"You\u0027ve probably seen the dot products before where you just"},{"Start":"05:16.070 ","End":"05:20.870","Text":"multiply the corresponding entries in each vector."},{"Start":"05:20.870 ","End":"05:24.740","Text":"The inner products can be used for some more abstract objects."},{"Start":"05:24.740 ","End":"05:27.260","Text":"But for the sake of what we\u0027re doing here,"},{"Start":"05:27.260 ","End":"05:31.160","Text":"we\u0027re just going to treat it as if it is the dot product."},{"Start":"05:31.160 ","End":"05:37.175","Text":"Let\u0027s actually use the Gram-Schmidt algorithm to work out u_1 and u_2,"},{"Start":"05:37.175 ","End":"05:39.285","Text":"which is our aim."},{"Start":"05:39.285 ","End":"05:46.325","Text":"The first thing we do is we say that we let u_1 be equal to v_1."},{"Start":"05:46.325 ","End":"05:51.180","Text":"Remember, this is our v_1 and this is our 
v_2."},{"Start":"05:51.180 ","End":"05:54.275","Text":"That\u0027s nice because this tells us that our u_1,"},{"Start":"05:54.275 ","End":"05:55.550","Text":"we can just read it off,"},{"Start":"05:55.550 ","End":"06:00.670","Text":"and u_1=v_1, which is equal to 2,1."},{"Start":"06:00.670 ","End":"06:03.735","Text":"Now, how do we work out u_2?"},{"Start":"06:03.735 ","End":"06:08.505","Text":"Well, u_2 is v_2 minus this projection."},{"Start":"06:08.505 ","End":"06:11.100","Text":"We\u0027ve got v_2,"},{"Start":"06:11.100 ","End":"06:13.035","Text":"which is equal to 1,1."},{"Start":"06:13.035 ","End":"06:17.220","Text":"We\u0027ve got 1,1 minus this projection."},{"Start":"06:17.220 ","End":"06:19.625","Text":"If you\u0027ve never done anything like this before,"},{"Start":"06:19.625 ","End":"06:23.780","Text":"a nice way that you can do it is you can just compare"},{"Start":"06:23.780 ","End":"06:30.619","Text":"this element or this quantity to this quantity and replace the corresponding items."},{"Start":"06:30.619 ","End":"06:32.285","Text":"Here we have a u,"},{"Start":"06:32.285 ","End":"06:34.630","Text":"but we\u0027re going to replace that with a u_1."},{"Start":"06:34.630 ","End":"06:36.075","Text":"Here we have a v,"},{"Start":"06:36.075 ","End":"06:38.495","Text":"so we\u0027re going to replace that with a v_2."},{"Start":"06:38.495 ","End":"06:43.250","Text":"What we get here is we get 1,1 minus and then"},{"Start":"06:43.250 ","End":"06:48.415","Text":"we\u0027ve got the inner product of v_2, and u_1."},{"Start":"06:48.415 ","End":"06:50.735","Text":"Then that\u0027s divided by,"},{"Start":"06:50.735 ","End":"06:58.015","Text":"we\u0027re just going to use this one here, u_1 norm squared."},{"Start":"06:58.015 ","End":"06:59.990","Text":"If you\u0027ve never seen the norm before,"},{"Start":"06:59.990 ","End":"07:02.720","Text":"this is just a more general way of saying"},{"Start":"07:02.720 ","End":"07:07.640","Text":"the magnitude because 
we\u0027re not always working in 2 and 3 dimensions,"},{"Start":"07:07.640 ","End":"07:09.380","Text":"we may have n dimensions,"},{"Start":"07:09.380 ","End":"07:13.895","Text":"in which case the length of something is not so easy to understand."},{"Start":"07:13.895 ","End":"07:16.190","Text":"We use these two bars here."},{"Start":"07:16.190 ","End":"07:21.050","Text":"But you can look at it as if it\u0027s the Euclidean distance as well."},{"Start":"07:21.050 ","End":"07:25.200","Text":"Then we\u0027re multiplying this by u_1."},{"Start":"07:25.510 ","End":"07:29.000","Text":"If we just tidy this up a little bit,"},{"Start":"07:29.000 ","End":"07:37.205","Text":"well then we\u0027ve got 1,1 minus and then the inner product of v_2 and u_1."},{"Start":"07:37.205 ","End":"07:39.860","Text":"Let\u0027s just do an aside here."},{"Start":"07:39.860 ","End":"07:43.235","Text":"The inner product of v_2 and u_1,"},{"Start":"07:43.235 ","End":"07:52.440","Text":"that\u0027s equal to the inner product of v_2, which is 1,1,"},{"Start":"07:52.440 ","End":"07:57.840","Text":"so this is just going to be a 1,1 here, and then u_1,"},{"Start":"07:57.840 ","End":"08:01.635","Text":"which remember we said was v_1, which is 2,1."},{"Start":"08:01.635 ","End":"08:04.715","Text":"We want the inner product of this thing."},{"Start":"08:04.715 ","End":"08:08.255","Text":"And remember, we said we\u0027re just treating it like it\u0027s the dot product."},{"Start":"08:08.255 ","End":"08:13.080","Text":"This is going to be 1 times 2 plus 1 times 1,"},{"Start":"08:13.080 ","End":"08:15.300","Text":"which is just equal to 3."},{"Start":"08:15.300 ","End":"08:17.795","Text":"On this numerator here,"},{"Start":"08:17.795 ","End":"08:21.170","Text":"we just have a 3 and then on the denominator,"},{"Start":"08:21.170 ","End":"08:26.975","Text":"we can look at this as say the magnitude of u_1 because u is actually 2 dimensional here."},{"Start":"08:26.975 ","End":"08:29.780","Text":"The magnitude of 
u_1,"},{"Start":"08:29.780 ","End":"08:32.445","Text":"which is 2,1,"},{"Start":"08:32.445 ","End":"08:34.755","Text":"remember we said u_1 is v_1."},{"Start":"08:34.755 ","End":"08:37.830","Text":"This is equal to 2,1."},{"Start":"08:37.830 ","End":"08:42.485","Text":"That\u0027s just going to be 2^2 plus 1^2 square rooted,"},{"Start":"08:42.485 ","End":"08:43.865","Text":"but we\u0027re squaring it again."},{"Start":"08:43.865 ","End":"08:48.320","Text":"It actually is just 2^2 plus 1^2, which is 5."},{"Start":"08:48.320 ","End":"08:53.605","Text":"Then we\u0027re multiplying that by u_1, which is 2,1."},{"Start":"08:53.605 ","End":"08:56.970","Text":"This remember, let\u0027s not lose track."},{"Start":"08:56.970 ","End":"08:58.195","Text":"This was just u_2."},{"Start":"08:58.195 ","End":"08:59.825","Text":"This is what we\u0027re trying to work out."},{"Start":"08:59.825 ","End":"09:02.915","Text":"We\u0027re trying to work out this entry here."},{"Start":"09:02.915 ","End":"09:05.420","Text":"If we do that, then for u_2,"},{"Start":"09:05.420 ","End":"09:10.245","Text":"we simply get 1 minus 3/5 times 2 in this entry."},{"Start":"09:10.245 ","End":"09:13.170","Text":"That\u0027s going to give us a minus 1/5."},{"Start":"09:13.170 ","End":"09:17.430","Text":"We\u0027ve got 1 minus 3/5 in the second entry."},{"Start":"09:17.430 ","End":"09:21.765","Text":"That\u0027s going to give us 2/5."},{"Start":"09:21.765 ","End":"09:24.155","Text":"This is our u_2, which we\u0027ve gotten now,"},{"Start":"09:24.155 ","End":"09:27.470","Text":"which is minus a 1/5, 2/5."},{"Start":"09:27.470 ","End":"09:30.155","Text":"We\u0027ll try and fit this in."},{"Start":"09:30.155 ","End":"09:34.820","Text":"We\u0027ve got minus a 1/5, and 2/5."},{"Start":"09:34.820 ","End":"09:39.620","Text":"Now you can see that this is indeed orthogonal because 2 times"},{"Start":"09:39.620 ","End":"09:45.660","Text":"minus 1/5 plus 1 times 2/5 does indeed give us 0."},{"Start":"09:45.660 
","End":"09:50.345","Text":"We\u0027ve used the Gram-Schmidt to give us this orthogonal basis where"},{"Start":"09:50.345 ","End":"09:55.505","Text":"these elements here are u_1 and u_2."},{"Start":"09:55.505 ","End":"10:00.410","Text":"Now we just need to construct our matrix P,"},{"Start":"10:00.410 ","End":"10:05.845","Text":"P inverse, and D in the way that we have done in previous videos."},{"Start":"10:05.845 ","End":"10:12.375","Text":"As before, P is just made of the normalized eigenvectors."},{"Start":"10:12.375 ","End":"10:15.140","Text":"We use these as the columns for P,"},{"Start":"10:15.140 ","End":"10:17.645","Text":"but we must normalize them first."},{"Start":"10:17.645 ","End":"10:19.345","Text":"What does that give us?"},{"Start":"10:19.345 ","End":"10:22.965","Text":"We get P as being equal to,"},{"Start":"10:22.965 ","End":"10:25.235","Text":"what\u0027s the magnitude of this eigenvector?"},{"Start":"10:25.235 ","End":"10:29.150","Text":"That\u0027s just 2^2 plus 1^2 square rooted,"},{"Start":"10:29.150 ","End":"10:30.605","Text":"which is root 5."},{"Start":"10:30.605 ","End":"10:32.510","Text":"In this first column,"},{"Start":"10:32.510 ","End":"10:36.735","Text":"we\u0027re just going to get 2 over the square root of 5,"},{"Start":"10:36.735 ","End":"10:40.035","Text":"then 1 over the square root of 5,"},{"Start":"10:40.035 ","End":"10:49.260","Text":"and the second column or the magnitude of this is just going to be 1/5^2 plus 2/5^2."},{"Start":"10:49.260 ","End":"10:51.130","Text":"Let\u0027s actually just write this one out."},{"Start":"10:51.130 ","End":"10:59.250","Text":"We\u0027ve got 1/5^2 plus 2/5^2."},{"Start":"10:59.250 ","End":"11:01.065","Text":"Then we square root this."},{"Start":"11:01.065 ","End":"11:07.680","Text":"Then that would give us the square root of 1/5 because this is just 5/25."},{"Start":"11:07.680 ","End":"11:12.090","Text":"Then that\u0027s the same as 1 over the square root of 5."},{"Start":"11:12.090 
","End":"11:15.375","Text":"We just divide these 2 things by their magnitude, 1 over root 5."},{"Start":"11:15.375 ","End":"11:16.715","Text":"Then in this column,"},{"Start":"11:16.715 ","End":"11:21.300","Text":"we just get minus 1 over the square root of 5."},{"Start":"11:21.300 ","End":"11:25.885","Text":"Here we get 2 over the square root of 5."},{"Start":"11:25.885 ","End":"11:30.475","Text":"Now, P inverse, because this is orthonormal."},{"Start":"11:30.475 ","End":"11:33.320","Text":"That\u0027s just equal to the transpose of P,"},{"Start":"11:33.320 ","End":"11:35.465","Text":"which we\u0027ve seen before as well."},{"Start":"11:35.465 ","End":"11:40.400","Text":"Then that\u0027s easy because these 2 things are the same and then we just swap these 2."},{"Start":"11:40.400 ","End":"11:43.210","Text":"We\u0027ve got 2 over root 5 here,"},{"Start":"11:43.210 ","End":"11:53.085","Text":"2 over root 5 here and we\u0027ve got 1 over root 5 here now and a minus 1 over root 5 here."},{"Start":"11:53.085 ","End":"11:55.615","Text":"Now, all that remains is D,"},{"Start":"11:55.615 ","End":"11:58.190","Text":"and this is the easy one because remember,"},{"Start":"11:58.190 ","End":"12:01.340","Text":"this is just the matrix that\u0027s formed of the eigenvalues to"},{"Start":"12:01.340 ","End":"12:05.060","Text":"go in the entries of the leading diagonal."},{"Start":"12:05.060 ","End":"12:06.509","Text":"This is just 3, 0, 0,"},{"Start":"12:06.509 ","End":"12:10.470","Text":"3, as we\u0027ve seen before."},{"Start":"12:10.470 ","End":"12:14.855","Text":"Now we can say that we have indeed diagonalised A"},{"Start":"12:14.855 ","End":"12:19.415","Text":"because we can now write it in this form, PDP inverse,"},{"Start":"12:19.415 ","End":"12:24.405","Text":"which is actually the same as PDP transpose,"},{"Start":"12:24.405 ","End":"12:30.300","Text":"which is these 3 matrices multiplied together."}],"ID":30953},{"Watched":false,"Name":"Exercise 4","Duration":"7m 
22s","ChapterTopicVideoID":29347,"CourseChapterTopicPlaylistID":294460,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.230 ","End":"00:06.570","Text":"In this video, we\u0027re going to be looking at a 3 by 3 matrix and using"},{"Start":"00:06.570 ","End":"00:13.935","Text":"the Spectral Theorem to diagonalize this matrix and to find an orthonormal basis of R_3."},{"Start":"00:13.935 ","End":"00:16.350","Text":"We\u0027ve got this matrix A,"},{"Start":"00:16.350 ","End":"00:17.790","Text":"which is 3, 0,"},{"Start":"00:17.790 ","End":"00:19.320","Text":"0, 0, 4,"},{"Start":"00:19.320 ","End":"00:21.960","Text":"1, 0, 1, 4."},{"Start":"00:21.960 ","End":"00:27.885","Text":"What\u0027s nice about this matrix is that because these 2 entries are 0,"},{"Start":"00:27.885 ","End":"00:32.985","Text":"it\u0027s actually quite easy to determine what the eigenvalues are."},{"Start":"00:32.985 ","End":"00:35.635","Text":"Because remember from our previous videos,"},{"Start":"00:35.635 ","End":"00:38.735","Text":"to form this orthonormal basis and to diagonalize,"},{"Start":"00:38.735 ","End":"00:43.985","Text":"we need the eigenvalues and the corresponding eigenvectors."},{"Start":"00:43.985 ","End":"00:45.830","Text":"How do we get the eigenvalues?"},{"Start":"00:45.830 ","End":"00:48.395","Text":"Well, as you\u0027ve seen many times now,"},{"Start":"00:48.395 ","End":"00:53.165","Text":"we just look at the characteristic equation,"},{"Start":"00:53.165 ","End":"00:58.700","Text":"which is the determinant of A minus Lambda I equals 0."},{"Start":"00:58.700 ","End":"01:04.810","Text":"A minus Lambda I,"},{"Start":"01:04.810 ","End":"01:10.365","Text":"that\u0027s just equal to this matrix and then we minus Lambda on the diagonal entries."},{"Start":"01:10.365 ","End":"01:14.475","Text":"We just get 3 
minus Lambda, 0,"},{"Start":"01:14.475 ","End":"01:16.515","Text":"0, 0,"},{"Start":"01:16.515 ","End":"01:19.710","Text":"4 minus Lambda, 1,"},{"Start":"01:19.710 ","End":"01:21.345","Text":"0, 1,"},{"Start":"01:21.345 ","End":"01:24.405","Text":"and then 4 minus Lambda again."},{"Start":"01:24.405 ","End":"01:27.075","Text":"This is our A minus Lambda I,"},{"Start":"01:27.075 ","End":"01:28.850","Text":"so the determinant of that."},{"Start":"01:28.850 ","End":"01:31.430","Text":"Well, we just do that in the usual way."},{"Start":"01:31.430 ","End":"01:34.865","Text":"That\u0027s just 3 minus Lambda."},{"Start":"01:34.865 ","End":"01:40.730","Text":"Then in this bracket we do this times this take away this times this."},{"Start":"01:40.730 ","End":"01:47.500","Text":"We\u0027ve got (4 minus Lambda)^2 minus 1."},{"Start":"01:47.500 ","End":"01:51.170","Text":"Then because these 2 and these 2 are 0, well,"},{"Start":"01:51.170 ","End":"01:55.550","Text":"then that\u0027s just the determinant and we set that equal to 0."},{"Start":"01:55.550 ","End":"01:59.435","Text":"If we expand the inside of this bracket,"},{"Start":"01:59.435 ","End":"02:02.640","Text":"then we get 3 minus Lambda,"},{"Start":"02:02.640 ","End":"02:05.790","Text":"which just stays the same on the outside."},{"Start":"02:05.790 ","End":"02:07.350","Text":"Then on the inside,"},{"Start":"02:07.350 ","End":"02:13.965","Text":"we get Lambda^2 minus 8 Lambda plus 16."},{"Start":"02:13.965 ","End":"02:16.125","Text":"That comes from this thing squared."},{"Start":"02:16.125 ","End":"02:18.420","Text":"Then we need to minus 1."},{"Start":"02:18.420 ","End":"02:29.595","Text":"That is the same as 3 minus Lambda multiplied by Lambda^2 minus 8 Lambda plus 15."},{"Start":"02:29.595 ","End":"02:32.390","Text":"Remember we\u0027re setting this equal to 0."},{"Start":"02:32.390 ","End":"02:37.995","Text":"Well, what\u0027s nice is that this bracket here actually factorizes"},{"Start":"02:37.995 
","End":"02:44.700","Text":"into Lambda minus 3, Lambda minus 5."},{"Start":"02:44.700 ","End":"02:51.590","Text":"We can take a minus out of this factor and then we get"},{"Start":"02:51.590 ","End":"03:00.710","Text":"minus (Lambda minus 3)^2 multiplied by Lambda minus 5."},{"Start":"03:00.710 ","End":"03:03.745","Text":"Then this is equal to 0."},{"Start":"03:03.745 ","End":"03:06.275","Text":"Quite nicely from this,"},{"Start":"03:06.275 ","End":"03:10.850","Text":"we can deduce that we\u0027ve got 2 values of Lambda,"},{"Start":"03:10.850 ","End":"03:13.385","Text":"1 Lambda value repeats itself."},{"Start":"03:13.385 ","End":"03:16.025","Text":"We\u0027ve got Lambda 1,"},{"Start":"03:16.025 ","End":"03:22.905","Text":"which we\u0027ll say is equal to 5 and then we\u0027ve got Lambda 2 and Lambda 3,"},{"Start":"03:22.905 ","End":"03:25.745","Text":"which are equal to 3."},{"Start":"03:25.745 ","End":"03:29.380","Text":"So, now we just need to find the eigenvectors that are"},{"Start":"03:29.380 ","End":"03:34.745","Text":"associated with these eigenvalues and see what that gives us."},{"Start":"03:34.745 ","End":"03:39.565","Text":"We\u0027ve seen how to work out eigenvectors for"},{"Start":"03:39.565 ","End":"03:43.990","Text":"a 3 by 3 matrix in a previous video in this playlist."},{"Start":"03:43.990 ","End":"03:49.395","Text":"We will spare the gory details and go directly to the results."},{"Start":"03:49.395 ","End":"03:52.295","Text":"For Lambda 1=5,"},{"Start":"03:52.295 ","End":"03:55.645","Text":"we get an eigenvector V_1,"},{"Start":"03:55.645 ","End":"03:57.814","Text":"which is equal to 0,"},{"Start":"03:57.814 ","End":"04:01.670","Text":"1, 1, like that."},{"Start":"04:01.670 ","End":"04:06.635","Text":"This is what we get for our first eigenvector."},{"Start":"04:06.635 ","End":"04:11.030","Text":"For Lambda 2 equals Lambda 3=3, well,"},{"Start":"04:11.030 ","End":"04:15.755","Text":"this corresponds to the eigenvectors, say 
V_2,"},{"Start":"04:15.755 ","End":"04:18.180","Text":"which is equal to 1, 0, 0,"},{"Start":"04:18.550 ","End":"04:26.975","Text":"and V_3, which is equal to 0 minus 1 and 1."},{"Start":"04:26.975 ","End":"04:34.744","Text":"Now, what do we notice about this set of eigenvectors V_1, V_2, and V_3?"},{"Start":"04:34.744 ","End":"04:37.180","Text":"Well, they\u0027re all orthogonal."},{"Start":"04:37.180 ","End":"04:39.625","Text":"We can see that just by comparing them."},{"Start":"04:39.625 ","End":"04:43.255","Text":"If we were to do the dot product of V_1 and V_2, well,"},{"Start":"04:43.255 ","End":"04:46.970","Text":"this is clearly 0 because we\u0027ve got 0 here and 1 here,"},{"Start":"04:46.970 ","End":"04:49.280","Text":"and then 0 in the other 2 and V_2."},{"Start":"04:49.280 ","End":"04:52.730","Text":"The dot-product of V_1 and V_3 is 0 for"},{"Start":"04:52.730 ","End":"04:57.830","Text":"the same reason and the dot-product of V_3 and V_2 is also 0."},{"Start":"04:57.830 ","End":"05:02.315","Text":"We\u0027ve actually got an orthogonal set of vectors."},{"Start":"05:02.315 ","End":"05:07.610","Text":"Really, now we just need to normalize them as we did before."},{"Start":"05:07.610 ","End":"05:10.420","Text":"Let\u0027s just create a section over here."},{"Start":"05:10.420 ","End":"05:13.515","Text":"What\u0027s the magnitude of V_1?"},{"Start":"05:13.515 ","End":"05:18.635","Text":"The magnitude of V_1 is just going to be the square root of 2."},{"Start":"05:18.635 ","End":"05:22.085","Text":"Because we\u0027re doing 1^2 plus 1^2 square rooted,"},{"Start":"05:22.085 ","End":"05:25.490","Text":"the magnitude of V_2."},{"Start":"05:25.490 ","End":"05:32.950","Text":"Well, that\u0027s just going to be 1 and the magnitude of V_3 is again just root 2."},{"Start":"05:32.950 ","End":"05:37.280","Text":"After normalizing these and putting these into the columns, well,"},{"Start":"05:37.280 ","End":"05:41.180","Text":"we can form our matrix P and that\u0027s 
just going to"},{"Start":"05:41.180 ","End":"05:47.825","Text":"be 0,1 over root 2,1 over root 2 again."},{"Start":"05:47.825 ","End":"05:50.825","Text":"Then this one, because the magnitude was just one,"},{"Start":"05:50.825 ","End":"05:52.730","Text":"this column remains the same,"},{"Start":"05:52.730 ","End":"05:54.810","Text":"so we\u0027ve got 1, 0, 0."},{"Start":"05:55.540 ","End":"06:00.050","Text":"Then here we\u0027re just doing this column divided by root 2."},{"Start":"06:00.050 ","End":"06:04.245","Text":"We\u0027ve got 0 minus 1 over the square root of 2,"},{"Start":"06:04.245 ","End":"06:07.965","Text":"and 1 over the square root of 2 again."},{"Start":"06:07.965 ","End":"06:11.570","Text":"That\u0027s our P. Because this is orthonormal, well,"},{"Start":"06:11.570 ","End":"06:16.130","Text":"we can construct P inverse as we have many times before,"},{"Start":"06:16.130 ","End":"06:22.775","Text":"so P inverse is just equal to the transpose of P and it\u0027s really easy to work out."},{"Start":"06:22.775 ","End":"06:27.086","Text":"These 3 entries stay the same so you\u0027ve got 0,"},{"Start":"06:27.086 ","End":"06:28.945","Text":"0, 1 over root 2."},{"Start":"06:28.945 ","End":"06:30.960","Text":"These 2 entries swap,"},{"Start":"06:30.960 ","End":"06:33.225","Text":"so this becomes a 1 over root 2."},{"Start":"06:33.225 ","End":"06:34.785","Text":"This becomes a 1."},{"Start":"06:34.785 ","End":"06:36.480","Text":"These 2 entries swap,"},{"Start":"06:36.480 ","End":"06:41.805","Text":"so this now becomes a 1 over root 2 and this is a 0 and finally,"},{"Start":"06:41.805 ","End":"06:43.350","Text":"these 2 entries swap."},{"Start":"06:43.350 ","End":"06:47.925","Text":"This is a minus 1 over root 2 and this is a 0."},{"Start":"06:47.925 ","End":"06:51.890","Text":"Then finally, we just need our matrix D,"},{"Start":"06:51.890 ","End":"06:56.630","Text":"which is the easy one and that\u0027s just formed of the eigenvalues."},{"Start":"06:56.630 
","End":"06:59.180","Text":"But remember we have to do them in order."},{"Start":"06:59.180 ","End":"07:02.810","Text":"Because this corresponded to V_1,"},{"Start":"07:02.810 ","End":"07:05.000","Text":"and these corresponded to V_2 and V_3,"},{"Start":"07:05.000 ","End":"07:08.190","Text":"then we\u0027ve got to have 5 in the first one,"},{"Start":"07:08.190 ","End":"07:09.920","Text":"and then 0,"},{"Start":"07:09.920 ","End":"07:14.885","Text":"0 and then we\u0027ve got threes in the other positions on the diagonal."},{"Start":"07:14.885 ","End":"07:22.540","Text":"We\u0027ve diagonalized the matrix and we formed an orthonormal basis of R_3."}],"ID":30954},{"Watched":false,"Name":"Exercise 5","Duration":"9m 18s","ChapterTopicVideoID":29348,"CourseChapterTopicPlaylistID":294460,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.050","Text":"In this video, we\u0027re looking at an exercise that\u0027s somewhat"},{"Start":"00:04.050 ","End":"00:08.700","Text":"resembling something you might find in an exam situation."},{"Start":"00:08.700 ","End":"00:12.510","Text":"Let this matrix A be equal to 4,"},{"Start":"00:12.510 ","End":"00:16.026","Text":"3a minus 2, a^2,"},{"Start":"00:16.026 ","End":"00:18.645","Text":"4 where a is a real number,"},{"Start":"00:18.645 ","End":"00:23.535","Text":"we need to find the values of a for which A is symmetric."},{"Start":"00:23.535 ","End":"00:25.695","Text":"Then in the second part,"},{"Start":"00:25.695 ","End":"00:28.230","Text":"as we\u0027ve done many times before,"},{"Start":"00:28.230 ","End":"00:31.395","Text":"we need to use the spectral theorem to diagonalize"},{"Start":"00:31.395 ","End":"00:36.340","Text":"A for the values of a that we found in this first part."},{"Start":"00:36.340 ","End":"00:40.950","Text":"How do we find the 
values of a for which A is symmetric?"},{"Start":"00:40.950 ","End":"00:44.390","Text":"We need to know what it means for a matrix to be symmetric."},{"Start":"00:44.390 ","End":"00:51.000","Text":"That means that A is actually equal to its transpose."},{"Start":"00:51.000 ","End":"00:56.900","Text":"What that means is that this first row,"},{"Start":"00:56.900 ","End":"01:04.730","Text":"second column entry must be equal to the second row,"},{"Start":"01:04.730 ","End":"01:10.055","Text":"first column entry, because that\u0027s what the transpose operation does,"},{"Start":"01:10.055 ","End":"01:13.235","Text":"is it takes this one over to here."},{"Start":"01:13.235 ","End":"01:17.315","Text":"What that means is if we\u0027re focusing on part I now,"},{"Start":"01:17.315 ","End":"01:25.425","Text":"well what that means is that a squared must be equal to 3a minus 2."},{"Start":"01:25.425 ","End":"01:29.630","Text":"What we have is we just have a simple quadratic in a."},{"Start":"01:29.630 ","End":"01:32.810","Text":"If we bring the as or everything to one side,"},{"Start":"01:32.810 ","End":"01:38.430","Text":"then we get a^2 minus 3a plus 2 is equal to 0."},{"Start":"01:38.430 ","End":"01:41.270","Text":"This is nice because it factorizes,"},{"Start":"01:41.270 ","End":"01:45.445","Text":"and it factorizes into a minus 2,"},{"Start":"01:45.445 ","End":"01:47.505","Text":"a minus 1,"},{"Start":"01:47.505 ","End":"01:49.275","Text":"that\u0027s equal to 0."},{"Start":"01:49.275 ","End":"01:54.005","Text":"We can just read off straight away that this matrix A is symmetric,"},{"Start":"01:54.005 ","End":"01:59.000","Text":"so A is symmetric if little"},{"Start":"01:59.000 ","End":"02:07.755","Text":"a is equal to either 1 or 2."},{"Start":"02:07.755 ","End":"02:09.701","Text":"That\u0027s the first part done."},{"Start":"02:09.701 ","End":"02:11.809","Text":"Fairly simple, fairly standard."},{"Start":"02:11.809 ","End":"02:13.835","Text":"Now in this second 
part,"},{"Start":"02:13.835 ","End":"02:17.535","Text":"we\u0027re substituting in these values of a."},{"Start":"02:17.535 ","End":"02:22.745","Text":"Then we\u0027re going to diagonalize as we\u0027ve seen before in many different videos."},{"Start":"02:22.745 ","End":"02:27.935","Text":"But we\u0027re going to skip a couple of steps because some of them we\u0027ve seen multiple times."},{"Start":"02:27.935 ","End":"02:31.540","Text":"We\u0027re just going to go straight to the result."},{"Start":"02:31.540 ","End":"02:34.275","Text":"For a is equal to 1,"},{"Start":"02:34.275 ","End":"02:35.780","Text":"well what does that tell us?"},{"Start":"02:35.780 ","End":"02:38.010","Text":"Well, we put a into here, is 1."},{"Start":"02:38.010 ","End":"02:42.140","Text":"Well, then we get our big matrix, or A,"},{"Start":"02:42.140 ","End":"02:44.605","Text":"is equal to 4,"},{"Start":"02:44.605 ","End":"02:46.320","Text":"and we\u0027ve got 3,"},{"Start":"02:46.320 ","End":"02:49.500","Text":"lots of 1 minus 2, which is 1."},{"Start":"02:49.500 ","End":"02:52.135","Text":"Then this is, of course, 1 and this is 4."},{"Start":"02:52.135 ","End":"02:55.100","Text":"Now, this is good because this shows that we\u0027ve actually worked out"},{"Start":"02:55.100 ","End":"02:59.420","Text":"the correct value of a because these two entries are the same."},{"Start":"02:59.420 ","End":"03:02.000","Text":"We need to diagonalize this,"},{"Start":"03:02.000 ","End":"03:03.395","Text":"and remember what we do."},{"Start":"03:03.395 ","End":"03:05.320","Text":"It\u0027s the same procedure every time."},{"Start":"03:05.320 ","End":"03:07.190","Text":"We look for the eigenvalues,"},{"Start":"03:07.190 ","End":"03:13.325","Text":"we find the corresponding eigenvectors and then we normalize those eigenvectors"},{"Start":"03:13.325 ","End":"03:20.325","Text":"and we put those into the columns for our invertible matrix P. 
Let\u0027s do that."},{"Start":"03:20.325 ","End":"03:24.485","Text":"The eigenvalues that we get from this matrix,"},{"Start":"03:24.485 ","End":"03:26.885","Text":"we get Lambda_1,"},{"Start":"03:26.885 ","End":"03:29.090","Text":"which is equal to 3,"},{"Start":"03:29.090 ","End":"03:31.160","Text":"and we get Lambda_2,"},{"Start":"03:31.160 ","End":"03:32.795","Text":"which is equal to 5."},{"Start":"03:32.795 ","End":"03:37.010","Text":"Just as a recap, in case you\u0027ve forgotten how we get the eigenvalues,"},{"Start":"03:37.010 ","End":"03:39.620","Text":"is we use the characteristic polynomial,"},{"Start":"03:39.620 ","End":"03:43.115","Text":"and we say that A minus Lambda I,"},{"Start":"03:43.115 ","End":"03:45.455","Text":"which is the characteristic polynomial,"},{"Start":"03:45.455 ","End":"03:47.430","Text":"is equal to 0,"},{"Start":"03:47.430 ","End":"03:50.160","Text":"so that\u0027s the characteristic equation."},{"Start":"03:50.160 ","End":"03:54.890","Text":"If you do that, then you should arrive at these two eigenvalues."},{"Start":"03:54.890 ","End":"04:01.755","Text":"Now, these eigenvalues correspond to the eigenvectors,"},{"Start":"04:01.755 ","End":"04:06.090","Text":"v_1 which is equal to minus 1, 1."},{"Start":"04:06.090 ","End":"04:11.815","Text":"This eigenvalue corresponds to the eigenvector 1, 1."},{"Start":"04:11.815 ","End":"04:15.365","Text":"Now, what do we notice about v_1 and v_2?"},{"Start":"04:15.365 ","End":"04:16.754","Text":"Well, they are orthogonal,"},{"Start":"04:16.754 ","End":"04:19.780","Text":"because if we take the dot product of these two,"},{"Start":"04:19.780 ","End":"04:22.285","Text":"then we get 0. 
So that\u0027s good."},{"Start":"04:22.285 ","End":"04:25.615","Text":"We don\u0027t need to use the Gram-Schmidt to orthogonalize,"},{"Start":"04:25.615 ","End":"04:28.255","Text":"but we do need to normalize."},{"Start":"04:28.255 ","End":"04:29.755","Text":"How do we normalize again?"},{"Start":"04:29.755 ","End":"04:32.125","Text":"Well, we look at the magnitude of each of them."},{"Start":"04:32.125 ","End":"04:34.670","Text":"The magnitude of v_1,"},{"Start":"04:34.670 ","End":"04:40.255","Text":"well that\u0027s just equal to minus 1^2 plus 1^2 square rooted,"},{"Start":"04:40.255 ","End":"04:42.996","Text":"which is equal to the square root of 2."},{"Start":"04:42.996 ","End":"04:46.870","Text":"And the norm, or the magnitude, of"},{"Start":"04:46.870 ","End":"04:51.565","Text":"v_2 is going to be the square root of 2 for the same reason."},{"Start":"04:51.565 ","End":"04:54.610","Text":"Now we can construct our matrix P,"},{"Start":"04:54.610 ","End":"04:58.005","Text":"which is just the normalized eigenvectors,"},{"Start":"04:58.005 ","End":"05:01.825","Text":"and we put those into the columns as we\u0027ve done before."},{"Start":"05:01.825 ","End":"05:05.485","Text":"We\u0027re just going to get minus 1 over root 2,"},{"Start":"05:05.485 ","End":"05:07.840","Text":"1 over root 2."},{"Start":"05:07.840 ","End":"05:10.360","Text":"Then here, 1 over root 2,"},{"Start":"05:10.360 ","End":"05:12.890","Text":"1 over root 2."},{"Start":"05:12.890 ","End":"05:15.255","Text":"Now, P inverse,"},{"Start":"05:15.255 ","End":"05:17.845","Text":"well because this is orthonormal,"},{"Start":"05:17.845 ","End":"05:22.450","Text":"P inverse is just going to be equal to P transpose."},{"Start":"05:22.450 ","End":"05:25.060","Text":"P transpose is just this thing,"},{"Start":"05:25.060 ","End":"05:27.430","Text":"but we swap these two entries."},{"Start":"05:27.430 ","End":"05:29.875","Text":"But because these two entries are the same,"},{"Start":"05:29.875 
","End":"05:32.830","Text":"then we actually see that P transpose is the same"},{"Start":"05:32.830 ","End":"05:36.450","Text":"as the original matrix P. What do we get?"},{"Start":"05:36.450 ","End":"05:38.100","Text":"Well, it\u0027s just the same thing."},{"Start":"05:38.100 ","End":"05:40.515","Text":"It\u0027s minus 1 over root 2,"},{"Start":"05:40.515 ","End":"05:43.065","Text":"1 over root 2,"},{"Start":"05:43.065 ","End":"05:44.955","Text":"1 over root 2,"},{"Start":"05:44.955 ","End":"05:47.930","Text":"and then 1 over root 2 here."},{"Start":"05:47.930 ","End":"05:51.905","Text":"Now finally, all we need is our diagonal matrix"},{"Start":"05:51.905 ","End":"05:57.470","Text":"D. That\u0027s just the eigenvalues in the diagonal entries."},{"Start":"05:57.470 ","End":"06:00.155","Text":"But we have to be consistent with the order."},{"Start":"06:00.155 ","End":"06:02.020","Text":"Here is going to be 3,"},{"Start":"06:02.020 ","End":"06:03.180","Text":"this is going to be 0,"},{"Start":"06:03.180 ","End":"06:06.150","Text":"0, and this is going to be 5."},{"Start":"06:06.150 ","End":"06:09.180","Text":"We\u0027ve diagonalized A now,"},{"Start":"06:09.180 ","End":"06:14.929","Text":"and we can say that A is equal to PDP inverse,"},{"Start":"06:14.929 ","End":"06:18.950","Text":"but we\u0027ve shown that P inverse is equal to P itself,"},{"Start":"06:18.950 ","End":"06:22.170","Text":"so this is just equal to PDP."},{"Start":"06:22.250 ","End":"06:27.365","Text":"Now be careful, you can\u0027t write this as say,"},{"Start":"06:27.365 ","End":"06:32.220","Text":"P^2D because matrices are not commutative."},{"Start":"06:32.220 ","End":"06:35.160","Text":"You couldn\u0027t just bring this P to the front."},{"Start":"06:35.160 ","End":"06:38.595","Text":"So we leave it in this order."},{"Start":"06:38.595 ","End":"06:42.365","Text":"Now we\u0027re going to look at what happens for a is equal to 2."},{"Start":"06:42.365 ","End":"06:44.975","Text":"If a is equal to 
2,"},{"Start":"06:44.975 ","End":"06:48.710","Text":"our big matrix A is just equal to,"},{"Start":"06:48.710 ","End":"06:50.720","Text":"well remember what a was, it was 4,"},{"Start":"06:50.720 ","End":"06:53.650","Text":"3a minus 2, a^2, 4."},{"Start":"06:53.650 ","End":"06:55.500","Text":"These two are the same."},{"Start":"06:55.500 ","End":"06:57.600","Text":"Then if we put 2 into here,"},{"Start":"06:57.600 ","End":"06:58.750","Text":"where we get 4,"},{"Start":"06:58.750 ","End":"07:01.130","Text":"and then hopefully we should get 4 in this other one,"},{"Start":"07:01.130 ","End":"07:03.520","Text":"so 2^2 is 4, so yes."},{"Start":"07:03.520 ","End":"07:07.650","Text":"Correct. What are the eigenvalues of this one?"},{"Start":"07:07.650 ","End":"07:10.760","Text":"Well, we get an eigenvalue of Lambda_1,"},{"Start":"07:10.760 ","End":"07:13.410","Text":"which is equal to 0."},{"Start":"07:13.410 ","End":"07:17.700","Text":"We also get an eigenvalue Lambda_2,"},{"Start":"07:17.700 ","End":"07:19.605","Text":"which is equal to 8."},{"Start":"07:19.605 ","End":"07:23.450","Text":"You can solve this in the way that we\u0027ve seen many times now."},{"Start":"07:23.450 ","End":"07:27.410","Text":"Now this first eigenvalue, which is 0,"},{"Start":"07:27.410 ","End":"07:32.740","Text":"corresponds to the eigenvector minus 1, 1."},{"Start":"07:32.740 ","End":"07:36.063","Text":"The second eigenvalue, 8,"},{"Start":"07:36.063 ","End":"07:40.665","Text":"corresponds to the eigenvector 1, 1."},{"Start":"07:40.665 ","End":"07:43.685","Text":"Now again, these are orthogonal."},{"Start":"07:43.685 ","End":"07:45.520","Text":"You can see that quite easily."},{"Start":"07:45.520 ","End":"07:47.090","Text":"If you take the dot product, it\u0027s 0."},{"Start":"07:47.090 ","End":"07:49.910","Text":"But we do need to normalize them."},{"Start":"07:49.910 ","End":"07:53.870","Text":"The magnitude of v_1, well that\u0027s easy."},{"Start":"07:53.870 
","End":"07:55.425","Text":"That\u0027s just root 2,"},{"Start":"07:55.425 ","End":"07:57.690","Text":"it\u0027s the same thing that we just saw."},{"Start":"07:57.690 ","End":"08:01.230","Text":"Then the magnitude of v_2 is also root 2."},{"Start":"08:01.230 ","End":"08:03.677","Text":"Let\u0027s form our P matrix."},{"Start":"08:03.677 ","End":"08:10.800","Text":"So P is just equal to the column of this vector divided by the magnitude."},{"Start":"08:10.800 ","End":"08:13.395","Text":"We\u0027ve got minus 1 over root 2,"},{"Start":"08:13.395 ","End":"08:16.275","Text":"then we\u0027ve got 1 over root 2,"},{"Start":"08:16.275 ","End":"08:18.950","Text":"and then these other ones, this next column,"},{"Start":"08:18.950 ","End":"08:20.690","Text":"it\u0027s just going to be 1 over root 2,"},{"Start":"08:20.690 ","End":"08:23.290","Text":"1 over root 2."},{"Start":"08:23.290 ","End":"08:26.025","Text":"What\u0027s P transpose,"},{"Start":"08:26.025 ","End":"08:30.070","Text":"or rather what\u0027s P inverse?"},{"Start":"08:30.070 ","End":"08:32.410","Text":"Well, that\u0027s equal to P transpose."},{"Start":"08:32.410 ","End":"08:35.865","Text":"Because remember, P is orthonormal,"},{"Start":"08:35.865 ","End":"08:41.040","Text":"so this is just equal to minus 1 over root 2."},{"Start":"08:41.040 ","End":"08:42.840","Text":"These are actually the same again,"},{"Start":"08:42.840 ","End":"08:45.555","Text":"so it is just 1 over root 2,"},{"Start":"08:45.555 ","End":"08:47.820","Text":"1 over root 2,"},{"Start":"08:47.820 ","End":"08:50.070","Text":"1 over root 2,"},{"Start":"08:50.070 ","End":"08:53.250","Text":"which is equal to P. 
Finally,"},{"Start":"08:53.250 ","End":"08:55.470","Text":"we just need our D matrix,"},{"Start":"08:55.470 ","End":"08:58.635","Text":"and that\u0027s just the eigenvalues in the correct order."},{"Start":"08:58.635 ","End":"09:00.985","Text":"This first eigenvalue is 0,"},{"Start":"09:00.985 ","End":"09:02.755","Text":"then these two are 0s,"},{"Start":"09:02.755 ","End":"09:05.615","Text":"and then we just have 8 here."},{"Start":"09:05.615 ","End":"09:09.260","Text":"Again, we have diagonalized and we"},{"Start":"09:09.260 ","End":"09:12.950","Text":"have actually noticed that P is equal to its inverse,"},{"Start":"09:12.950 ","End":"09:18.780","Text":"so A is actually just equal to PD multiplied by P."}],"ID":30955},{"Watched":false,"Name":"Exercise 6","Duration":"3m 11s","ChapterTopicVideoID":29349,"CourseChapterTopicPlaylistID":294460,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.200","Text":"Welcome back to the Spectral Theorem playlist."},{"Start":"00:04.200 ","End":"00:06.150","Text":"It says, let A=8,5a-10,"},{"Start":"00:06.150 ","End":"00:11.580","Text":"a^2 minus 2,"},{"Start":"00:11.580 ","End":"00:14.339","Text":"where a is in the real numbers."},{"Start":"00:14.339 ","End":"00:18.345","Text":"Is there a value of a for which A is symmetric?"},{"Start":"00:18.345 ","End":"00:24.000","Text":"If it exists, use the spectral theorem to diagonalize A."},{"Start":"00:24.000 ","End":"00:27.380","Text":"Now remember what we said in the previous video."},{"Start":"00:27.380 ","End":"00:30.155","Text":"If A is symmetric,"},{"Start":"00:30.155 ","End":"00:35.085","Text":"then these 2 entries must be the same."},{"Start":"00:35.085 ","End":"00:40.190","Text":"Let\u0027s see what happens when we equate these 2 entries."},{"Start":"00:40.190 ","End":"00:42.620","Text":"When 
we equate them,"},{"Start":"00:42.620 ","End":"00:47.140","Text":"we get this equation or this polynomial in a."},{"Start":"00:47.140 ","End":"00:51.070","Text":"We get a^2= 5a-10."},{"Start":"00:51.070 ","End":"00:53.165","Text":"Bringing everything to the same side,"},{"Start":"00:53.165 ","End":"00:57.080","Text":"we get a^2 minus 5a plus 10=0."},{"Start":"00:57.080 ","End":"01:00.170","Text":"By using the quadratic formula,"},{"Start":"01:00.170 ","End":"01:06.540","Text":"that tells us that a=5 plus or minus this quantity here,"},{"Start":"01:06.540 ","End":"01:14.205","Text":"which is actually in the complex numbers because 25 minus 40 is less than 0."},{"Start":"01:14.205 ","End":"01:19.954","Text":"We\u0027ve got 25 minus 40 square rooted,"},{"Start":"01:19.954 ","End":"01:23.340","Text":"which is -15,"},{"Start":"01:23.340 ","End":"01:25.755","Text":"or the square root of -15,"},{"Start":"01:25.755 ","End":"01:29.325","Text":"and clearly that\u0027s not within the real numbers."},{"Start":"01:29.325 ","End":"01:35.495","Text":"We can conclude that A is not a real symmetric matrix."},{"Start":"01:35.495 ","End":"01:37.280","Text":"It may be symmetric,"},{"Start":"01:37.280 ","End":"01:39.995","Text":"but that would mean that A is complex."},{"Start":"01:39.995 ","End":"01:42.830","Text":"But because it\u0027s not real,"},{"Start":"01:42.830 ","End":"01:45.500","Text":"we can\u0027t use the version of the Spectral Theorem that"},{"Start":"01:45.500 ","End":"01:48.520","Text":"we have been using in the previous videos."},{"Start":"01:48.520 ","End":"01:53.090","Text":"There is however, another version of this theorem for the complex case,"},{"Start":"01:53.090 ","End":"01:57.840","Text":"but it is beyond the scope of this playlist in this topic."}],"ID":30956},{"Watched":false,"Name":"Exercise 7","Duration":"6m 
45s","ChapterTopicVideoID":29350,"CourseChapterTopicPlaylistID":294460,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:04.950","Text":"Welcome back to the next video in our Spectral Theorem Playlist."},{"Start":"00:04.950 ","End":"00:08.535","Text":"This question is very similar to ones that we\u0027ve seen before."},{"Start":"00:08.535 ","End":"00:12.030","Text":"But we actually have a scalar parameter a"},{"Start":"00:12.030 ","End":"00:16.680","Text":"now rather than just the numbers we\u0027re used to seeing."},{"Start":"00:16.680 ","End":"00:18.900","Text":"We\u0027ve got this matrix A,"},{"Start":"00:18.900 ","End":"00:20.340","Text":"which is 9,"},{"Start":"00:20.340 ","End":"00:24.225","Text":"a, a, 9 where a is a real number."},{"Start":"00:24.225 ","End":"00:28.590","Text":"We need to use the spectral theorem to diagonalize A."},{"Start":"00:28.590 ","End":"00:32.970","Text":"We approach it in the usual way as we\u0027ve seen."},{"Start":"00:32.970 ","End":"00:37.740","Text":"So what we\u0027re looking for are the eigenvalues of this matrix,"},{"Start":"00:37.740 ","End":"00:41.220","Text":"and also the corresponding eigenvectors."},{"Start":"00:41.220 ","End":"00:43.595","Text":"How do we work out the eigenvalues?"},{"Start":"00:43.595 ","End":"00:47.360","Text":"Remember, we use the characteristic equation,"},{"Start":"00:47.360 ","End":"00:53.320","Text":"which is the determinant of A minus Lambda I equals 0."},{"Start":"00:53.320 ","End":"00:55.190","Text":"What is this?"},{"Start":"00:55.190 ","End":"00:58.190","Text":"Well, A minus Lambda I,"},{"Start":"00:58.190 ","End":"01:04.790","Text":"that is just equal to this matrix taking away Lambda on the leading diagonal entries."},{"Start":"01:04.790 ","End":"01:10.270","Text":"That\u0027s just going to give us 9 minus Lambda 
a,"},{"Start":"01:10.270 ","End":"01:13.895","Text":"a, and 9 minus Lambda."},{"Start":"01:13.895 ","End":"01:19.550","Text":"Now you\u0027ll notice that this matrix assumes the same form as matrices we\u0027ve seen before,"},{"Start":"01:19.550 ","End":"01:25.395","Text":"and that is that it\u0027s both real and symmetric."},{"Start":"01:25.395 ","End":"01:31.190","Text":"We can proceed with the spectral theorem in exactly the same way,"},{"Start":"01:31.190 ","End":"01:33.770","Text":"but note here that we\u0027ve just got scalars here"},{"Start":"01:33.770 ","End":"01:37.390","Text":"that aren\u0027t defined to be specific numbers."},{"Start":"01:37.390 ","End":"01:41.345","Text":"If we want to set the determinant equal to 0,"},{"Start":"01:41.345 ","End":"01:44.225","Text":"well that\u0027s going to give us this equation here."},{"Start":"01:44.225 ","End":"01:48.370","Text":"Remember, the determinant is just AD minus BC,"},{"Start":"01:48.370 ","End":"01:59.330","Text":"so we\u0027ve got 9 minus Lambda squared minus a^2 and that\u0027s equal to 0."},{"Start":"01:59.330 ","End":"02:00.815","Text":"Now to solve this,"},{"Start":"02:00.815 ","End":"02:05.075","Text":"we can just bring the a^2 to the other side so that we get"},{"Start":"02:05.075 ","End":"02:10.540","Text":"9 minus Lambda squared is equal to a^2."},{"Start":"02:10.540 ","End":"02:14.000","Text":"Then we can just square root both sides because remember"},{"Start":"02:14.000 ","End":"02:17.105","Text":"what we\u0027re looking for is Lambda, the eigenvalues."},{"Start":"02:17.105 ","End":"02:20.615","Text":"When we square root both sides, don\u0027t forget,"},{"Start":"02:20.615 ","End":"02:24.770","Text":"we have to put a plus or minus on one of the sides as well."},{"Start":"02:24.770 ","End":"02:27.860","Text":"Just the same way we would square root any number."},{"Start":"02:27.860 ","End":"02:31.285","Text":"Once we\u0027ve done that, we can rearrange for Lambda,"},{"Start":"02:31.285 
","End":"02:38.520","Text":"and then that tells us that Lambda is just equal to 9 plus or minus a."},{"Start":"02:38.520 ","End":"02:41.045","Text":"We\u0027re just bringing this lambda to the other side,"},{"Start":"02:41.045 ","End":"02:44.630","Text":"and this plus or minus a to that side."},{"Start":"02:44.630 ","End":"02:49.235","Text":"9 minus Lambda is equal to plus or minus a."},{"Start":"02:49.235 ","End":"02:52.820","Text":"Now, how do we work out the eigenvectors?"},{"Start":"02:52.820 ","End":"02:57.425","Text":"Well, we just substitute these values for Lambda into the eigenvector,"},{"Start":"02:57.425 ","End":"03:02.120","Text":"eigenvalue equation, and that\u0027s what we\u0027ve seen multiple times in this playlist,"},{"Start":"03:02.120 ","End":"03:10.935","Text":"is just Av equals Lambda v. If we look at Lambda is 9 plus a,"},{"Start":"03:10.935 ","End":"03:12.825","Text":"which is our first eigenvalue,"},{"Start":"03:12.825 ","End":"03:16.950","Text":"so Lambda is equal to 9 plus a."},{"Start":"03:16.950 ","End":"03:20.685","Text":"Well, this gives us the eigenvector,"},{"Start":"03:20.685 ","End":"03:23.115","Text":"let\u0027s just call it v1,"},{"Start":"03:23.115 ","End":"03:27.090","Text":"and that\u0027s equal to 1, 1."},{"Start":"03:27.090 ","End":"03:31.700","Text":"Now, it\u0027s easy to see this and you can see this straight away if you just"},{"Start":"03:31.700 ","End":"03:37.810","Text":"substitute this value of Lambda into this equation using our matrix A,"},{"Start":"03:37.810 ","End":"03:45.000","Text":"and you\u0027ll see that what we actually get is we get x_2 being equal to x_1."},{"Start":"03:45.000 ","End":"03:48.113","Text":"We just set x_1 as being 1."},{"Start":"03:48.113 ","End":"03:50.030","Text":"Arbitrarily without loss of generality,"},{"Start":"03:50.030 ","End":"03:51.410","Text":"we\u0027re allowed to do that,"},{"Start":"03:51.410 ","End":"03:54.545","Text":"and then we set x_2=x_1,"},{"Start":"03:54.545 
","End":"03:56.870","Text":"so then that\u0027s 1 as well."},{"Start":"03:56.870 ","End":"03:59.540","Text":"Now our second eigenvalue,"},{"Start":"03:59.540 ","End":"04:03.320","Text":"Lambda_2, which is 9 minus a."},{"Start":"04:03.320 ","End":"04:09.510","Text":"Well, this corresponds to the eigenvector minus 1,1."},{"Start":"04:09.680 ","End":"04:14.290","Text":"Again, we use the same process of using this equation,"},{"Start":"04:14.290 ","End":"04:21.090","Text":"and what we find here is we just get that x_2 is equal to minus x_1."},{"Start":"04:21.090 ","End":"04:23.595","Text":"Now we\u0027ve got our eigenvectors,"},{"Start":"04:23.595 ","End":"04:26.190","Text":"we can see that they are indeed orthogonal,"},{"Start":"04:26.190 ","End":"04:30.520","Text":"so we just need to normalize these in the usual way to"},{"Start":"04:30.520 ","End":"04:35.140","Text":"form our columns of the invertible matrix P. Remember"},{"Start":"04:35.140 ","End":"04:39.430","Text":"how we formed P is just that"},{"Start":"04:39.430 ","End":"04:44.190","Text":"the eigenvectors of the matrix A form the columns of P,"},{"Start":"04:44.190 ","End":"04:46.080","Text":"but we need to normalize them."},{"Start":"04:46.080 ","End":"04:51.490","Text":"The first thing we need to look at is the magnitude of these two vectors, and in fact,"},{"Start":"04:51.490 ","End":"04:54.375","Text":"this should say v_2,"},{"Start":"04:54.375 ","End":"04:57.120","Text":"and the magnitude of v_1,"},{"Start":"04:57.120 ","End":"05:01.080","Text":"well that\u0027s just equal to 1^2 plus 1^2 square rooted."},{"Start":"05:01.080 ","End":"05:04.565","Text":"That\u0027s just the square root of 2, and similarly,"},{"Start":"05:04.565 ","End":"05:09.675","Text":"the magnitude of v_2 is also the square root of 2."},{"Start":"05:09.675 ","End":"05:12.645","Text":"Now we can form our P matrix,"},{"Start":"05:12.645 ","End":"05:17.295","Text":"and that\u0027s just this first eigenvector 1, 1 
normalized,"},{"Start":"05:17.295 ","End":"05:19.695","Text":"so it\u0027s going to be 1 over root 2,"},{"Start":"05:19.695 ","End":"05:21.945","Text":"1 over root 2,"},{"Start":"05:21.945 ","End":"05:27.405","Text":"and the second eigenvector is just going to be minus 1 over root 2,"},{"Start":"05:27.405 ","End":"05:29.625","Text":"1 over root 2."},{"Start":"05:29.625 ","End":"05:33.350","Text":"Now, as we\u0027ve seen many times in this playlist,"},{"Start":"05:33.350 ","End":"05:35.555","Text":"because the columns of P are now orthonormal,"},{"Start":"05:35.555 ","End":"05:43.340","Text":"we know that the inverse of P is equal to the transpose of P. What\u0027s the transpose?"},{"Start":"05:43.340 ","End":"05:46.030","Text":"Well, these two entries remain the same."},{"Start":"05:46.030 ","End":"05:48.420","Text":"We\u0027ve got a 1 over root 2 here,"},{"Start":"05:48.420 ","End":"05:50.715","Text":"and 1 over root 2 here,"},{"Start":"05:50.715 ","End":"05:53.715","Text":"and then we just swap these two entries."},{"Start":"05:53.715 ","End":"05:56.565","Text":"This now becomes a 1 over root 2,"},{"Start":"05:56.565 ","End":"05:59.545","Text":"and this is a minus 1 over root 2."},{"Start":"05:59.545 ","End":"06:01.970","Text":"Then finally, what do we need?"},{"Start":"06:01.970 ","End":"06:04.115","Text":"We need our matrix D,"},{"Start":"06:04.115 ","End":"06:07.235","Text":"and this is the easiest one because remember,"},{"Start":"06:07.235 ","End":"06:12.185","Text":"this is just made of the eigenvalues on the leading diagonal."},{"Start":"06:12.185 ","End":"06:16.280","Text":"In this one, we have 9 plus a,"},{"Start":"06:16.280 ","End":"06:19.285","Text":"and then this is going to be 0, and 0,"},{"Start":"06:19.285 ","End":"06:20.985","Text":"and then down here,"},{"Start":"06:20.985 ","End":"06:24.135","Text":"we have 9 minus a."},{"Start":"06:24.135 ","End":"06:29.870","Text":"We have indeed diagonalized our matrix A in that we can write it in this 
form,"},{"Start":"06:29.870 ","End":"06:34.115","Text":"PDP inverse, or as we\u0027ve seen,"},{"Start":"06:34.115 ","End":"06:38.015","Text":"that\u0027s the same as PDP transpose,"},{"Start":"06:38.015 ","End":"06:45.150","Text":"where P is this matrix and P transpose is this matrix."}],"ID":30957},{"Watched":false,"Name":"Exercise 8","Duration":"6m 32s","ChapterTopicVideoID":29351,"CourseChapterTopicPlaylistID":294460,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.260 ","End":"00:07.695","Text":"In this video, we have a 3-by-3 matrix A with these entries."},{"Start":"00:07.695 ","End":"00:10.380","Text":"We\u0027ve got some numbers and we\u0027ve got"},{"Start":"00:10.380 ","End":"00:16.065","Text":"some trigonometric functions that are functions of this value a,"},{"Start":"00:16.065 ","End":"00:18.300","Text":"which is a real number."},{"Start":"00:18.300 ","End":"00:22.835","Text":"We need to find the values of a for which A is symmetric."},{"Start":"00:22.835 ","End":"00:29.765","Text":"Then we need to use the spectral theorem to diagonalize A for one of these values."},{"Start":"00:29.765 ","End":"00:34.085","Text":"For this to be symmetric, what do we need?"},{"Start":"00:34.085 ","End":"00:37.955","Text":"Well, remember what it means for a matrix to be symmetric,"},{"Start":"00:37.955 ","End":"00:43.925","Text":"it means that the matrix is equal to its transpose."},{"Start":"00:43.925 ","End":"00:47.210","Text":"If we look at this matrix, well,"},{"Start":"00:47.210 ","End":"00:50.734","Text":"we don\u0027t need to deal with the diagonal entries"},{"Start":"00:50.734 ","End":"00:54.575","Text":"because when you take the transpose of a square matrix,"},{"Start":"00:54.575 ","End":"00:57.470","Text":"the diagonal entries stay in the same place."},{"Start":"00:57.470 
","End":"00:59.720","Text":"What are the things that are swapping?"},{"Start":"00:59.720 ","End":"01:04.570","Text":"Well, these two entries are going to swap."},{"Start":"01:06.590 ","End":"01:13.285","Text":"Then we\u0027ve got these two entries that are going to swap as well."},{"Start":"01:13.285 ","End":"01:17.780","Text":"But the thing is, the zeros are exactly the same in each entry,"},{"Start":"01:17.780 ","End":"01:21.200","Text":"so we don\u0027t actually need to worry about these two."},{"Start":"01:21.200 ","End":"01:27.530","Text":"So the only things that we\u0027re concerned with are sine a being equal to cosine"},{"Start":"01:27.530 ","End":"01:35.005","Text":"of Pi over 2 minus a and sine a being equal to sine 2Pi minus a."},{"Start":"01:35.005 ","End":"01:38.445","Text":"If we write that formally,"},{"Start":"01:38.445 ","End":"01:40.460","Text":"for A to be symmetric,"},{"Start":"01:40.460 ","End":"01:45.425","Text":"we require that sine a is equal to cosine of Pi over 2 minus a."},{"Start":"01:45.425 ","End":"01:50.165","Text":"Sine a is equal to sine 2Pi minus a."},{"Start":"01:50.165 ","End":"01:53.600","Text":"We\u0027re going to have to place some constraints on a in"},{"Start":"01:53.600 ","End":"01:57.260","Text":"order for both of these 2 things to be satisfied."},{"Start":"01:57.260 ","End":"01:59.990","Text":"Now, for this first condition,"},{"Start":"01:59.990 ","End":"02:03.650","Text":"sine a is equal to cosine Pi over 2 minus a."},{"Start":"02:03.650 ","End":"02:08.720","Text":"Well, this is actually satisfied for all values of a in the real numbers."},{"Start":"02:08.720 ","End":"02:12.185","Text":"This is actually an identity."},{"Start":"02:12.185 ","End":"02:17.645","Text":"If you\u0027re curious as to where this identity comes from, well,"},{"Start":"02:17.645 ","End":"02:24.455","Text":"all you have to do is consider a right-angled triangle like this."},{"Start":"02:24.455 ","End":"02:33.605","Text":"Now, if we call the 
hypotenuse on this triangle y and this length for this value here,"},{"Start":"02:33.605 ","End":"02:37.565","Text":"x and we say that this is an angle a,"},{"Start":"02:37.565 ","End":"02:39.260","Text":"then we know because it\u0027s right-angle,"},{"Start":"02:39.260 ","End":"02:42.965","Text":"this is going to be Pi by 2 radians,"},{"Start":"02:42.965 ","End":"02:45.275","Text":"which is equivalent to 90 degrees."},{"Start":"02:45.275 ","End":"02:47.720","Text":"Then this angle here, well,"},{"Start":"02:47.720 ","End":"02:52.160","Text":"that\u0027s going to be Pi over 2 minus a."},{"Start":"02:52.160 ","End":"02:54.890","Text":"What\u0027s sine a?"},{"Start":"02:54.890 ","End":"02:58.535","Text":"Sine a is just opposite over hypotenuse."},{"Start":"02:58.535 ","End":"03:08.190","Text":"That\u0027s going to be x over y. Cosine of Pi over 2 minus a."},{"Start":"03:08.190 ","End":"03:12.320","Text":"Or remember, if we\u0027re looking at the cosine of this angle here,"},{"Start":"03:12.320 ","End":"03:15.395","Text":"well, we want the adjacent over the hypotenuse."},{"Start":"03:15.395 ","End":"03:21.480","Text":"The adjacent in this sense is going to be x and the hypotenuse is still y."},{"Start":"03:21.480 ","End":"03:23.310","Text":"Equals x over y, well,"},{"Start":"03:23.310 ","End":"03:25.380","Text":"that\u0027s equal to sine a."},{"Start":"03:25.380 ","End":"03:29.300","Text":"That\u0027s where this identity comes from."},{"Start":"03:29.300 ","End":"03:30.865","Text":"Now for 2,"},{"Start":"03:30.865 ","End":"03:35.735","Text":"we need to satisfy sine a is equal to sine 2Pi minus a."},{"Start":"03:35.735 ","End":"03:37.340","Text":"If we do that,"},{"Start":"03:37.340 ","End":"03:41.270","Text":"then if we just consider the angles, so the inside,"},{"Start":"03:41.270 ","End":"03:45.110","Text":"so we got a is equal to 2Pi minus a,"},{"Start":"03:45.110 ","End":"03:48.035","Text":"which means that a is equal to Pi."},{"Start":"03:48.035 
","End":"03:54.170","Text":"But remember that the sine function is trigonometric and periodic."},{"Start":"03:54.170 ","End":"04:01.250","Text":"Sine x is actually equal to sine 2Pi plus x for every x in R. Therefore,"},{"Start":"04:01.250 ","End":"04:05.690","Text":"we get a as being equal to the Pi that we had before."},{"Start":"04:05.690 ","End":"04:11.255","Text":"But we can also add 2kPi to it as well,"},{"Start":"04:11.255 ","End":"04:14.839","Text":"where k is just some integer."},{"Start":"04:14.839 ","End":"04:18.770","Text":"That just comes from the nature of the sine function."},{"Start":"04:18.770 ","End":"04:22.805","Text":"Because 1 is satisfied for all a,"},{"Start":"04:22.805 ","End":"04:26.705","Text":"then we only need to actually consider 2 for our constraints."},{"Start":"04:26.705 ","End":"04:28.970","Text":"That means that in summary,"},{"Start":"04:28.970 ","End":"04:33.245","Text":"if a is equal to 2k plus 1 multiplied by Pi,"},{"Start":"04:33.245 ","End":"04:35.690","Text":"then both equations are satisfied,"},{"Start":"04:35.690 ","End":"04:40.300","Text":"in which case our matrix A is symmetric."},{"Start":"04:40.300 ","End":"04:42.615","Text":"Now for part 2,"},{"Start":"04:42.615 ","End":"04:48.370","Text":"remember we needed to diagonalize this matrix for a value of a."},{"Start":"04:48.370 ","End":"04:51.565","Text":"If we actually put in these values of a,"},{"Start":"04:51.565 ","End":"04:55.600","Text":"then you\u0027ll see wherever we had the trigonometric expression,"},{"Start":"04:55.600 ","End":"04:58.600","Text":"then those are just equal to 0 now,"},{"Start":"04:58.600 ","End":"05:00.055","Text":"because that\u0027s just from,"},{"Start":"05:00.055 ","End":"05:05.305","Text":"if we just quickly draw the graph of sine x."},{"Start":"05:05.305 ","End":"05:11.340","Text":"Well, we were only looking at these integer multiples of Pi,"},{"Start":"05:11.340 ","End":"05:13.760","Text":"so 0, Pi, 2Pi,"},{"Start":"05:13.760 
","End":"05:17.050","Text":"and at these points, it\u0027s always equal to 0."},{"Start":"05:17.050 ","End":"05:19.945","Text":"These entries are now just 0 in the matrix."},{"Start":"05:19.945 ","End":"05:23.375","Text":"Now, this is very easy to diagonalize."},{"Start":"05:23.375 ","End":"05:26.480","Text":"The reason for that is because we can"},{"Start":"05:26.480 ","End":"05:30.460","Text":"actually just read off the eigenvalues straightaway."},{"Start":"05:30.460 ","End":"05:35.225","Text":"We\u0027ve got 3 eigenvalues, and they are as follows."},{"Start":"05:35.225 ","End":"05:38.000","Text":"So we got Lambda_1 is equal to 3,"},{"Start":"05:38.000 ","End":"05:40.955","Text":"Lambda_2 is equal to 0,"},{"Start":"05:40.955 ","End":"05:44.635","Text":"and Lambda_3 is equal to minus 6."},{"Start":"05:44.635 ","End":"05:45.935","Text":"Now, how did we get that?"},{"Start":"05:45.935 ","End":"05:51.050","Text":"Well, all we\u0027ve done is we\u0027ve taken the characteristic equation,"},{"Start":"05:51.050 ","End":"05:56.645","Text":"that is the determinant of A minus Lambda I set equal to 0,"},{"Start":"05:56.645 ","End":"05:59.240","Text":"because all the other entries are 0,"},{"Start":"05:59.240 ","End":"06:03.530","Text":"essentially the determinant is actually just 3 minus"},{"Start":"06:03.530 ","End":"06:08.445","Text":"Lambda multiplied by minus Lambda multiplied by"},{"Start":"06:08.445 ","End":"06:13.160","Text":"minus 6 minus Lambda and then we just set that equal to 0 and there we"},{"Start":"06:13.160 ","End":"06:18.485","Text":"can just read off exactly what the eigenvalues are."},{"Start":"06:18.485 ","End":"06:23.465","Text":"Now, we can work out the eigenvectors as well in the exact same way."},{"Start":"06:23.465 ","End":"06:28.490","Text":"But just for the sake of not repeating too much from previous videos,"},{"Start":"06:28.490 ","End":"06:32.760","Text":"I\u0027ll leave it to you to do as an 
exercise."}],"ID":30958},{"Watched":false,"Name":"Exercise 9","Duration":"3m 44s","ChapterTopicVideoID":29352,"CourseChapterTopicPlaylistID":294460,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:03.420","Text":"We\u0027re looking at a slightly different type of"},{"Start":"00:03.420 ","End":"00:07.440","Text":"question from the ones we\u0027ve seen so far in this playlist."},{"Start":"00:07.440 ","End":"00:11.760","Text":"Hopefully, this will address some useful theory that you may have wondered"},{"Start":"00:11.760 ","End":"00:16.770","Text":"about and that could potentially come up in an exam situation."},{"Start":"00:16.770 ","End":"00:23.114","Text":"Let A be a real symmetric matrix with the following diagonalization."},{"Start":"00:23.114 ","End":"00:26.388","Text":"A is equal to P_1,"},{"Start":"00:26.388 ","End":"00:28.215","Text":"D_1, P_1 transpose."},{"Start":"00:28.215 ","End":"00:37.035","Text":"Remember, P_1 was this invertible matrix comprised of the eigenvectors in its columns,"},{"Start":"00:37.035 ","End":"00:43.530","Text":"and D_1 was this diagonal matrix that was comprised of the eigenvalues."},{"Start":"00:43.700 ","End":"00:46.385","Text":"We\u0027ve also got a matrix B,"},{"Start":"00:46.385 ","End":"00:52.712","Text":"which is a real symmetric matrix with the diagonalization B is equal to P_2, D_2,"},{"Start":"00:52.712 ","End":"00:59.655","Text":"P_2 transpose, where P_1 and P_2 are orthogonal matrices."},{"Start":"00:59.655 ","End":"01:02.310","Text":"The question that we have is,"},{"Start":"01:02.310 ","End":"01:07.910","Text":"can this matrix A plus B be diagonalized into"},{"Start":"01:07.910 ","End":"01:14.865","Text":"the following form: A plus B is equal to P_1 plus P_2 D,"},{"Start":"01:14.865 ","End":"01:18.535","Text":"P_1 plus P_2 transpose."},{"Start":"01:18.535 
","End":"01:24.890","Text":"Now, this question is a little bit tricky because the matrix A plus B,"},{"Start":"01:24.890 ","End":"01:26.720","Text":"so we\u0027ll make a note of this."},{"Start":"01:26.720 ","End":"01:33.995","Text":"If A and B are in fact real symmetric,"},{"Start":"01:33.995 ","End":"01:44.510","Text":"then A plus B is also real symmetric, which means that the matrix A plus B is also diagonalizable."},{"Start":"01:44.510 ","End":"01:53.095","Text":"Now, is it necessarily true that A plus B can be diagonalized into this exact form?"},{"Start":"01:53.095 ","End":"01:55.720","Text":"Well, let\u0027s just see."},{"Start":"01:55.720 ","End":"01:59.420","Text":"The answer is no,"},{"Start":"01:59.420 ","End":"02:05.590","Text":"it cannot be diagonalized or not generally diagonalized into the following form."},{"Start":"02:05.590 ","End":"02:09.940","Text":"We will just prove this by a brief counterexample."},{"Start":"02:09.940 ","End":"02:17.565","Text":"Let\u0027s say that our matrix P_1 was equal to, I don\u0027t know, 1,"},{"Start":"02:17.565 ","End":"02:20.090","Text":"0, 0, 1,"},{"Start":"02:20.090 ","End":"02:22.225","Text":"which as we can see,"},{"Start":"02:22.225 ","End":"02:26.890","Text":"is orthogonal because the dot product of these columns is 0."},{"Start":"02:26.890 ","End":"02:32.105","Text":"Let\u0027s say maybe P_2 is equal to 0,"},{"Start":"02:32.105 ","End":"02:39.020","Text":"1, 1, 0."},{"Start":"02:39.020 ","End":"02:43.130","Text":"Well then, these are both orthogonal matrices,"},{"Start":"02:43.130 ","End":"02:46.625","Text":"so you can see that the dot product of each of these columns is 0."},{"Start":"02:46.625 ","End":"02:51.035","Text":"But if we do P_1 plus P_2,"},{"Start":"02:51.035 ","End":"02:52.985","Text":"well, then what does that give us?"},{"Start":"02:52.985 ","End":"02:59.105","Text":"Well, we\u0027re just adding the corresponding entries just the way we usually add matrices."},{"Start":"02:59.105 
","End":"03:01.155","Text":"That\u0027s going to give us 1,"},{"Start":"03:01.155 ","End":"03:03.160","Text":"1, 1, 1."},{"Start":"03:03.160 ","End":"03:09.150","Text":"We can immediately see that this matrix is not orthogonal."},{"Start":"03:09.150 ","End":"03:14.280","Text":"If we take the dot product of these 2 vectors, remember,"},{"Start":"03:14.280 ","End":"03:22.055","Text":"usually these are our eigenvectors that we found from the matrix to be diagonalized."},{"Start":"03:22.055 ","End":"03:26.420","Text":"Well, we can see that we\u0027ve got 1 times 1 plus 1 times 1,"},{"Start":"03:26.420 ","End":"03:30.380","Text":"which is 2, and obviously not equal to 0."},{"Start":"03:30.380 ","End":"03:34.190","Text":"The question is, or the answer to this question,"},{"Start":"03:34.190 ","End":"03:37.550","Text":"can A plus B be diagonalized in the following form?"},{"Start":"03:37.550 ","End":"03:40.310","Text":"Well, generally no, it cannot."},{"Start":"03:40.310 ","End":"03:44.790","Text":"We have proved this by a counterexample."}],"ID":30959},{"Watched":false,"Name":"Exercise 10","Duration":"4m 41s","ChapterTopicVideoID":29353,"CourseChapterTopicPlaylistID":294460,"HasSubtitles":true,"ThumbnailPath":null,"UploadDate":null,"DurationForVideoObject":null,"Description":null,"MetaTitle":null,"MetaDescription":null,"Canonical":null,"VideoComments":[],"Subtitles":[{"Start":"00:00.000 ","End":"00:02.550","Text":"Like the previous video,"},{"Start":"00:02.550 ","End":"00:06.375","Text":"we\u0027re going to be looking at a proof-type question"},{"Start":"00:06.375 ","End":"00:10.710","Text":"relating to diagonalization of matrices."},{"Start":"00:10.710 ","End":"00:12.960","Text":"Here\u0027s the question. 
It says,"},{"Start":"00:12.960 ","End":"00:16.500","Text":"let A be a real symmetric matrix."},{"Start":"00:16.500 ","End":"00:22.875","Text":"Just like the matrices that we\u0027ve seen all throughout this playlist, the question is,"},{"Start":"00:22.875 ","End":"00:31.965","Text":"does there exist a matrix B such that A is equal to B multiplied by B transpose?"},{"Start":"00:31.965 ","End":"00:35.810","Text":"Now, the answer to this is yes, at least when the eigenvalues of A are nonnegative,"},{"Start":"00:35.810 ","End":"00:38.435","Text":"and we\u0027re going to prove it."},{"Start":"00:38.435 ","End":"00:41.690","Text":"There are a couple of steps to this proof,"},{"Start":"00:41.690 ","End":"00:44.690","Text":"but it\u0027s really not too difficult and actually hinges"},{"Start":"00:44.690 ","End":"00:48.275","Text":"on most of the ideas that we\u0027ve already seen before."},{"Start":"00:48.275 ","End":"00:52.250","Text":"But it\u0027s one of those proofs that\u0027s just a bit more obvious"},{"Start":"00:52.250 ","End":"00:56.720","Text":"once you see it and not really that easy to come up with yourself."},{"Start":"00:56.720 ","End":"00:59.470","Text":"Here\u0027s how the proof goes."},{"Start":"00:59.470 ","End":"01:04.970","Text":"A is a real symmetric matrix and according to the spectral theorem,"},{"Start":"01:04.970 ","End":"01:09.215","Text":"there exists an invertible and orthogonal matrix P, which,"},{"Start":"01:09.215 ","End":"01:16.860","Text":"as we recall, was made via the eigenvectors of the matrix A and then normalized,"},{"Start":"01:16.860 ","End":"01:26.955","Text":"and put into its columns, and a diagonal matrix D such that A is equal to PDP transpose."},{"Start":"01:26.955 ","End":"01:31.340","Text":"Because remember when P was orthonormal,"},{"Start":"01:31.340 ","End":"01:37.750","Text":"then the transpose of P was equal to its inverse or vice versa."},{"Start":"01:37.750 ","End":"01:40.895","Text":"D has the following form."},{"Start":"01:40.895 ","End":"01:48.215","Text":"Remember, D 
was just comprised of the eigenvalues in the leading diagonal of its matrix."},{"Start":"01:48.215 ","End":"01:53.270","Text":"If we take C, which is equal to the square root of D,"},{"Start":"01:53.270 ","End":"01:57.455","Text":"because remember when we have a diagonal matrix in this form,"},{"Start":"01:57.455 ","End":"02:01.310","Text":"then whatever power we raise this to, for example,"},{"Start":"02:01.310 ","End":"02:05.515","Text":"if we raise this to the power of say, 1/2,"},{"Start":"02:05.515 ","End":"02:08.120","Text":"then all we\u0027re going to be doing is we are going to be square"},{"Start":"02:08.120 ","End":"02:12.480","Text":"rooting all these entries on the diagonal."},{"Start":"02:12.480 ","End":"02:15.910","Text":"That\u0027s just a nice feature of diagonal matrices."},{"Start":"02:15.910 ","End":"02:21.185","Text":"If we take C as being the square root of all these leading diagonal entries (real, provided the eigenvalues are nonnegative),"},{"Start":"02:21.185 ","End":"02:25.430","Text":"then D is actually just going to be C squared."},{"Start":"02:25.430 ","End":"02:28.625","Text":"Because remember, if we square this matrix,"},{"Start":"02:28.625 ","End":"02:31.505","Text":"then we\u0027re squaring all of the square roots."},{"Start":"02:31.505 ","End":"02:37.950","Text":"Then we just get back to D. 
We\u0027re choosing C as this matrix,"},{"Start":"02:37.950 ","End":"02:40.410","Text":"so that D is equal to C squared."},{"Start":"02:40.410 ","End":"02:42.695","Text":"If we choose our matrix B,"},{"Start":"02:42.695 ","End":"02:47.765","Text":"because remember we\u0027re looking for a matrix B such that this thing is satisfied."},{"Start":"02:47.765 ","End":"02:50.225","Text":"If we choose B is equal to"},{"Start":"02:50.225 ","End":"02:56.850","Text":"this invertible matrix P multiplied by C which is the square root of D,"},{"Start":"02:56.850 ","End":"03:02.850","Text":"then BB transpose, well that\u0027s just equal to PC,"},{"Start":"03:02.850 ","End":"03:08.385","Text":"because remember B is equal to PC, so this is PC multiplied by PC transpose,"},{"Start":"03:08.385 ","End":"03:13.485","Text":"which is equal to PCC transpose P transpose."},{"Start":"03:13.485 ","End":"03:17.550","Text":"Just to clarify from here to here,"},{"Start":"03:17.550 ","End":"03:19.580","Text":"we\u0027re just using the identity."},{"Start":"03:19.580 ","End":"03:21.050","Text":"If we have 2 matrices,"},{"Start":"03:21.050 ","End":"03:23.920","Text":"say M_1 and M_2,"},{"Start":"03:23.920 ","End":"03:27.065","Text":"and we take the transpose of that."},{"Start":"03:27.065 ","End":"03:33.395","Text":"Well, then what that gives us is M_2 transpose M_1 transpose."},{"Start":"03:33.395 ","End":"03:38.940","Text":"This is equal to PC^2 P transpose."},{"Start":"03:38.940 ","End":"03:42.840","Text":"Because remember, C is a diagonal matrix,"},{"Start":"03:42.840 ","End":"03:45.965","Text":"so C is real and symmetric,"},{"Start":"03:45.965 ","End":"03:50.600","Text":"and so C transpose is just equal to C. 
That\u0027s how we go"},{"Start":"03:50.600 ","End":"03:57.120","Text":"from this step to this step because C is just equal to C transpose."},{"Start":"03:57.120 ","End":"04:01.305","Text":"Then we\u0027ve got PC squared Pt."},{"Start":"04:01.305 ","End":"04:03.675","Text":"Remember what C^2 was?"},{"Start":"04:03.675 ","End":"04:08.555","Text":"C^2 was just equal to D. Then we\u0027ve got,"},{"Start":"04:08.555 ","End":"04:14.760","Text":"in other words, this thing here is equal to PDP transpose."},{"Start":"04:14.760 ","End":"04:16.429","Text":"As we said before,"},{"Start":"04:16.429 ","End":"04:18.395","Text":"that this is equal to A."},{"Start":"04:18.395 ","End":"04:21.440","Text":"We\u0027ve actually found a matrix B,"},{"Start":"04:21.440 ","End":"04:25.035","Text":"which we said is equal to PC,"},{"Start":"04:25.035 ","End":"04:29.105","Text":"that when we do BB transpose,"},{"Start":"04:29.105 ","End":"04:31.160","Text":"then we get our matrix A."},{"Start":"04:31.160 ","End":"04:35.795","Text":"We have solved or proven this theorem."},{"Start":"04:35.795 ","End":"04:41.250","Text":"That will conclude this playlist on the spectral theorem."}],"ID":30960}],"Thumbnail":null,"ID":294460}]
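The construction walked through in this final lesson (diagonalize A as PDPᵀ, take C as the entrywise square root of D, choose B = PC so that BBᵀ = A) can be checked numerically. The sketch below is an illustration, not part of the course material: the example matrix A is an arbitrary choice, and NumPy's `eigh` stands in for the diagonalization step.

```python
import numpy as np

# Example: a real symmetric positive semi-definite matrix (arbitrary choice).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Spectral theorem: A = P D P^T with P orthogonal and D diagonal
# (the eigenvalues sit on the leading diagonal of D).
eigenvalues, P = np.linalg.eigh(A)
D = np.diag(eigenvalues)

# C = sqrt(D): raising a diagonal matrix to the power 1/2 just
# square-roots each entry on the diagonal.
C = np.diag(np.sqrt(eigenvalues))

# Choose B = P C. Then B B^T = P C (P C)^T = P C C^T P^T = P C^2 P^T
#                            = P D P^T = A, as claimed in the lesson.
B = P @ C
assert np.allclose(B @ B.T, A)
```

Note that the step C Cᵀ = C² relies on C being diagonal (hence symmetric), exactly as the lesson points out, and the whole construction requires the eigenvalues to be non-negative so that their square roots are real.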

[{"ID":7308,"Videos":[10133,10134,10135,10136,10137,10138,10139,10140,10141,10142]},{"ID":7309,"Videos":[10143,10144,10145,10146,10147,10148,10149,10150,10151,10152,10153]},{"ID":7310,"Videos":[10123,10124,10125,10126,10127,10128,10129,10130,10131,10132]},{"ID":7311,"Videos":[14159,10180,10181,10182,10183,10184,10185,14160,10186,10187]},{"ID":7312,"Videos":[10154,10155,10156,10157,10158,10159,10160,10161,10162,10163,10164,10165,10166,10167]},{"ID":7313,"Videos":[10168,10169,10170,10171,10172,10173,10175,10174,10176,10177,10178,10179]},{"ID":253224,"Videos":[27109,27105,27106,27107,27108]},{"ID":253225,"Videos":[27114,27115,27116,27117,27119,27118,27120,27110,27121,27111,27112,27113]},{"ID":253226,"Videos":[27128,27126,27125,27127,27129,27130,27131,27132,27133,27134,27135,27136,27138,27137,27122,27123,27124]},{"ID":294460,"Videos":[30947,30948,30949,30951,30952,30953,30954,30955,30956,30957,30958,30959,30960]}];


