Hey, everyone! I’ve got another quick footnote for you between chapters today. When I talked about linear transformations so far, I’ve only really talked about transformations from 2-D vectors to other 2-D vectors, represented with 2-by-2 matrices, or from 3-D vectors to other 3-D vectors, represented with 3-by-3 matrices. But several commenters have asked about non-square matrices, so I thought I’d take a moment to show what those mean geometrically. By now in the series, you actually have most of the background you need to start pondering a question like this on your own, but I’ll start talking through it, just to give a little mental momentum.

It’s perfectly reasonable to talk about transformations between dimensions, such as one that takes 2-D vectors to 3-D vectors. Again, what makes one of these linear is that grid lines remain parallel and evenly spaced, and that the origin maps to the origin. What I have pictured here is the input space on the left, which is just 2-D space, and the output of the transformation shown on the right. The reason I’m not showing the inputs move over to the outputs, like I usually do, is not just animation laziness. It’s worth emphasizing that the 2-D vector inputs are very different animals from these 3-D vector outputs, living in a completely separate, unconnected space.

Encoding one of these transformations with a matrix is really just the same thing as what we’ve done before. You look at where each basis vector lands and write the coordinates of the landing spots as the columns of a matrix. For example, what you’re looking at here is the output of a transformation that takes i-hat to the coordinates (2, -1, -2) and j-hat to the coordinates (0, 1, 1). Notice, this means the matrix encoding our transformation has 3 rows and 2 columns, which, to use standard terminology, makes it a 3-by-2 matrix. In the language of the last video, the column space of this matrix, the place where all the vectors land, is a 2-D plane slicing through the origin of 3-D space. But the matrix is still full rank, since the number of dimensions in this column space is the same as the number of dimensions of the input space.

So, if you see a 3-by-2 matrix out in the wild, you can know that it has the geometric interpretation of mapping two dimensions to three dimensions, since the two columns indicate that the input space has two basis vectors, and the three rows indicate that the landing spots for each of those basis vectors are described with three separate coordinates.

Likewise, if you see a 2-by-3 matrix with two rows and three columns, what do you think that means? Well, the three columns indicate that you’re starting in a space that has three basis vectors, so we’re starting in three dimensions; and the two rows indicate that the landing spot for each of those three basis vectors is described with only two coordinates, so they must be landing in two dimensions. So it’s a transformation from 3-D space onto the 2-D plane, a transformation that should feel very uncomfortable if you imagine going through it.

You could also have a transformation from two dimensions to one dimension. One-dimensional space is really just the number line, so a transformation like this takes in 2-D vectors and spits out numbers. Thinking about grid lines remaining parallel and evenly spaced is a little bit messy due to all of the squishification happening here. So in this case, the visual understanding of what linearity means is that if you have a line of evenly spaced dots, they would remain evenly spaced once they’re mapped onto the number line. One of these transformations is encoded with a 1-by-2 matrix, each of whose two columns has just a single entry. The two columns represent where the basis vectors land, and each one of those columns requires just one number, the number that that basis vector landed on.

This is actually a surprisingly meaningful type of transformation, with close ties to the dot product, and I’ll be talking about that next video. Until then, I encourage you to play around with this idea on your own, contemplating the meanings of things like matrix multiplication and linear systems of equations in the context of transformations between different dimensions. Have fun!
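One way to play with the 3-by-2 example from the video is to code it up. Here is a minimal Python sketch (my own illustration, not from the video; the helper name `transform` is mine) that applies the matrix whose columns are the landing spots of i-hat and j-hat:

```python
# The 3x2 matrix from the video, stored as its two columns:
# where i-hat and j-hat land in 3-D space.
I_HAT = (2, -1, -2)
J_HAT = (0, 1, 1)

def transform(v):
    """Apply the 3x2 matrix to a 2-D vector (x, y).

    The output is x * (first column) + y * (second column): a 3-D
    vector lying on the plane spanned by the two columns.
    """
    x, y = v
    return tuple(x * a + y * b for a, b in zip(I_HAT, J_HAT))

print(transform((1, 0)))  # i-hat lands at (2, -1, -2)
print(transform((0, 1)))  # j-hat lands at (0, 1, 1)
print(transform((1, 1)))  # (2, 0, -1): still on that same 2-D plane
```

Every output is a combination of the two columns, which is why the column space is a 2-D plane through the origin of 3-D space.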

Thanks for this video; this question irritated me for about 4 days. Thank you so much.

I am familiar with linear algebra. I first learnt from the Khan Academy videos. That took quite a while because a) I was new to the topic and b) Khan Academy goes into the number crunching quite a bit so it can be difficult to see the wood for the trees. I persevered and grasped a lot of what the videos were saying.

Next stop was MIT's Linear Algebra videos here on YouTube with Gilbert Strang. I strongly recommend people watch these. Search "MIT 18.06". Simply excellent! Just like 3Blue1Brown, the conversation is conducted at the higher level where the important concepts are grasped. Linear algebra is best understood when looking at matrix-matrix or matrix-vector products without drilling down into the component-wise arithmetic, which gets verbose real quick.

So a neural net with 2 inputs and 3 outputs has a 3 x 2 weight matrix, which can be visualized as linearly mapping 2D points in input space to 3D points in output space… or as deforming the 2D plane by crumpling it up in 3D.
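That neural-net reading can be sketched in a few lines of Python; this is just an illustration of the shape of the computation, and the weight values below simply reuse the video's landing-spot columns, not anything from a real network:

```python
# A layer with 2 inputs and 3 outputs is a 3x2 matrix acting on
# 2-D input vectors (no bias or activation, to keep it linear).
W = [[2.0, 0.0],    # 3 rows (outputs) x 2 columns (inputs)
     [-1.0, 1.0],
     [-2.0, 1.0]]

def forward(x):
    """Plain linear layer: y = W x, a map from 2-D to 3-D."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

print(forward([1.0, 0.0]))  # first column of W: [2.0, -1.0, -2.0]
print(forward([0.0, 1.0]))  # second column of W: [0.0, 1.0, 1.0]
```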

Thank you very much. Very nice.

Could you please do a differential equations series?

I see your sneaky pi!

Love the way you teach. Seeing the solution of a simple linear equation in the light of a 2d to 1d mapping that takes a line to a point via two 1d basis vectors is such a beautiful and powerful idea. My thanks to you.

Question: so if we can map 3D into 2D, is there then a transformation to map 3D graphics into an isometric view, and another for an orthogonal view? Are those standard?

Is the transformation at 3:04 a transformation squishing the space onto a plane? (null space)

I wish I had known this in my early years. Thank you! Do you plan to do Tensors? That beast needs to be tamed 🙂

Best channel ever

I've been trying to follow your idea of areas and the determinant that you explained in one of your past videos; in the case of non-square matrices, is there no way to tell, for example, what happens to a volume in a transformation from 3D to 2D, and vice versa?

The x and y axes in the 2D input are not the same x and y axes as in the 3D output, since the plane of the input basis vectors after the transformation passes through the fixed origin at an inclination to the xy plane of the 3D output space…

In short, if you measure the area in the input and output spaces, you will be using different coordinate systems: the x and y axes of the input space, and the x and y axes of the output space, respectively…

Therefore you cannot measure the determinant of a non-square matrix.

Hinting that dot product is a linear transformation into 1-dimensional space

When math has cliffhangers

100/10! I am currently studying physics at university. How can it be, that this is the only place I can find that shows this explanation? No book, no video, no script, no teacher, tutor or professor have ever explained linear algebra via this eye opening geometric interpretation. I struggled with remembering all the different proofs and definitions. Now all my problems vanished. Simply by working through this series. Most of the time I can now instantly see and explain the solution to 99% of the problems in my Linear Algebra class. – All praise to you!

So what you're saying is we've already discovered inter-dimensional travel

Please make a video on Singular Value Decomposition. The animations are great and Linear Algebra is even better with visualizations.

Please. Factorizations are awesome!!!

If I have a geometric shape say parabola, and I want to find the transform which will create a space where all the points on parabola are mapped onto a line. How to find the transform that does this?

@3Blue1Brown, in the last video you said A inverse doesn't exist when det A = 0, as there is no way to convert a line into a plane, but in this video you tell us about 1D-to-2D transformations. I am confused.

So, diving deeper into determinants of non-square matrices (case by case):

Going from a lower dimension (2) to a higher one (3) with linearly independent column vectors, i.e.

[1 2

3 4

5 6],

Why can't we define the determinant? Since the two column vectors will lie on a single plane, we can calculate the area enclosed by the vectors by looking at the quadrilateral formed, in the same way that we calculate the area enclosed by the quadrilateral (unit square) formed by i and j.

then, since determinant = area enclosed by column vectors / area enclosed by i and j

= area enclosed by column vectors

In the case that the column vectors are linearly dependent in R3, i.e. they lie on the same line, the area enclosed is 0, and thus the determinant = 0.

Going from a higher dimension (3) to a lower one (2), i.e.

[1 2 3

4 5 6],

the ratio between the volume of the shape formed by i, j and k, and the volume of the three column vectors, i.e. the determinant, must equal 0, as the column vectors will always be linearly dependent, thus forming a shape of a lower dimension. In the case of R3 -> R2, determinant = volume enclosed by column vectors / volume enclosed by i, j and k

= 0 / 1

= 0

PS: I love your videos

PPS: can I get a Fields Medal now? k thanks
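For what it's worth, there is a hedged way to make this comment's area idea precise: the usual determinant is undefined for a 3x2 matrix A, but the area-scaling factor the comment describes is sqrt(det(AᵀA)), sometimes called the Gram determinant. A small Python sketch (my own, not from the video) using the comment's example matrix [1 2; 3 4; 5 6]:

```python
import math

# Columns of the 3x2 example matrix [1 2; 3 4; 5 6].
u = (1, 3, 5)
v = (2, 4, 6)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# The Gram matrix A^T A is 2x2, so its determinant IS defined:
# det(A^T A) = (u.u)(v.v) - (u.v)^2.
gram_det = dot(u, u) * dot(v, v) - dot(u, v) ** 2

# Its square root is the area of the image of the unit square,
# i.e. the area-scaling factor of the 2-D -> 3-D map.
area_scale = math.sqrt(gram_det)

print(gram_det)    # 24
print(area_scale)  # about 4.899
```

If the two columns were linearly dependent, this quantity would be 0, matching the comment's observation about collinear columns.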

I didn't know the word squishification existed.

ty !

It's always nice to see some digits of pi thrown in at random places 😉

At 2:46 you say that a 2x3 matrix has 3 basis vectors… wouldn't it only have 2? If you think of it as a 3x3 with a bottom row of 0's, then the third column would be linearly dependent (as it has no pivot). Then wouldn't the column space be the span of the first two columns, and the matrix have 2 basis vectors?

Recommendable

What if there is a transformation of 2D space that squishes things onto a line? Isn't it also an example of transformation between dimensions? If yes, then why have we been representing that with square matrices till now?

Can somebody explain why the determinant can't exist for non-square matrices? (With regard to the fact that the determinant of a matrix gives the measure of increase or decrease in the signed area/volume of the vector space.)

Why do determinants exist only for square matrices?

E.g.: we have a 3x2 matrix (i.e., it takes a 2D space and transforms it into a 3D space); then any area in the 2D space will have a corresponding scaled area in the 3D space, which makes it seem that determinants should exist for non-square matrices. But how can we find them?

This series of videos is incredible. It would be perfect if you made one about the analysis of differential elements to build integrals and form objects.

But, if the column space of a non-square matrix is same as that of the original matrix and has a full rank, then the determinant must exist right? I mean, even when we look at the animation, we clearly see that the area/volume has expanded or squished in contrast to the previous one. THEN WHY ISN'T THERE ANY DETERMINANT?

Leaving the time behind I was only curious to see when the red arrow will dance.

[1 2]

really helpful

What is the name of the music in the beginning?

thank you for making these videos, you inspire me with every video

Your voice is transforming my sexual preferences

Does this mean that all of the vector outputs from R-two to R-three are 3 dimensional vectors, and that they will only lie on the plane in the 3-D Space?

2:57 There are 3 basis vectors; shouldn't at least one of them lie on the same plane as the other two in the 3D world?

wow amazing, really appreciate your work. Thanks for sharing

If this guy can't make you learn, nothing can.

I cannot comprehend the 2x3 case. If the columns are vectors with two components (as in the matrix shown), how do they lie in 3-D space and then, by the interdimensional transformation, become two-component vectors in 2-D?

I have become your fan. Awesome job.

I thought it wasn't possible to transform from a lower dimension to a higher dimension?

Just awesome

but there is a bit of animation laziness 🙂

Hello, sorry to ask this but: could someone tell me how to find the determinant of a 3×2 matrix? (where the rank stays the same)

I was pondering the fact that a non-square matrix does not have a determinant. Here is my tentative answer. The determinant of a matrix is a number. This number tells us how to scale an area (in 2D space), a volume (in 3D space), or a hypervolume (in n-dimensional space). If you go from a 3D space to a 2D space, there is no scalar that can scale a volume into an area, and vice versa.

Can you make a video about the pseudo-inverse: how it works and what it means, like minimizing over the space that the matrix cannot represent? And also the sensitivity of the values of a big matrix to keep having a determinant different from 0, and why the pseudo-inverse still works? BTW, thank you for your videos, they are awesome!!!

pi reference at 2:48, delicious

How does a 2d vector transform into one d ?? When row is 2,(2×1)

I binge watch this.

You're crazy good at doing this!! How do you do it? Do you study data on best practices or something? Awesome!! 😀

What would be the inverse of an nxm matrix if n&lt;m? Or does it not exist? I can visualise it if n&gt;=m

What software do you use to make the animations?

The real curiosity is how you learned this knowledge. I must learn this to build my own learning path, @3Blue1Brown

Another fantastic, mind-blowing video, 3Blue1Brown. I was wondering, what does it mean geometrically to take the transpose of a matrix or vector? When I took classes on optimization, we'd often just transpose a vector of decision variables to make the math work, and I'd wonder how legit that was.

I understand the fact that i-hat and j-hat can be projected into 3D via a 3x2 matrix, but I'm more confused when talking about 3 vectors from 3D becoming 2D vectors.

And is there by any chance a det = 0 in this second operation (2x3 matrices)?

I came back here from the dot product since I struggled a bit, and now I'm even more confused.

Please help, I don't want to go back to 'what is a vector'…

What about the transpose of a matrix?

Nonsquare matrices don't have determinants. Only square matrices do.

0:48 It's a little bit confusing if you think about the input, for example, as a 2x1 matrix (a 1D column), so in this case we have a transformation of a 2x1 matrix to a 3x1 matrix.

so with square matrices you can just squish space into lower dimensions, but not expand it to higher ones (rank<=dimension). Are 2×3 matrices able to represent the same transformation as 3×3 matrices of rank 2? As both of them could "squish the same span of vectors onto the same plane"?

Somehow I understood everything about non-square matrices (except that their rank does not change)… After I read the title of the video and the joke at the beginning…

So why can't we calculate the determinant of a non-square matrix, e.g. the 3x2 matrix A mentioned above? Like T_A(x): before the transformation, x represents a plane in 2D space; then, after the transformation, it's still a plane in 3D space. In this sense, we should be able to calculate the determinant of it.

Oh shit! So with a nonsquare, 3 x 2 matrix

[0 0]

[0 0]

[1 0]

one could figure out that a 2D plane exists within a higher-dimensional 3D space, with the plane rotated 90° along its y-axis to end with ĵ at z coordinate 1.

So with this kind of matrix, one could hypothetically infer higher dimensional spaces without being able to directly observe them, such as with a 3 x 4.

Oh! Shit!

I can't understand why the concepts of determinant and inverse are not defined for non-square matrices.

thanks

We cannot unsquish a line into a plane or unsquish a plane into space, so why can we map two dimensions to three dimensions? I can't understand it. Sorry about the language, because I am kind of weak in English.

Are you sure the statement at 2:08 that a 3x2 matrix is full rank is correct? I don't believe this can be true for a rectangular matrix, since this matrix does not have full row rank (the rank is at most 2). However, it does have full column rank. This is what I have learned in Strang's linear algebra course on MIT OCW.
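Terminology dispute aside, the rank itself is easy to check numerically. Here is a small Python sketch (my own helper, not from the video or the course) that computes the rank of the video's 3x2 matrix by Gaussian elimination:

```python
def rank(mat):
    """Rank of a matrix (list of rows) via Gaussian elimination."""
    m = [list(map(float, row)) for row in mat]
    rows, cols = len(m), len(m[0])
    r = 0  # index of the next pivot row
    for c in range(cols):
        # Find a nonzero pivot in column c at or below row r.
        pivot = next((i for i in range(r, rows) if abs(m[i][c]) > 1e-12), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        # Eliminate the entries below the pivot.
        for i in range(r + 1, rows):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[2, 0], [-1, 1], [-2, 1]]  # the 3x2 matrix from the video
print(rank(A))  # 2: full column rank, equal to the input dimension
```

The rank is 2, the same as the dimension of the input space, which is the sense in which the video calls the matrix "full rank" (full column rank, in Strang's terminology).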

Is there a geometric intuition for what the rows of a matrix represent akin to the way how columns represent where the basis vectors land?

Ab. So. Lutely. Fabulous.

Thank you for understanding how we learn.

Is it possible to find the determinant of a non-square matrix?

Isn't a 3x2 matrix just a normal x-y plane (vectors i and j) with an additional dependent vector (k)? I mean, how can you start in 3D if the z-coordinate doesn't exist? To me vector k looks like a dependent vector, or maybe a projection from a higher dimension, but then again there is no 3rd coordinate to project from.

Hmmm, doesn't this explanation fall a bit short? It seems the case of the 3x2 matrix is equivalent to the det(A) = 0 case, respectively a case where the third column is a linear combination of the other two columns.

If i and j are scaled from 2D space to 3D space, what happens to the k of 3D space?? Does it remain at (0, 0, 1)?

How do you write 1xn matrices (that take the number line and transform it into a line in n-dimensional space) without confusing them for output n-dimensional vectors?

This video series should be the pre-requisite for any linear algebra course

These educational videos are really excellent for learning the fundamentals and roots of the subject deeply. Really, thanks for the inspiration and ideas presented. Our support for you will be infinite.

@2:15 Given any mxn matrix (with linearly independent column vectors) is the dimension of the column space always ''n" (the number of basis vectors/the number of columns)?

Would you please upload a video on the consistency of an over-determined system of equations???

He is so good. Millions of hugs and dollars and good fortunes to you!!!

I love u

Could you please please do a series on real analysis. I would be infinitely grateful! This is amazing stuff.

Determinant of a 2×3 matrix = x * y * ki, for k = k arrow zero to infinity. In other words, real space can be interpenetrated by imaginary space, and it is only upon placing stress upon real relations of dimensions, i.e., 0 x 1, 1 x 2, 2 x 3, 3 x 4, that we need invoke imaginary space. I was reminded of Schopenhauer, and rightly so it turns out: "Hence there results a strange contrast between what a man knows potentia and what he knows actu." But ultimately the field of experience has far more than 3 dimensions, and thus the opportunities for these aporia of dimensions opening imaginary space into actionable dimensional space are vast and vanishing.

I can understand a 2×3 matrix not having a determinant, since you can't calculate the volume formed by the 3 basis vectors on the 2D plane.

But on a 3×2 matrix, if I take a unit square and do the transformation from 2D, moving the basis vectors into a 3D space, I should still have some shape, even if it is oriented some way in the 3D space. So I can still calculate its area.

This part actually confuses me

Edit: another question. Suppose I multiply a 3×2 matrix and a 2×3 matrix. I should get a 3×3 matrix.

The 2×3 transformation condenses the 3 basis vectors onto a plane. But when I do the 3×2 transformation, which 2 basis vectors out of the 3 should I select to be changed to the 3D space?
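A hedged sketch of the shapes involved in that edit's question: composing the two maps just multiplies the matrices, and no choice of basis vectors is needed, because the 3x2 map acts on whatever 2-D vector the 2x3 map produces. The entries below are arbitrary illustrations; only the dimensions matter.

```python
A = [[1, 0],      # 3x2: takes 2-D vectors into 3-D
     [0, 1],
     [1, 1]]
B = [[1, 0, 2],   # 2x3: takes 3-D vectors down to 2-D
     [0, 1, 3]]

# Matrix product A B: first apply B (3-D -> 2-D), then A (2-D -> 3-D).
rows, inner, cols = 3, 2, 3
AB = [[sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
      for i in range(rows)]

for row in AB:
    print(row)
# The result is 3x3, but its rank is at most 2: every output passes
# through a 2-D space in the middle, so it lands on the plane spanned
# by the columns of A.
```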

Awesome video, very intuitive explanation! Is losing rank during a transformation the same as moving between different dimensions? For example, if a matrix 3×3 has a rank of 2, is it the same as a 3×3 matrix becoming a 2×3 matrix?

i-hat and j-hat in a 3x2 matrix are the basis of a plane.

So 2x3 matrices are actually one possibility for computing a 3D image on a flat plane.

Given that a 1 x 2 matrix can be understood as a transformation from 2d to the number line, could a regular 2d vector, written 2×1, also be understood as a transformation from the number line to 2d space?

You know in Hindu tradition we have the personification of learning/knowledge, Goddess Saraswati, who removes obstacles in learning and helps you learn things better. You, sir, are the GOD!

this is amazing! even though my professor is great and explains everything amazingly, I still wish he would incorporate your videos in lectures to help us visualise. your videos are the gems of the internet!!

2:18 Sneaky Pi

Hi,

thank you, Sir.

It's good to understand maths the way you present it. I wonder what the following product means: a 2 by 1 matrix times a 1 by 2 matrix?! What kind of transformation could that be? 🙂
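One hedged way to answer that question with code (the entries below are arbitrary examples, not from the video): a 2-by-1 matrix times a 1-by-2 matrix is an outer product, which gives a 2x2 matrix of rank 1, a transformation that squishes the whole plane onto a line.

```python
col = (1, 2)      # 2x1 matrix: number line -> a line in 2-D
row = (3, 4)      # 1x2 matrix: 2-D -> number line

# Outer product: (2x1)(1x2) = 2x2 matrix.
M = [[c * r for r in row] for c in col]
print(M)  # [[3, 4], [6, 8]]: second row is twice the first, so rank 1

def apply(m, v):
    """Apply a 2x2 matrix (list of rows) to a 2-D vector."""
    return [sum(a * b for a, b in zip(r, v)) for r in m]

print(apply(M, (1, 0)))  # [3, 6]
print(apply(M, (0, 1)))  # [4, 8]: both columns lie on the same line
```

Geometrically: the 1x2 factor sends each 2-D vector to a number, and the 2x1 factor places that number on a line in 2-D, so the composite collapses the plane onto that line.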

Great video! How do we know that the outputs of a 3×2 matrix will always be on the same plane in 3d space?

This idea just came to my mind: for a transformation from 3D space to the 2D plane, I think you can think of it as a projection. Imagine the two equations as planes in 3D. Now you position yourself vertically above one of them (also centered) and look at the projection of the basis vectors on the other plane. This should be the resulting picture in 2D, is it not? I haven't tried to prove it yet, though. What are your thoughts?

At 1:11 : It's not animation laziness, I just want to prove my point… (continues at 3:33 to animate 2D to 1D though, hmmmm :D)

I really wish there were another video between non-square matrices and the dot product, talking about vector projections onto subspaces and how we go from 3D to 2D or even 1D. I think such a subject would help viewers transition to the dot product concepts way more easily.

Completely off-topic but what is that music in the beginning of the video?

Can anyone explain annihilators to me???

What does multiplication of a matrix A of dimensions 2x3 by its transpose A^T of dimension 3x2 mean geometrically?

2:37