Matrices $M$ and $N$ with $MN \neq NM$.
Matrices $M=\begin{pmatrix}-0.6&0.8\\0.8&0.6\end{pmatrix}$ and $N=\begin{pmatrix}0.8&0.6\\0.6&-0.8\end{pmatrix}$ represent $y = 2x$ and $3y = x$, respectively. Verify that $MN$ is not equal to $NM$, and explain why this should have been expected.
What transformations do the two products represent?
I tried multiplying and computing the products $MN$ and $NM$, but when I calculated them, they seemed to be equal. I got $\begin{pmatrix}-0.48& 0.48\\ 0.48& -0.48\end{pmatrix}$ both times.
Am I doing something wrong?
Tags: matrices
`$\left(\matrix{a & b \\ c & d}\right)$` gives $\left(\matrix{a & b \\ c & d}\right)$. You can edit your question here. – Arnaud Mortier, Aug 17 at 5:24
@ArnaudMortier You can get the same result more briefly with `\pmatrix`, like this: `$\pmatrix{a & b \\ c & d}$` – $\pmatrix{a & b \\ c & d}$ – CiaPan, Aug 17 at 5:29
It looks like you're multiplying the matrices wrong. Matrix multiplication is not element-wise, as you seem to be doing. – Sriram Gopalakrishnan, Aug 17 at 5:33
What do you mean, they represent $y=2x$ and $3y=x$? And yes, I agree with Sriram. Recheck the definition of matrix multiplication. – zahbaz, Aug 17 at 5:42
asked Aug 17 at 5:18 by Patricia M., edited Aug 17 at 5:25 by Cornman
3 Answers
Yes, you are doing something wrong, but it is hard to tell what exactly when you do not show your calculation.
For example, when we calculate $MN$, we get:
$$\begin{pmatrix}-0.6&0.8\\0.8&0.6\end{pmatrix}\cdot \begin{pmatrix}0.8&0.6\\0.6&-0.8\end{pmatrix}=\begin{pmatrix} (-0.6)\cdot 0.8+0.8\cdot 0.6 & (-0.6)\cdot 0.6+0.8\cdot (-0.8)\\0.8\cdot 0.8+0.6\cdot 0.6&0.8\cdot 0.6+0.6\cdot (-0.8)\end{pmatrix}=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$$
Similarly, you can calculate $NM$.
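If you want to double-check the arithmetic by machine, a minimal sketch along these lines (assuming NumPy is available) does the job; the expected output, up to floating-point rounding, is shown in the comments:

```python
import numpy as np

M = np.array([[-0.6, 0.8],
              [ 0.8, 0.6]])
N = np.array([[ 0.8, 0.6],
              [ 0.6, -0.8]])

print(np.round(M @ N, 10))          # [[ 0. -1.]
                                    #  [ 1.  0.]]
print(np.round(N @ M, 10))          # [[ 0.  1.]
                                    #  [-1.  0.]]
print(np.allclose(M @ N, N @ M))    # False: the products differ
```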
Take the point $(1,1)$ and multiply it by the matrices, as a row vector acting on the left:
$$\begin{pmatrix}1&1\end{pmatrix}\begin{pmatrix}-0.6&0.8\\0.8&0.6\end{pmatrix}=\begin{pmatrix}0.2&1.4\end{pmatrix} \quad\text{($M$ reflects the point in the line $y=2x$);}$$
$$\begin{pmatrix}1&1\end{pmatrix}\begin{pmatrix}0.8&0.6\\0.6&-0.8\end{pmatrix}=\begin{pmatrix}1.4&-0.2\end{pmatrix} \quad\text{($N$ reflects the point in the line $y=\frac{x}{3}$);}$$
$$\begin{pmatrix}1&1\end{pmatrix}\begin{pmatrix}-0.6&0.8\\0.8&0.6\end{pmatrix}\begin{pmatrix}0.8&0.6\\0.6&-0.8\end{pmatrix}=\begin{pmatrix}1&1\end{pmatrix}\begin{pmatrix}0&-1\\1&0\end{pmatrix}=\begin{pmatrix}1&-1\end{pmatrix} \quad\text{($MN$ sends the point to its mirror image in the $x$-axis);}$$
$$\begin{pmatrix}1&1\end{pmatrix}\begin{pmatrix}0.8&0.6\\0.6&-0.8\end{pmatrix}\begin{pmatrix}-0.6&0.8\\0.8&0.6\end{pmatrix}=\begin{pmatrix}1&1\end{pmatrix}\begin{pmatrix}0&1\\-1&0\end{pmatrix}=\begin{pmatrix}-1&1\end{pmatrix} \quad\text{($NM$ sends the point to its mirror image in the $y$-axis).}$$
Since a composition of two reflections is a rotation through twice the angle between the mirror lines, the two products are in fact rotations through $90°$ in opposite senses, so there was no reason to expect $MN=NM$.
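The same kind of check can be scripted; a minimal sketch using NumPy and the row-vector convention of this answer (the point multiplies the matrices on the left) could look like this:

```python
import numpy as np

M = np.array([[-0.6, 0.8], [0.8, 0.6]])
N = np.array([[0.8, 0.6], [0.6, -0.8]])
p = np.array([1.0, 1.0])      # the point (1, 1) as a row vector

print(p @ M)       # ~[ 0.2  1.4]   image under M
print(p @ N)       # ~[ 1.4 -0.2]   image under N
print(p @ M @ N)   # ~[ 1. -1.]     image under MN (M first, then N)
print(p @ N @ M)   # ~[-1.  1.]     image under NM (N first, then M)
```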
First off, matrix multiplication is not simply term-wise multiplication. What you did amounts to saying that
$$\begin{bmatrix} a_{11}&a_{12} \\ a_{21}&a_{22} \end{bmatrix} \begin{bmatrix} b_{11}&b_{12} \\ b_{21}&b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}b_{11}&a_{12}b_{12} \\ a_{21}b_{21}&a_{22}b_{22} \end{bmatrix},$$
which is not right. It should really be
$$\begin{bmatrix} a_{11}&a_{12} \\ a_{21}&a_{22} \end{bmatrix} \begin{bmatrix} b_{11}&b_{12} \\ b_{21}&b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}b_{11}+a_{12}b_{21}&a_{11}b_{12}+a_{12}b_{22} \\ a_{21}b_{11}+a_{22}b_{21}&a_{21}b_{12}+a_{22}b_{22} \end{bmatrix}.$$
As has been calculated in another answer, with this rule one gets
$$MN=\begin{bmatrix} 0&-1 \\ 1&0 \end{bmatrix} \qquad\text{and}\qquad NM=\begin{bmatrix} 0&1 \\ -1&0 \end{bmatrix},$$
so $MN \neq NM$.
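As an illustration, the entry formula above can be coded directly for the $2\times2$ case; this is only a sketch in plain Python (no libraries assumed), and it reproduces both products:

```python
def matmul2(A, B):
    """2x2 matrix product using (AB)_{nm} = A_{n1}B_{1m} + A_{n2}B_{2m},
    with matrices given as lists of rows."""
    return [
        [A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
        [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]],
    ]

M = [[-0.6, 0.8], [0.8, 0.6]]
N = [[0.8, 0.6], [0.6, -0.8]]

print(matmul2(M, N))   # [[0.0, -1.0], [1.0, 0.0]]   (up to rounding)
print(matmul2(N, M))   # [[0.0, 1.0], [-1.0, 0.0]]   (up to rounding)
```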
Now, I think the bigger problem is making sense of matrix multiplication.
Now, you may wonder: why is THIS the way we multiply matrices? The way you multiplied might seem to make the most 'sense' computationally, but in terms of structure and linear maps the definition above is far superior, because it represents the composition of two linear maps. Here is a nice way to think about matrix multiplication. A linear map just maps one vector space to another, linearly, and a matrix represents a linear map with respect to a given basis. Consider the matrix
$$\begin{bmatrix} a_{11}&a_{12} \\ a_{21}&a_{22} \end{bmatrix}$$
and a vector $b=[b_1 \,,\, b_2]$. We can write $b$ in terms of basis vectors $e_1$ and $e_2$, so that for some values $v_1$ and $v_2$ we have $b = v_1 e_1 + v_2 e_2$. Applying a linear transformation $f$ to this, one finds $f(b) = v_1 f(e_1) + v_2 f(e_2)$. Now, since $f$ takes a vector to a vector, we can write, with $f(\cdot)_x$ denoting the $x$th component of a vector,
$$\left\{ \begin{array}{l} f(b)_1 = v_1 f(e_1)_1 + v_2 f(e_2)_1 \\ f(b)_2 = v_1 f(e_1)_2 + v_2 f(e_2)_2 \end{array} \right.$$
Notice something familiar? Let us take $e_1=[1 \,,\, 0]$ and $e_2=[0 \,,\, 1]$, the standard basis vectors as they are called, and let us write $f(e_m)_n$ as $a_{nm}$ (the 'swap' in indices is just notation). From this, we obtain exactly the matrix
$$A = \begin{bmatrix} a_{11}&a_{12} \\ a_{21}&a_{22} \end{bmatrix}$$
Now, composing two linear maps means applying one after the other. That is, a matrix $B$ takes a vector $v$ to a new vector $w = Bv$, and then $A$ takes $w$ to yet another new vector $u = A(Bv)$. We define $C = AB$ to be the matrix such that $u = Cv$; you can think of this as associativity of the mappings.
Actually calculating it out, one finds
$$\begin{bmatrix} a_{11}&a_{12} \\ a_{21}&a_{22} \end{bmatrix} \left( \begin{bmatrix} b_{11}&b_{12} \\ b_{21}&b_{22} \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} \right) = \begin{bmatrix} a_{11}&a_{12} \\ a_{21}&a_{22} \end{bmatrix} \begin{bmatrix} b_{11}v_1+b_{12}v_2 \\ b_{21}v_1+b_{22}v_2 \end{bmatrix} = \begin{bmatrix} (a_{11}b_{11}+a_{12}b_{21})v_1+(a_{11}b_{12}+a_{12}b_{22})v_2 \\ (a_{21}b_{11}+a_{22}b_{21})v_1+(a_{21}b_{12}+a_{22}b_{22})v_2 \end{bmatrix}$$
We can 'factor out' these terms and find that
$$AB=\begin{bmatrix} a_{11}b_{11}+a_{12}b_{21}&a_{11}b_{12}+a_{12}b_{22} \\ a_{21}b_{11}+a_{22}b_{21}&a_{21}b_{12}+a_{22}b_{22} \end{bmatrix}.$$
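A quick numerical sanity check of this composition property (again assuming NumPy) might look like the sketch below: applying $B$ and then $A$ to a vector agrees with applying the single matrix $AB$, while $AB$ and $BA$ generally differ.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))
v = rng.standard_normal(2)

print(np.allclose(A @ (B @ v), (A @ B) @ v))   # True: composing the maps equals multiplying the matrices
print(np.allclose(A @ B, B @ A))               # False (in general): the order matters
```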
Or, another way of seeing this: by definition, the $m$th column of a matrix is the image of the $m$th basis vector, which we then transform again in turn. 'Singling out' this column and transforming it amounts to pairing the $m$th column of the right matrix with the $n$th row of the left matrix and doing the 'linear calculation' (matching first with first, second with second, and adding) to get the $(n,m)$ entry.
Can you see why, in general, this is not commutative? Try swapping the roles of the $a$'s and $b$'s: the top-left entry of $AB$ is $a_{11}b_{11}+a_{12}b_{21}$, while the top-left entry of $BA$ is $a_{11}b_{11}+a_{21}b_{12}$, and since in general $a_{12}b_{21} \neq a_{21}b_{12}$, already the top-left entries differ, so the two products differ.