Why is $\det(AB)=\det(A)\det(B)$?

I see this in my textbook:

[image: the textbook's proof, which proceeds via elementary matrices]

I don't really understand the proof or why elementary matrices are involved. What is going on here?







  • Elementary matrices are involved, because we want to apply Theorem $3.3$ and Theorem $2.12$. – Dietrich Burde, Aug 27 at 18:14















asked Aug 27 at 18:12 by Jwan622, edited Aug 27 at 18:48 by dmtri







1 Answer






First, let me give you a short outline of your textbook's approach: you first establish the result for elementary matrices only, i.e. that $|EB|=|E|\cdot|B|$. You then generalize this to arbitrary finite products of elementary matrices, i.e. $|E_1\cdots E_nB|=|E_1|\cdots|E_n|\,|B|$. Why all this fuss with elementary matrices? Because the results above are easy to establish, and every invertible matrix can be written as a product of elementary matrices. From there, you can generalize to the full result (the singular case is handled separately, since then both sides are $0$).
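To see why the first step is plausible, here is a small sketch of my own (not the textbook's proof): it checks $|EB|=|E|\,|B|$ for one elementary matrix of each of the three kinds in the $2\times 2$ case, using the explicit formula $\det\begin{pmatrix}a&b\\c&d\end{pmatrix}=ad-bc$. The test matrix $B$ is arbitrary.

```python
# Minimal numeric illustration (my own sketch, not the textbook's proof):
# check |EB| = |E| |B| for the three kinds of 2x2 elementary matrices.

def det2(m):
    """Determinant of a 2x2 matrix via the explicit formula ad - bc."""
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(x, y):
    """Product of two 2x2 matrices."""
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B = [[3, 1], [4, 2]]                      # arbitrary test matrix, det = 2
elementary = [
    [[0, 1], [1, 0]],                     # row swap,     det = -1
    [[5, 0], [0, 1]],                     # row scaling,  det =  5
    [[1, 0], [7, 1]],                     # row addition, det =  1
]
for E in elementary:
    assert det2(matmul2(E, B)) == det2(E) * det2(B)
print("multiplicativity holds for all three elementary types")
```

Of course this only checks three particular matrices; the textbook's proof handles each elementary type in general and then extends to products.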




I want to give you a slightly different approach for a proof. I'm not sure how your textbook defines the determinant, so let me give a definition that is convenient for my purposes. Let $\mathbb{F}$ be a field.




The determinant function of dimension $n$ over $\mathbb{F}$, $\det:(\mathbb{F}^n)^n\to\mathbb{F}$, is the unique function satisfying

  1. multilinearity, i.e. $\det$ is linear in each argument;

  2. antisymmetry, i.e. $a_i=a_j$ for some $i\neq j$ implies $\det(a_1,\dots,a_n)=0$;

  3. normalization, i.e. $\det(e_1,\dots,e_n)=1$ for the unit vectors $e_1,\dots,e_n$ of $\mathbb{F}^n$.



That this function exists and is unique actually has to be proven, but I will skip that here. The key point is to define the determinant directly on the columns of a matrix rather than on the whole matrix object. Thus, for a matrix $A\in\mathbb{F}^{n\times n}$, the expression $\det(A)$ per se has no meaning, but we define $\det(A):=\det(a_1,\dots,a_n)$, where $A=(a_1,\dots,a_n)$ is the corresponding column-vector representation, as an abuse of notation.




Let $A,B\in\mathbb{F}^{n\times n}$ and let $C=AB$, with $A=(a_1,\dots,a_n)$, $B=(b_1,\dots,b_n)$ and $C=(c_1,\dots,c_n)$ their column-vector representations.



By the rules of matrix multiplication, we have that $c_j=Ab_j$:

$$c_j=Ce_j=\sum_{i=1}^n c_{ij}e_i=ABe_j=Ab_j=A\Big(\sum_{i=1}^n b_{ij}e_i\Big)=\sum_{i=1}^n b_{ij}Ae_i=\sum_{i=1}^n b_{ij}a_i$$
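The identity $c_j=Ab_j$ can also be checked concretely. The sketch below is my own, with arbitrary $3\times 3$ example matrices: it compares the $j$-th column of $C=AB$ with $A$ applied to the $j$-th column of $B$.

```python
# Sketch (my own example matrices): the j-th column of C = AB equals A b_j.

def matmul(x, y):
    """Product of two n x n matrices given as nested lists."""
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def column(m, j):
    """The j-th column of a matrix, as a flat list."""
    return [row[j] for row in m]

def matvec(m, v):
    """Matrix-vector product m v."""
    return [sum(m[i][k] * v[k] for k in range(len(v))) for i in range(len(m))]

A = [[1, 2, 0], [0, 1, 3], [4, 0, 1]]
B = [[2, 1, 1], [0, 2, 1], [1, 0, 2]]
C = matmul(A, B)
for j in range(3):
    assert column(C, j) == matvec(A, column(B, j))
print("c_j == A b_j for every column j")
```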



Thus, we obtain:

$$\det(C)=\det(c_1,\dots,c_n)=\det(Ab_1,\dots,Ab_n)=\det\Big(\sum_{k=1}^n b_{k1}a_k,\dots,\sum_{k=1}^n b_{kn}a_k\Big)=\sum_{k_1=1}^n\sum_{k_2=1}^n\cdots\sum_{k_n=1}^n\det(b_{k_1 1}a_{k_1},\dots,b_{k_n n}a_{k_n})$$



Now, in this nested sum, all those terms vanish for which the assignment of summation indices $k_1,\dots,k_n$, viewed as a map $i\mapsto k_i$, is not a permutation (bijection) of $\{1,\dots,n\}$: in these cases two column arguments coincide, and antisymmetry forces the determinant to $0$. Thus, we can view every remaining such map as a permutation $\sigma$ in the set $S_n$ of all permutations. The nested sums collapse into one large sum over all permutations, where we additionally pull out (by multilinearity) the scalar entries of $B$. Thus, we have



$$\det(C)=\sum_{\sigma\in S_n}b_{\sigma(1)1}\cdots b_{\sigma(n)n}\det(a_{\sigma(1)},\dots,a_{\sigma(n)})=\sum_{\sigma\in S_n}\mathrm{sign}(\sigma)\,b_{\sigma(1)1}\cdots b_{\sigma(n)n}\det(a_1,\dots,a_n)=\det(a_1,\dots,a_n)\cdot\sum_{\sigma\in S_n}\mathrm{sign}(\sigma)\,b_{\sigma(1)1}\cdots b_{\sigma(n)n}=\det(A)\cdot\det(B)$$




In the last steps, I have used two facts about determinants and permutations. First, the following lemma:




Lemma: For any antisymmetric function $f:(\mathbb{F}^n)^n\to\mathbb{F}$, any $\sigma\in S_n$ and any $a_1,\dots,a_n\in\mathbb{F}^n$: $$f(a_{\sigma(1)},\dots,a_{\sigma(n)})=\mathrm{sign}(\sigma)\,f(a_1,\dots,a_n)$$




where $\mathrm{sign}(\sigma)$ is a scalar, the sign of the permutation $\sigma$. Second, I have used that



$$\sum_{\sigma\in S_n}\mathrm{sign}(\sigma)\,b_{\sigma(1)1}\cdots b_{\sigma(n)n}=\det(B)$$



This is known as the Leibniz formula for determinants.
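To make the Leibniz formula concrete, here is a short sketch of mine implementing $\det(B)=\sum_{\sigma\in S_n}\mathrm{sign}(\sigma)\,b_{\sigma(1)1}\cdots b_{\sigma(n)n}$ with `itertools.permutations`, then checking multiplicativity on small integer matrices of my own choosing.

```python
# Sketch (my own): determinant via the Leibniz formula, then a check of
# det(AB) = det(A) det(B) on small integer matrices.
from itertools import permutations

def sign(sigma):
    """Sign of a permutation: (-1)^(number of inversions)."""
    n = len(sigma)
    inv = sum(1 for i in range(n) for j in range(i + 1, n)
              if sigma[i] > sigma[j])
    return -1 if inv % 2 else 1

def det(m):
    """Leibniz formula: sum over all sigma of sign(sigma) * prod_i m[sigma(i)][i]."""
    n = len(m)
    total = 0
    for sigma in permutations(range(n)):
        term = sign(sigma)
        for i in range(n):
            term *= m[sigma[i]][i]   # entry b_{sigma(i), i}
        total += term
    return total

def matmul(x, y):
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 0, 1], [1, 3, 0], [0, 1, 1]]
B = [[1, 1, 0], [0, 2, 1], [1, 0, 2]]
assert det(matmul(A, B)) == det(A) * det(B)
print(det(A), det(B), det(matmul(A, B)))
```

Since the Leibniz sum has $n!$ terms, this is only useful for tiny matrices, but it mirrors the algebra above exactly: each term picks one row index per column via $\sigma$ and weights the product by $\mathrm{sign}(\sigma)$.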




EDIT: Since this is a long post on an extensive topic that I have tried to cover with as little text as possible, any follow-up questions or corrections are well appreciated.






  • I would like to know the reason for the downvote. Is there something that can be improved? – zzuussee, Aug 27 at 19:14











  • @MorganRodgers I gave some outline of the general concept of the proof in the textbook. However, I think that different viewpoints on proofs are always helpful, which was the main reason for my answer. My proof is not necessarily more complicated, as it shows a nice way to use permutations and elementary properties of the determinant, similar to how one obtains the Leibniz formula, to obtain this result. It also mainly follows straightforward computations. – zzuussee, Aug 27 at 22:24










  • @MorganRodgers I would like to find out how to improve my posts. I did not want to argue with it; it is a perfectly fine critique and completely true. I just wanted to express my thoughts on your reason and explain my motives, also for the OP. – zzuussee, Aug 27 at 22:28











  • I think it's a little disingenuous to sweep the uniqueness under the rug, when it is certainly not trivial. – Steve D, Aug 28 at 0:34










answered Aug 27 at 18:50 by zzuussee, edited Aug 27 at 18:58










