Why $\det(AB)=\det(A)\det(B)$?
I see this in my textbook:
I don't really understand the proof or why elementary matrices are involved. What is going on here?
linear-algebra
Elementary matrices are involved, because we want to apply Theorem $3.3$ and Theorem $2.12$. – Dietrich Burde, Aug 27 at 18:14
asked Aug 27 at 18:12 by Jwan622, edited Aug 27 at 18:48 by dmtri
1 Answer
First, let me give you a brief outline of your textbook's approach: you first establish the result for elementary matrices only, i.e. that $|EB|=|E|\cdot|B|$. You then generalize this to arbitrary finite products of elementary matrices, i.e. $|E_1\cdots E_n B|=|E_1|\cdots|E_n|\,|B|$. Why all this fuss with elementary matrices? The results above are easy to establish, and every invertible (regular) matrix has a representation as a product of elementary matrices. From there, you can generalize the result (if $A$ is not invertible, then $AB$ is not invertible either, and both sides are $0$).
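As a quick numerical illustration of that base case (not part of the textbook's proof), the identity $|EB|=|E|\cdot|B|$ can be checked for all three types of elementary matrices; the helper names below are my own:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))

def swap(i, j):
    # elementary matrix swapping rows i and j
    E = np.eye(n)
    E[[i, j]] = E[[j, i]]
    return E

def scale(i, c):
    # elementary matrix scaling row i by c
    E = np.eye(n)
    E[i, i] = c
    return E

def add_row(i, j, c):
    # elementary matrix adding c times row j to row i
    E = np.eye(n)
    E[i, j] = c
    return E

# |EB| = |E|·|B| for each type of elementary matrix E
for E in (swap(0, 2), scale(1, 3.5), add_row(3, 0, -2.0)):
    assert np.isclose(np.linalg.det(E @ B),
                      np.linalg.det(E) * np.linalg.det(B))
```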
I want to give you a slightly different approach for a proof. I'm not sure how the determinant is defined in your textbook, so let me give a definition convenient for my exposition. Let $\mathbb{F}$ be a field.
The determinant function of dimension $n$ over $\mathbb{F}$, $\det\colon(\mathbb{F}^n)^n\to\mathbb{F}$, is the unique function satisfying
- multi-linearity, i.e. $\det$ is linear in each argument,
- anti-symmetry, i.e. $a_i=a_j$ for some $i\neq j$ implies $\det(a_1,\dots,a_n)=0$,
- normalization, i.e. $\det(e_1,\dots,e_n)=1$ for the unit vectors $e_1,\dots,e_n$ of $\mathbb{F}^n$.
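These three axioms can be sanity-checked numerically over $\mathbb{R}$ (this is of course no substitute for the existence and uniqueness proof); the sketch below treats the determinant as a function of columns:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
a = [rng.standard_normal(n) for _ in range(n)]

def det_cols(*cols):
    # determinant viewed as a function of the column vectors
    return np.linalg.det(np.column_stack(cols))

# normalization: det(e_1, ..., e_n) = 1
assert np.isclose(det_cols(*np.eye(n).T), 1.0)

# anti-symmetry: a repeated column forces the determinant to 0
assert np.isclose(det_cols(a[0], a[0], a[2]), 0.0)

# multi-linearity in the first argument:
# det(x + c*y, ...) = det(x, ...) + c * det(y, ...)
x, y, c = rng.standard_normal(n), rng.standard_normal(n), 2.5
lhs = det_cols(x + c * y, a[1], a[2])
rhs = det_cols(x, a[1], a[2]) + c * det_cols(y, a[1], a[2])
assert np.isclose(lhs, rhs)
```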
That this function exists and is unique actually has to be proven, but I skip this. The key point is to define the determinant directly on the columns of a matrix instead of on the whole matrix object. Thus, for a matrix $A\in\mathbb{F}^{n\times n}$, the expression $\det(A)$ per se has no meaning, but we define $\det(A):=\det(a_1,\dots,a_n)$, where $A=(a_1,\dots,a_n)$ is the corresponding column representation, as an abuse of notation.
Let $A,B\in\mathbb{F}^{n\times n}$ and let $C=AB$, with column representations $A=(a_1,\dots,a_n)$, $B=(b_1,\dots,b_n)$ and $C=(c_1,\dots,c_n)$.
By the rules of matrix multiplication, we have that $c_j=Ab_j$:
$$c_j=Ce_j=\sum_{i=1}^n c_{ij}e_i=ABe_j=Ab_j=A\Big(\sum_{i=1}^n b_{ij}e_i\Big)=\sum_{i=1}^n b_{ij}Ae_i=\sum_{i=1}^n b_{ij}a_i$$
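If you want to convince yourself of this column identity on a concrete example, here is a minimal check with randomly generated matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C = A @ B

# the j-th column of C = AB is A applied to the j-th column of B
for j in range(n):
    assert np.allclose(C[:, j], A @ B[:, j])
```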
Thus, we obtain:
$$\det(C)=\det(c_1,\dots,c_n)=\det(Ab_1,\dots,Ab_n)=\det\Big(\sum_{k=1}^n b_{k1}a_k,\dots,\sum_{k=1}^n b_{kn}a_k\Big)=\sum_{k_1=1}^n\sum_{k_2=1}^n\dots\sum_{k_n=1}^n\det(b_{k_1 1}a_{k_1},\dots,b_{k_n n}a_{k_n})$$
Now, in this nested sum, all terms vanish for which the map $i\mapsto k_i$ is not a permutation (bijection) of $\{1,\dots,n\}$, since in these cases two of the arguments coincide and anti-symmetry forces the determinant value to $0$. Thus, we can view every remaining index map as a permutation $\sigma$ in the set $S_n$ of all permutations. The nested sums collapse into one sum over all permutations, where we additionally pull out (by multi-linearity) the scalar entries of $B$. Thus, we have
$$\det(C)=\sum_{\sigma\in S_n}b_{\sigma(1)1}\cdots b_{\sigma(n)n}\det(a_{\sigma(1)},\dots,a_{\sigma(n)})=\sum_{\sigma\in S_n}\mathrm{sign}(\sigma)\,b_{\sigma(1)1}\cdots b_{\sigma(n)n}\det(a_1,\dots,a_n)=\det(a_1,\dots,a_n)\cdot\sum_{\sigma\in S_n}\mathrm{sign}(\sigma)\,b_{\sigma(1)1}\cdots b_{\sigma(n)n}=\det(A)\cdot\det(B)$$
In the last steps, I've used two facts about determinants and permutations. First, I've used the following lemma:
Lemma: For any anti-symmetric function $f\colon(\mathbb{F}^n)^n\to\mathbb{F}$, any $\sigma\in S_n$ and any $a_1,\dots,a_n\in\mathbb{F}^n$: $$f(a_{\sigma(1)},\dots,a_{\sigma(n)})=\mathrm{sign}(\sigma)\,f(a_1,\dots,a_n)$$
where $\mathrm{sign}(\sigma)$ is a scalar value, the sign of the permutation $\sigma$. Secondly, I've used that
$$\sum_{\sigma\in S_n}\mathrm{sign}(\sigma)\,b_{\sigma(1)1}\cdots b_{\sigma(n)n}=\det(B)$$
This is known as the Leibniz formula for determinants.
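To make the permutation sum concrete, here is a direct (and deliberately naive, $O(n!\cdot n)$) implementation of the Leibniz formula; the function names are mine, and for real work you would of course use `np.linalg.det` instead:

```python
import numpy as np
from itertools import permutations

def sign(perm):
    # sign of a permutation (given as a tuple of 0-based indices),
    # computed by counting inversions
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def leibniz_det(M):
    # sum over sigma in S_n of sign(sigma) * prod_j M[sigma(j), j]
    n = M.shape[0]
    return sum(sign(p) * np.prod([M[p[j], j] for j in range(n)])
               for p in permutations(range(n)))

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# the Leibniz formula agrees with the library determinant ...
assert np.isclose(leibniz_det(B), np.linalg.det(B))
# ... and the multiplicativity proved above holds numerically
assert np.isclose(leibniz_det(A @ B), leibniz_det(A) * leibniz_det(B))
```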
EDIT: Since this is a long post on an extensive topic that I've tried to cover with as little text as possible, any follow-up questions or corrections are well appreciated.
I would like to know the reason for the downvote. Is there something that can be improved? – zzuussee, Aug 27 at 19:14
@MorganRodgers I gave some outline of the general concept of the proof in his textbook. However, I think that different viewpoints on proofs are always helpful, which was the main reason for my answer. My proof is not necessarily more complicated, as it shows a nice way to use permutations and elementary properties of the determinant, similar to how one obtains the Leibniz formula, to obtain this result. It also mainly follows straightforward computations. – zzuussee, Aug 27 at 22:24
@MorganRodgers I would like to find out how to improve my posts. I did not want to argue with it; it is a perfectly fine critique and completely true. I just wanted to express my thoughts on your reason and explain my motives, also for the OP. – zzuussee, Aug 27 at 22:28
I think it's a little disingenuous to sweep the uniqueness under the rug, when it is certainly not trivial. – Steve D, Aug 28 at 0:34
answered Aug 27 at 18:50 by zzuussee, edited Aug 27 at 18:58