If A is invertible, then it can be represented as a product of elementary matrices.

Can someone jog my intuition as to why this is true:




If $A$ is nonsingular (i.e. some matrix, multiplied by $A$, gives the identity matrix), then, by Theorem 2.14, it can be written as the product $A = E_k \cdots E_2 E_1$.




And Theorem 2.14 is this:



[image: statement of Theorem 2.14]



























  • Row operations correspond to multiplication by elementary matrices. This is the Lay text, no?
    – Sean Roberson
    Aug 8 at 17:17










  • Think about row operations, which lead to $I_n$.
    – Anik Bhowmick
    Aug 8 at 17:18










  • Can one of you guys show me an example? I'll give you credit!
    – Jwan622
    Aug 8 at 17:24















asked Aug 8 at 17:15 by Jwan622






















2 Answers

















Accepted answer (2 votes), answered Aug 8 at 19:21 by kmiyazaki:










A square matrix $A$ is invertible if and only if you can row reduce $A$ to the identity matrix $I$. Now each row operation that you use to reduce $A$ to $I$ can be represented by an elementary matrix, denoted by $E$. Suppose you need $n$ row operations in order to reduce $A$ to $I$. That means that $$(E_n E_{n-1} \ldots E_1)A=I.$$ Now, the inverse of a matrix is unique if it exists. So the product $E_n E_{n-1} \ldots E_1$ must be the inverse of $A$, because uniqueness of the inverse means that there is only one matrix $A^{-1}$ such that $A^{-1}A=I$ holds. Thus, we know $$(E_n E_{n-1}\ldots E_1)=A^{-1}.$$



To write $A$ as the product of elementary matrices, note that $$A=(A^{-1})^{-1}=(E_n E_{n-1} \ldots E_1)^{-1}=E_1^{-1}E_2^{-1}\ldots E_n^{-1}.$$



The inverse of an elementary matrix is also an elementary matrix. So the last equation shows $A$ as the product of $n$ elementary matrices.
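As a concrete numeric illustration of this argument, here is a minimal sketch in plain Python, with a hypothetical $2\times 2$ matrix and hand-picked elementary matrices:

```python
# Minimal sketch: reduce a hypothetical 2x2 matrix A to I with three
# row operations, then recover A as a product of elementary matrices.

def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2.0, 1.0],
     [0.0, 3.0]]

# Elementary matrices for the reduction (E3 E2 E1) A = I:
E1 = [[0.5, 0.0], [0.0, 1.0]]        # scale row 1 by 1/2
E2 = [[1.0, 0.0], [0.0, 1.0 / 3.0]]  # scale row 2 by 1/3
E3 = [[1.0, -0.5], [0.0, 1.0]]       # row 1 <- row 1 - (1/2) row 2

I = matmul(E3, matmul(E2, matmul(E1, A)))  # equals the identity

# The inverse of each elementary matrix is again elementary, and
# A = E1^{-1} E2^{-1} E3^{-1}:
E1_inv = [[2.0, 0.0], [0.0, 1.0]]
E2_inv = [[1.0, 0.0], [0.0, 3.0]]
E3_inv = [[1.0, 0.5], [0.0, 1.0]]
A_again = matmul(E1_inv, matmul(E2_inv, E3_inv))  # equals A
```

Note the order reversal: the operation applied first to $A$ appears last in the product forming $A^{-1}$, but first in the factorization of $A$.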


























  • oh I see.... the heart of this is that the inverse matrix $A^{-1}$ is unique and represents a series of row reductions.
    – Jwan622
    Aug 9 at 13:54










  • The theorem just isn't intuitive to me without your explanation here. Thanks. The first part of the theorem would be more intuitive to me if it said: "A is invertible if its inverse can be written as a series of elementary matrices"...which I now know is equivalent to the original theorem.
    – Jwan622
    Aug 9 at 13:56










  • @Jwan622 Do you mean "If $A$ is invertible, then its inverse can be written as the product of elementary matrices"? Your sentence "A is invertible if its inverse can be written as a series of elementary matrices" does not make sense because you cannot speak of the inverse of $A$ before knowing $A$ is invertible.
    – kmiyazaki
    Aug 9 at 19:10


















Answer (1 vote):













I'll try to give a more detailed proof of this. First, let's recap the following:



Lemma: Let $A\in\mathbb{F}^{(m,n)}$. For any of the three elementary row operations on matrices



  1. $r_i\mapsto r_j$, $r_j\mapsto r_i$ (swap)

  2. $r_i\mapsto \lambda r_i$, $\lambda\neq 0$ (scale)

  3. $r_i\mapsto r_i+\lambda r_j$, $j\neq i$ (addition, with scale)

there exists a regular matrix $C\in\mathbb{F}^{(m,m)}$ such that $CA$ is the matrix after the transformation, and this matrix $C$ is elementary.



Proof: I treat the three types of transformation I've listed separately.




  1. Exchanging row $i$ with row $j$. Let $$C=\begin{pmatrix}u_1\\ \vdots\\ u_m\end{pmatrix}$$ for row vectors $u_k\in\mathbb{F}^m$, where $u_k=e_k$ for $k\neq i,j$, $u_j=e_i$, $u_i=e_j$. Now, $(CA)_{kl}=\langle u_k,a_l\rangle =\langle e_k,a_l\rangle=A_{kl}$ for $k\neq i,j$. For $k=i$, $(CA)_{il}=\langle e_j,a_l\rangle=A_{jl}$, and for $k=j$, $(CA)_{jl}=\langle e_i,a_l\rangle=A_{il}$. Thus, $CA$ is $A$ with rows $i$ and $j$ swapped. Note that $C$ itself was created from $E_m$ by swapping rows $i$ and $j$, i.e. it is elementary. Also, $\det C=-1$, since swapping two rows flips the sign of the determinant (the determinant is an alternating function). Thus $C$ is regular as well.


  2. Scaling row $i$ by $\lambda\neq 0$. Let $C$ be as above with $u_k=e_k$ for $k\neq i$ and $u_k=\lambda e_k$ for $k=i$. Now, it is clear that $C$ is again elementary and that its determinant is $\lambda\neq 0$. To verify the effect, we look again at $$(CA)_{kl}=\langle u_k,a_l\rangle=\begin{cases}\langle e_k,a_l\rangle=A_{kl} & k\neq i\\ \langle\lambda e_k,a_l\rangle=\lambda\langle e_k,a_l\rangle=\lambda A_{kl} & k=i\end{cases}$$


  3. Adding $\lambda r_j$ to $r_i$, $i\neq j$. Look at the same $C$ with $u_k=e_k$ for $k\neq i$ and $u_k=e_i+\lambda e_j$ for $k=i$. Again, that $C$ is elementary is obvious. You may verify the other properties; they go in a similar fashion as before.

$\Box$
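The Lemma can be spot-checked numerically. Below is a minimal plain-Python sketch (0-based indices, hypothetical example matrix): each elementary matrix $C$ is the identity with one row operation applied, and $CA$ performs that operation on $A$.

```python
# Sketch of the Lemma: build each type of elementary matrix C from the
# identity and check that C @ A performs the corresponding row operation.

def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def swap_matrix(n, i, j):
    C = identity(n)
    C[i], C[j] = C[j], C[i]   # type 1: exchange rows i and j
    return C

def scale_matrix(n, i, lam):
    C = identity(n)
    C[i][i] = lam             # type 2: scale row i by lam != 0
    return C

def add_matrix(n, i, j, lam):
    C = identity(n)
    C[i][j] = lam             # type 3: row i <- row i + lam * row j
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
swapped = matmul(swap_matrix(2, 0, 1), A)    # rows exchanged
scaled = matmul(scale_matrix(2, 0, 5.0), A)  # row 0 scaled by 5
added = matmul(add_matrix(2, 0, 1, 2.0), A)  # row 0 += 2 * row 1
```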



Now, let $A\in\mathbb{F}^{(n,n)}$ for some field $\mathbb{F}$ be invertible. I'll use, without a concrete proof, that $A$ may be transformed into $E_n$ (the identity) using Gauss-Jordan elimination (elementary row transformations). You can find a sketch in the hint at the end of the post.



As said before, there is a sequence of row transformations $(t_1,\dots, t_k)$ which, applied to $A$ sequentially, yields $E_n$. From the Lemma above, we know that each such row transformation corresponds to an elementary matrix $C_i$, so we may write the result as $C_k\cdots C_1A=E_n$ (note the order here: $t_1$ is applied to $A$ first, then $t_2$ to the result, and so on). Now, $$C_k\cdots C_1A=E_n \text{ implies } C_k\cdots C_1=A^{-1}$$ as the inverse is unique. Thus, as $A=(A^{-1})^{-1}$, we have that



$$A=(C_k\cdots C_1)^{-1}=C_1^{-1}\cdots C_k^{-1}$$



Now, all the $C_i^{-1}$ are again invertible and also elementary. Note also that I've used, without proof, that for a product of matrices $AB$ we have $(AB)^{-1}=B^{-1}A^{-1}$.
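The identity $(AB)^{-1}=B^{-1}A^{-1}$ used here can be spot-checked numerically. The sketch below uses hypothetical invertible $2\times 2$ matrices and the adjugate formula for the inverse:

```python
# Numeric spot-check of (AB)^{-1} = B^{-1} A^{-1}.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c   # assumed nonzero (M invertible)
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.0, 2.0], [3.0, 5.0]]
B = [[2.0, 0.0], [1.0, 1.0]]

lhs = inv2(matmul(A, B))        # (AB)^{-1}
rhs = matmul(inv2(B), inv2(A))  # B^{-1} A^{-1}
```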




Note that the proof mimics the way you would actually use Gauss-Jordan to compute the inverse. You can look at the block matrix $B=(A,E_n)$ and consider the row transformations $C_1,\dots, C_k$ (now directly in matrix form) that turn $A$ into $E_n$, applied to $B$. Then $B'=(E_n,\,C_k\cdots C_1E_n)$. As above, $C_k\cdots C_1=A^{-1}$, so this also tells us that $A^{-1}$ may be obtained by applying Gauss-Jordan to $B$ until $A$ has been transformed into $E_n$; the right half of $B'$ then shows the inverse of $A$.
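The $(A,E_n)\to(E_n,A^{-1})$ procedure described above can be sketched as follows, a hedged plain-Python version with simple partial pivoting and a hypothetical example matrix (it assumes the input is invertible):

```python
# Gauss-Jordan on the block matrix [A | I]: when the left half becomes
# the identity, the right half is A^{-1}.

def invert_via_gauss_jordan(A):
    n = len(A)
    # Form the block matrix B = (A, I_n).
    B = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: bring a row with the largest pivot up.
        pivot = max(range(col, n), key=lambda r: abs(B[r][col]))
        B[col], B[pivot] = B[pivot], B[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = B[col][col]   # nonzero for invertible A
        B[col] = [x / p for x in B[col]]
        # Eliminate this column in every other row.
        for r in range(n):
            if r != col:
                factor = B[r][col]
                B[r] = [x - factor * y for x, y in zip(B[r], B[col])]
    # The right half of B is now A^{-1}.
    return [row[n:] for row in B]

A_inv = invert_via_gauss_jordan([[2.0, 1.0], [1.0, 1.0]])
```

Each step of the loop is exactly a left-multiplication by one of the elementary matrices $C_i$ from the Lemma, applied to both blocks at once.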




Hint: Since $\ker(A)=\{\mathbf{0}\}$, solving $Ax=\mathbf{0}$ by Gauss-Jordan yields only the solution $x=\mathbf{0}$. You may check that this corresponds to $A$ being transformable into upper triangular form using elementary row transformations. From upper triangular form, you can then work backwards and transform $A$ into $E_n$.


























    Your Answer




    StackExchange.ifUsing("editor", function ()
    return StackExchange.using("mathjaxEditing", function ()
    StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
    StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
    );
    );
    , "mathjax-editing");

    StackExchange.ready(function()
    var channelOptions =
    tags: "".split(" "),
    id: "69"
    ;
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function()
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled)
    StackExchange.using("snippets", function()
    createEditor();
    );

    else
    createEditor();

    );

    function createEditor()
    StackExchange.prepareEditor(
    heartbeatType: 'answer',
    convertImagesToLinks: true,
    noModals: false,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: 10,
    bindNavPrevention: true,
    postfix: "",
    noCode: true, onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    );



    );








     

    draft saved


    draft discarded


















    StackExchange.ready(
    function ()
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f2876348%2fif-a-is-invertible-then-it-can-be-represented-as-a-product-of-elementary-matric%23new-answer', 'question_page');

    );

    Post as a guest






























    2 Answers
    2






    active

    oldest

    votes








    2 Answers
    2






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes








    up vote
    2
    down vote



    accepted










    A square matrix $A$ is invertible if and only if you can row reduce $A$ to an identity matrix $I$. Now each row operation that you use to reduce $A$ to $I$ can be represented by an elementary matrix, which is denoted by $E$. Suppose you need $n$ row operations in order to reduce $A$ to $I$. That means that $$(E_nE_n-1 ldots E_1)A=I.$$ Now you probably know that the inverse of a matrix is unique if it exists. So the product $(E_nE_n-1 ldots E_1)$ MUST be the inverse of $A$ because the uniqueness of inverse means that there is only one matrix $A^-1$ such that $A^-1A=I$ holds. Thus, we know $$(E_nE_n-1ldots E_1)=A^-1.$$



    To write $A$ as the product of elementary matrices, note that $$A=(A^-1)^-1=(E_nE_n-1 ldots E_1)^-1=E_1^-1E_2^-1ldots E_n^-1.$$



    The inverse of an elementary matrix is also an elementary matrix. So the last equation shows $A$ as the product of $n$ elementary metrics.






    share|cite|improve this answer




















    • oh I see.... the heart of this is that the inverse matrix $A^-$ is unique and represents a series of row reductions.
      – Jwan622
      Aug 9 at 13:54










    • The theorem just isn't intuitive to me without your explanation here. Thanks. The first part of the theorem would be more intuitive to me if it said: "A is invertible if its inverse can be written as a series of elementary matrices"...which I now know is equivalent to the original theorem.
      – Jwan622
      Aug 9 at 13:56










    • @Jwan622 Do you mean "If $A$ is invertible, then its inverse can be written as the product of elementary matrices"? Your sentence "A is invertible if its inverse can be written as a series of elementary matrices" does not make sense because you cannot speak of the inverse of $A$ before knowing $A$ is invertible.
      – kmiyazaki
      Aug 9 at 19:10















    up vote
    2
    down vote



    accepted










    A square matrix $A$ is invertible if and only if you can row reduce $A$ to an identity matrix $I$. Now each row operation that you use to reduce $A$ to $I$ can be represented by an elementary matrix, which is denoted by $E$. Suppose you need $n$ row operations in order to reduce $A$ to $I$. That means that $$(E_nE_n-1 ldots E_1)A=I.$$ Now you probably know that the inverse of a matrix is unique if it exists. So the product $(E_nE_n-1 ldots E_1)$ MUST be the inverse of $A$ because the uniqueness of inverse means that there is only one matrix $A^-1$ such that $A^-1A=I$ holds. Thus, we know $$(E_nE_n-1ldots E_1)=A^-1.$$



    To write $A$ as the product of elementary matrices, note that $$A=(A^-1)^-1=(E_nE_n-1 ldots E_1)^-1=E_1^-1E_2^-1ldots E_n^-1.$$



    The inverse of an elementary matrix is also an elementary matrix. So the last equation shows $A$ as the product of $n$ elementary metrics.






    share|cite|improve this answer




















    • oh I see.... the heart of this is that the inverse matrix $A^-$ is unique and represents a series of row reductions.
      – Jwan622
      Aug 9 at 13:54










    • The theorem just isn't intuitive to me without your explanation here. Thanks. The first part of the theorem would be more intuitive to me if it said: "A is invertible if its inverse can be written as a series of elementary matrices"...which I now know is equivalent to the original theorem.
      – Jwan622
      Aug 9 at 13:56










    • @Jwan622 Do you mean "If $A$ is invertible, then its inverse can be written as the product of elementary matrices"? Your sentence "A is invertible if its inverse can be written as a series of elementary matrices" does not make sense because you cannot speak of the inverse of $A$ before knowing $A$ is invertible.
      – kmiyazaki
      Aug 9 at 19:10













    up vote
    2
    down vote



    accepted







    up vote
    2
    down vote



    accepted






    A square matrix $A$ is invertible if and only if you can row reduce $A$ to an identity matrix $I$. Now each row operation that you use to reduce $A$ to $I$ can be represented by an elementary matrix, which is denoted by $E$. Suppose you need $n$ row operations in order to reduce $A$ to $I$. That means that $$(E_nE_n-1 ldots E_1)A=I.$$ Now you probably know that the inverse of a matrix is unique if it exists. So the product $(E_nE_n-1 ldots E_1)$ MUST be the inverse of $A$ because the uniqueness of inverse means that there is only one matrix $A^-1$ such that $A^-1A=I$ holds. Thus, we know $$(E_nE_n-1ldots E_1)=A^-1.$$



    To write $A$ as the product of elementary matrices, note that $$A=(A^-1)^-1=(E_nE_n-1 ldots E_1)^-1=E_1^-1E_2^-1ldots E_n^-1.$$



    The inverse of an elementary matrix is also an elementary matrix. So the last equation shows $A$ as the product of $n$ elementary metrics.






    share|cite|improve this answer












    A square matrix $A$ is invertible if and only if you can row reduce $A$ to an identity matrix $I$. Now each row operation that you use to reduce $A$ to $I$ can be represented by an elementary matrix, which is denoted by $E$. Suppose you need $n$ row operations in order to reduce $A$ to $I$. That means that $$(E_nE_n-1 ldots E_1)A=I.$$ Now you probably know that the inverse of a matrix is unique if it exists. So the product $(E_nE_n-1 ldots E_1)$ MUST be the inverse of $A$ because the uniqueness of inverse means that there is only one matrix $A^-1$ such that $A^-1A=I$ holds. Thus, we know $$(E_nE_n-1ldots E_1)=A^-1.$$



    To write $A$ as the product of elementary matrices, note that $$A=(A^-1)^-1=(E_nE_n-1 ldots E_1)^-1=E_1^-1E_2^-1ldots E_n^-1.$$



    The inverse of an elementary matrix is also an elementary matrix. So the last equation shows $A$ as the product of $n$ elementary metrics.







    share|cite|improve this answer












    share|cite|improve this answer



    share|cite|improve this answer










    answered Aug 8 at 19:21









    kmiyazaki

    33711




    33711











    • oh I see.... the heart of this is that the inverse matrix $A^-$ is unique and represents a series of row reductions.
      – Jwan622
      Aug 9 at 13:54










    • The theorem just isn't intuitive to me without your explanation here. Thanks. The first part of the theorem would be more intuitive to me if it said: "A is invertible if its inverse can be written as a series of elementary matrices"...which I now know is equivalent to the original theorem.
      – Jwan622
      Aug 9 at 13:56










    • @Jwan622 Do you mean "If $A$ is invertible, then its inverse can be written as the product of elementary matrices"? Your sentence "A is invertible if its inverse can be written as a series of elementary matrices" does not make sense because you cannot speak of the inverse of $A$ before knowing $A$ is invertible.
      – kmiyazaki
      Aug 9 at 19:10

















    • oh I see.... the heart of this is that the inverse matrix $A^-$ is unique and represents a series of row reductions.
      – Jwan622
      Aug 9 at 13:54










    • The theorem just isn't intuitive to me without your explanation here. Thanks. The first part of the theorem would be more intuitive to me if it said: "A is invertible if its inverse can be written as a series of elementary matrices"...which I now know is equivalent to the original theorem.
      – Jwan622
      Aug 9 at 13:56










    • @Jwan622 Do you mean "If $A$ is invertible, then its inverse can be written as the product of elementary matrices"? Your sentence "A is invertible if its inverse can be written as a series of elementary matrices" does not make sense because you cannot speak of the inverse of $A$ before knowing $A$ is invertible.
      – kmiyazaki
      Aug 9 at 19:10
















    oh I see.... the heart of this is that the inverse matrix $A^-$ is unique and represents a series of row reductions.
    – Jwan622
    Aug 9 at 13:54




    oh I see.... the heart of this is that the inverse matrix $A^-$ is unique and represents a series of row reductions.
    – Jwan622
    Aug 9 at 13:54












    The theorem just isn't intuitive to me without your explanation here. Thanks. The first part of the theorem would be more intuitive to me if it said: "A is invertible if its inverse can be written as a series of elementary matrices"...which I now know is equivalent to the original theorem.
    – Jwan622
    Aug 9 at 13:56




    The theorem just isn't intuitive to me without your explanation here. Thanks. The first part of the theorem would be more intuitive to me if it said: "A is invertible if its inverse can be written as a series of elementary matrices"...which I now know is equivalent to the original theorem.
    – Jwan622
    Aug 9 at 13:56












    @Jwan622 Do you mean "If $A$ is invertible, then its inverse can be written as the product of elementary matrices"? Your sentence "A is invertible if its inverse can be written as a series of elementary matrices" does not make sense because you cannot speak of the inverse of $A$ before knowing $A$ is invertible.
    – kmiyazaki
    Aug 9 at 19:10





    @Jwan622 Do you mean "If $A$ is invertible, then its inverse can be written as the product of elementary matrices"? Your sentence "A is invertible if its inverse can be written as a series of elementary matrices" does not make sense because you cannot speak of the inverse of $A$ before knowing $A$ is invertible.
    – kmiyazaki
    Aug 9 at 19:10











    up vote
    1
    down vote













    I'll try to give a more detailed proof of this. First, let's recap the following:



    Lemma: Let $AinmathbbF^(m,n)$. For any of the three elementary row operations on matrices



    1. $r_imapsto r_j$, $r_jmapsto r_i$ (swap)

    2. $r_imapsto lambda r_i$, $lambdaneq 0$ (scale)

    3. $r_imapsto r_i+lambda r_j$, $jneq i$ (addition, with scale)

    there exists a regular matrix s.t. $CinmathbbF^(m,m)$ s.t. $CA$ is the matrix after the transformation and this matrix is elementary.



    Proof: I divide between the different types of transformations I've listed. Let




    1. Exchanging row $i$ with row $j$. Let $$C=beginpmatrixu_1\vdots\u_mendpmatrix$$ for row vectors $u_kinmathbbF^m$ where $u_k=e_k$ for $kneq i,j$, $u_j=e_i$, $u_i=e_j$. Now, $(CA)_kl=langle u_k,a_lrangle =langle e_k,a_lrangle=A_kl$ for $kneq i$. For $k=i$, $(CA)_il=langle e_j,a_lrangle=A_jl$ and for $k=j$, $(CA)_jl=langle e_i,a_lrangle=A_il$. Thus, $CA$ is $A$ with row $i$ and $j$ swapped. Note, that $C$ itself was created from $E_m$ by swapping row $i$ and $j$, i.e. is elementary. Also, $det C=-1$, as swapping two rows swaps the sign of the determinant, as the determinant is an alternating function. Thus $C$ is regular as well.


    2. Scaling row $i$ by $lambdaneq 0$. Let $C$ be as above with $u_k=e_k$ for $kneq i$ and $u_k=lambda e_k$ for $k=i$. Now, it is clear that $C$ is again elementary and that its determinant is $lambdaneq 0$. For verifying the effect, we look again at $$(CA)_kl=langle u_k,a_lrangle=begincaseslangle e_k,a_lrangle=A_kl &kneq i\langlelambda e_k,a_lrangle=lambdalangle e_k,a_lrangle=lambda A_kl &k=iendcases$$


    3. Adding $lambda r_j$ to $r_i$, $ineq j$. Look at the same $C$ with $u_k=e_k$ for $kneq i$ and $u_k=e_i+lambda e_j$ for $k=j$. Again, that $C$ is elementary is obvious. You may verify the other properties. They again go in a similar fashion as before.

    $Box$



    Now, let $AinmathbbF^(n,n,)$ for some field $mathbbF$ be invertible. I'll use without a concrete proof that $A$ may be transformed into $E_n$ using Gauss-Jordan(elementary row transformation). You can find a sketch in the hint at the end of the post.



    As said before, there is a sequence of row transformations $(t_1,dots, t_k)$ which applied to $A$(sequentially) yield $E_n$. From the before Lemma, we know that every such row transformation corresponds to an elementary matrix $C_k$ and we may write the result as $C_kdots C_1A=E_n$. (note the direction here, as $t_1$ is applied first to $A$, then $t_2$ to this result and so on) Now, $$C_kdots C_1A=E_ntext implies C_kdots C_1=A^-1$$ as the inverse is unique. Thus, as $A=(A^-1)^-1$, we have that



    $$A=(C_kdots C_1)^-1=C^-1_1dots C^-1_k$$



    Now, all the $C^-1_i$ are invertible again and also elementary. Note also, that I've used without proof, that for a product of matrices $AB$, we have $(AB)^-1=B^-1A^-1$.




    Note, that the proof mimics the way you would actually use Gauss-Jordan to compute the inverse. You can look a the block matrix $B=(A,E_n)$ and consider the row transformations $C_1,dots, C_k$(now directly in matrix form), turning $A$ into $E_n$ applied to $B$. Then $B'=(E_n,C_kdots C_1E_n)$. As above, $C_kdots C_1=A^-1$, so this tells us also, that $A^-1$ may be obtained by applying Gauss-Jordan onto $B$ as long till $A$ was transformed into $E_n$. The right half of $B'$ then shows the inverse of $A$.




    Hint: Now, as $mathrmker(A)=mathbf0$, we have that, by solving $Ax=mathbf0$ using Gauss-Jordan, that the only solution obtained is $x=mathbf0$. You may check that this corresponds with $A$ being transformable into upper triangle form using elementary row transformations. From upper triangle form, you can now work up backwards and transform $A$ to $E_n$.






    share|cite|improve this answer
























      up vote
      1
      down vote













      I'll try to give a more detailed proof of this. First, let's recap the following:



      Lemma: Let $AinmathbbF^(m,n)$. For any of the three elementary row operations on matrices



      1. $r_imapsto r_j$, $r_jmapsto r_i$ (swap)

      2. $r_imapsto lambda r_i$, $lambdaneq 0$ (scale)

      3. $r_imapsto r_i+lambda r_j$, $jneq i$ (addition, with scale)

      there exists a regular matrix s.t. $CinmathbbF^(m,m)$ s.t. $CA$ is the matrix after the transformation and this matrix is elementary.



      Proof: I divide between the different types of transformations I've listed. Let




      1. Exchanging row $i$ with row $j$. Let $$C=beginpmatrixu_1\vdots\u_mendpmatrix$$ for row vectors $u_kinmathbbF^m$ where $u_k=e_k$ for $kneq i,j$, $u_j=e_i$, $u_i=e_j$. Now, $(CA)_kl=langle u_k,a_lrangle =langle e_k,a_lrangle=A_kl$ for $kneq i$. For $k=i$, $(CA)_il=langle e_j,a_lrangle=A_jl$ and for $k=j$, $(CA)_jl=langle e_i,a_lrangle=A_il$. Thus, $CA$ is $A$ with row $i$ and $j$ swapped. Note, that $C$ itself was created from $E_m$ by swapping row $i$ and $j$, i.e. is elementary. Also, $det C=-1$, as swapping two rows swaps the sign of the determinant, as the determinant is an alternating function. Thus $C$ is regular as well.


      2. Scaling row $i$ by $lambdaneq 0$. Let $C$ be as above with $u_k=e_k$ for $kneq i$ and $u_k=lambda e_k$ for $k=i$. Now, it is clear that $C$ is again elementary and that its determinant is $lambdaneq 0$. For verifying the effect, we look again at $$(CA)_kl=langle u_k,a_lrangle=begincaseslangle e_k,a_lrangle=A_kl &kneq i\langlelambda e_k,a_lrangle=lambdalangle e_k,a_lrangle=lambda A_kl &k=iendcases$$


      3. Adding $lambda r_j$ to $r_i$, $ineq j$. Look at the same $C$ with $u_k=e_k$ for $kneq i$ and $u_k=e_i+lambda e_j$ for $k=j$. Again, that $C$ is elementary is obvious. You may verify the other properties. They again go in a similar fashion as before.

      $Box$



      Now, let $AinmathbbF^(n,n,)$ for some field $mathbbF$ be invertible. I'll use without a concrete proof that $A$ may be transformed into $E_n$ using Gauss-Jordan(elementary row transformation). You can find a sketch in the hint at the end of the post.



      As said before, there is a sequence of row transformations $(t_1,dots, t_k)$ which applied to $A$(sequentially) yield $E_n$. From the before Lemma, we know that every such row transformation corresponds to an elementary matrix $C_k$ and we may write the result as $C_kdots C_1A=E_n$. (note the direction here, as $t_1$ is applied first to $A$, then $t_2$ to this result and so on) Now, $$C_kdots C_1A=E_ntext implies C_kdots C_1=A^-1$$ as the inverse is unique. Thus, as $A=(A^-1)^-1$, we have that



      $$A=(C_kdots C_1)^-1=C^-1_1dots C^-1_k$$



      Now, all the $C^-1_i$ are invertible again and also elementary. Note also, that I've used without proof, that for a product of matrices $AB$, we have $(AB)^-1=B^-1A^-1$.




      Note, that the proof mimics the way you would actually use Gauss-Jordan to compute the inverse. You can look a the block matrix $B=(A,E_n)$ and consider the row transformations $C_1,dots, C_k$(now directly in matrix form), turning $A$ into $E_n$ applied to $B$. Then $B'=(E_n,C_kdots C_1E_n)$. As above, $C_kdots C_1=A^-1$, so this tells us also, that $A^-1$ may be obtained by applying Gauss-Jordan onto $B$ as long till $A$ was transformed into $E_n$. The right half of $B'$ then shows the inverse of $A$.




      Hint: Now, as $mathrmker(A)=mathbf0$, we have that, by solving $Ax=mathbf0$ using Gauss-Jordan, that the only solution obtained is $x=mathbf0$. You may check that this corresponds with $A$ being transformable into upper triangle form using elementary row transformations. From upper triangle form, you can now work up backwards and transform $A$ to $E_n$.






      share|cite|improve this answer






















        up vote
        1
        down vote










        up vote
        1
        down vote









        I'll try to give a more detailed proof of this. First, let's recap the following:



        Lemma: Let $AinmathbbF^(m,n)$. For any of the three elementary row operations on matrices



        1. $r_imapsto r_j$, $r_jmapsto r_i$ (swap)

        2. $r_imapsto lambda r_i$, $lambdaneq 0$ (scale)

        3. $r_imapsto r_i+lambda r_j$, $jneq i$ (addition, with scale)

        there exists a regular matrix s.t. $CinmathbbF^(m,m)$ s.t. $CA$ is the matrix after the transformation and this matrix is elementary.



        Proof: I divide between the different types of transformations I've listed. Let




        1. Exchanging row $i$ with row $j$. Let $$C=beginpmatrixu_1\vdots\u_mendpmatrix$$ for row vectors $u_kinmathbbF^m$ where $u_k=e_k$ for $kneq i,j$, $u_j=e_i$, $u_i=e_j$. Now, $(CA)_kl=langle u_k,a_lrangle =langle e_k,a_lrangle=A_kl$ for $kneq i$. For $k=i$, $(CA)_il=langle e_j,a_lrangle=A_jl$ and for $k=j$, $(CA)_jl=langle e_i,a_lrangle=A_il$. Thus, $CA$ is $A$ with row $i$ and $j$ swapped. Note, that $C$ itself was created from $E_m$ by swapping row $i$ and $j$, i.e. is elementary. Also, $det C=-1$, as swapping two rows swaps the sign of the determinant, as the determinant is an alternating function. Thus $C$ is regular as well.














        I'll try to give a more detailed proof of this. First, let's recap the following:



        Lemma: Let $A\in\mathbb{F}^{m\times n}$. For any of the three elementary row operations on matrices

        1. $r_i\mapsto r_j$, $r_j\mapsto r_i$ (swap)

        2. $r_i\mapsto \lambda r_i$, $\lambda\neq 0$ (scale)

        3. $r_i\mapsto r_i+\lambda r_j$, $j\neq i$ (add a scaled row)

        there exists a regular matrix $C\in\mathbb{F}^{m\times m}$ such that $CA$ is the matrix after the transformation, and this $C$ is elementary.



        Proof: I treat the three types of transformations listed above in turn.




        1. Exchanging row $i$ with row $j$. Let $$C=\begin{pmatrix}u_1\\\vdots\\u_m\end{pmatrix}$$ for row vectors $u_k\in\mathbb{F}^m$, where $u_k=e_k$ for $k\neq i,j$, $u_i=e_j$ and $u_j=e_i$. Now, $(CA)_{kl}=\langle u_k,a_l\rangle=\langle e_k,a_l\rangle=A_{kl}$ for $k\neq i,j$. For $k=i$, $(CA)_{il}=\langle e_j,a_l\rangle=A_{jl}$, and for $k=j$, $(CA)_{jl}=\langle e_i,a_l\rangle=A_{il}$. Thus, $CA$ is $A$ with rows $i$ and $j$ swapped. Note that $C$ itself is obtained from $E_m$ by swapping rows $i$ and $j$, i.e. it is elementary. Also, $\det C=-1$, since swapping two rows flips the sign of the determinant (the determinant is an alternating function), so $C$ is regular as well.


        2. Scaling row $i$ by $\lambda\neq 0$. Let $C$ be as above with $u_k=e_k$ for $k\neq i$ and $u_i=\lambda e_i$. It is clear that $C$ is again elementary and that its determinant is $\lambda\neq 0$. To verify the effect, we look again at $$(CA)_{kl}=\langle u_k,a_l\rangle=\begin{cases}\langle e_k,a_l\rangle=A_{kl} & k\neq i\\\langle\lambda e_k,a_l\rangle=\lambda\langle e_k,a_l\rangle=\lambda A_{kl} & k=i\end{cases}$$


        3. Adding $\lambda r_j$ to $r_i$, $i\neq j$. Take the same $C$ with $u_k=e_k$ for $k\neq i$ and $u_i=e_i+\lambda e_j$. That this $C$ is elementary is again obvious (and $\det C=1$). The remaining properties may be verified in the same fashion as before.

        $\Box$
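The Lemma can be checked numerically. Below is a small NumPy sketch (the helper names are my own, not from the proof) that builds the three kinds of elementary matrices and verifies that left-multiplication by each performs the corresponding row operation:

```python
import numpy as np

def swap_matrix(m, i, j):
    """Elementary matrix C such that C @ A swaps rows i and j of A."""
    C = np.eye(m)
    C[[i, j]] = C[[j, i]]
    return C

def scale_matrix(m, i, lam):
    """Elementary matrix C such that C @ A scales row i of A by lam != 0."""
    C = np.eye(m)
    C[i, i] = lam
    return C

def add_matrix(m, i, j, lam):
    """Elementary matrix C such that C @ A adds lam * (row j) to row i of A."""
    C = np.eye(m)
    C[i, j] = lam
    return C

A = np.array([[1., 2.], [3., 4.], [5., 6.]])  # an arbitrary 3x2 example

# swap rows 0 and 2
assert np.allclose(swap_matrix(3, 0, 2) @ A, A[[2, 1, 0]])
# scaling row 1 by 7: det C = 7, as in the proof
assert np.isclose(np.linalg.det(scale_matrix(3, 1, 7.0)), 7.0)
# add 2 * row 0 to row 1
B = add_matrix(3, 1, 0, 2.0) @ A
assert np.allclose(B[1], A[1] + 2.0 * A[0])
```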



        Now, let $A\in\mathbb{F}^{n\times n}$ for some field $\mathbb{F}$ be invertible. I'll use without a concrete proof that $A$ may be transformed into the identity $E_n$ by Gauss-Jordan elimination (elementary row transformations). You can find a sketch in the hint at the end of the post.



        As said before, there is a sequence of row transformations $(t_1,\dots,t_k)$ which, applied to $A$ sequentially, yields $E_n$. By the Lemma above, each such row transformation corresponds to an elementary matrix $C_i$, and we may write the result as $C_k\cdots C_1A=E_n$ (note the order here: $t_1$ is applied to $A$ first, then $t_2$ to the result, and so on). Now, $$C_k\cdots C_1A=E_n \implies C_k\cdots C_1=A^{-1},$$ as the inverse is unique. Thus, since $A=(A^{-1})^{-1}$, we have that



        $$A=(C_k\cdots C_1)^{-1}=C_1^{-1}\cdots C_k^{-1}$$



        Now, all the $C_i^{-1}$ are again invertible and also elementary. Note also that I've used without proof that for a product of invertible matrices $AB$, we have $(AB)^{-1}=B^{-1}A^{-1}$.
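This factorization can be made concrete with a short Gauss-Jordan routine that records the elementary matrices as it reduces $A$ to the identity; a sketch under my own naming, not the answer's notation:

```python
import numpy as np

def elementary_factors(A):
    """Reduce A to the identity by Gauss-Jordan and return the list of
    elementary matrices C_1, ..., C_k with C_k ... C_1 A = I."""
    A = A.astype(float).copy()
    n = A.shape[0]
    Cs = []
    def apply(C):
        nonlocal A
        Cs.append(C)
        A = C @ A
    for col in range(n):
        # swap a row with a nonzero pivot into place (exists since A is invertible)
        piv = col + int(np.argmax(np.abs(A[col:, col])))
        if piv != col:
            C = np.eye(n); C[[col, piv]] = C[[piv, col]]
            apply(C)
        # scale the pivot row so the pivot becomes 1
        C = np.eye(n); C[col, col] = 1.0 / A[col, col]
        apply(C)
        # eliminate every other entry in this column
        for row in range(n):
            if row != col and A[row, col] != 0:
                C = np.eye(n); C[row, col] = -A[row, col]
                apply(C)
    return Cs

A = np.array([[2., 1.], [1., 3.]])
Cs = elementary_factors(A)

# C_k ... C_1 = A^{-1}
prod = np.eye(2)
for C in Cs:
    prod = C @ prod
assert np.allclose(prod @ A, np.eye(2))

# A = C_1^{-1} ... C_k^{-1}: a product of elementary matrices
recon = np.eye(2)
for C in Cs:
    recon = recon @ np.linalg.inv(C)
assert np.allclose(recon, A)
```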




        Note that the proof mimics the way you would actually use Gauss-Jordan to compute the inverse. Consider the block matrix $B=(A\mid E_n)$ and apply the row transformations $C_1,\dots,C_k$ (now directly in matrix form) that turn $A$ into $E_n$ to all of $B$. Then $B'=(E_n\mid C_k\cdots C_1E_n)$. As above, $C_k\cdots C_1=A^{-1}$, so this also tells us that $A^{-1}$ may be obtained by applying Gauss-Jordan to $B$ until $A$ has been transformed into $E_n$; the right half of $B'$ then shows the inverse of $A$.
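A minimal NumPy sketch of this block-matrix procedure (the function name `inverse_via_block` is my own):

```python
import numpy as np

def inverse_via_block(A):
    """Gauss-Jordan on the block matrix B = (A | I); once the left half
    has become I, the right half is A^{-1}."""
    n = A.shape[0]
    B = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        piv = col + int(np.argmax(np.abs(B[col:, col])))
        B[[col, piv]] = B[[piv, col]]           # swap pivot row into place
        B[col] /= B[col, col]                   # scale pivot row to pivot 1
        for row in range(n):
            if row != col:
                B[row] -= B[row, col] * B[col]  # eliminate the rest of the column
    return B[:, n:]

A = np.array([[2., 1.], [1., 3.]])
assert np.allclose(inverse_via_block(A) @ A, np.eye(2))
```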




        Hint: Since $\ker(A)=\{\mathbf{0}\}$, solving $Ax=\mathbf{0}$ by Gauss-Jordan yields only the solution $x=\mathbf{0}$. You may check that this corresponds to $A$ being transformable into upper triangular form with nonzero diagonal entries using elementary row transformations. From upper triangular form, you can then work back upwards and transform $A$ into $E_n$.
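The hint can be illustrated numerically: forward elimination brings an invertible $A$ to upper triangular form, and all diagonal entries come out nonzero, so back-substitution can finish the reduction to $E_n$. A sketch with an example matrix of my own choosing:

```python
import numpy as np

A = np.array([[2., 1., 1.], [1., 3., 0.], [0., 1., 4.]])  # invertible example

# forward elimination to upper triangular form
U = A.copy()
n = U.shape[0]
for col in range(n - 1):
    piv = col + int(np.argmax(np.abs(U[col:, col])))
    U[[col, piv]] = U[[piv, col]]
    for row in range(col + 1, n):
        U[row] -= (U[row, col] / U[col, col]) * U[col]

# nonzero diagonal <=> the only solution of Ux = 0 (hence Ax = 0) is x = 0
assert np.all(np.abs(np.diag(U)) > 1e-12)
```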

















        answered Aug 8 at 19:28









        zzuussee



































































































