If $A^2 = I$ (Identity Matrix), then $A = \pm I$

So I'm studying linear algebra, and one of the self-study exercises has a set of true-or-false questions. One of the questions is this:

If $A^2 = I$ (Identity Matrix), then $A = \pm I$?

I'm pretty sure it is true, but the answer key says it's false. How can this be false (or is it perhaps a typo in the book)? Thanks.

asked Feb 5 '12 at 20:11 – Randolf Rincón Fadul (edited Mar 16 '16 at 5:55 by Martin Sleziak)
  • 23

    Try $$ A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}. $$
    – Dylan Moreland
    Feb 5 '12 at 20:13
  • 1

    I'd point out that it is true if you're working with $1$-by-$1$ matrices (over $\mathbb{C}$, or any other integral domain). But for $n \geq 2$ the ring of $n$-by-$n$ matrices over any non-trivial ring is not an integral domain: this means that $(A+I)(A-I) = 0$ doesn't necessarily imply that $A + I = 0$ or $A - I = 0$.
    – Matt
    Feb 5 '12 at 20:30
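The zero-divisor point in this comment is easy to check numerically. A minimal sketch in plain Python (the `matmul` and `add` helpers are mine, not from the thread), using $A = \operatorname{diag}(1, -1)$:

```python
def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(X, Y):
    """Entrywise sum of two matrices."""
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

# A = diag(1, -1) satisfies A^2 = I, so (A+I)(A-I) = 0 ...
A = [[1, 0], [0, -1]]
I = [[1, 0], [0, 1]]
negI = [[-1, 0], [0, -1]]

ApI = add(A, I)      # [[2, 0], [0, 0]]  -- nonzero
AmI = add(A, negI)   # [[0, 0], [0, -2]] -- nonzero
Z = [[0, 0], [0, 0]]

print(matmul(ApI, AmI) == Z)   # True: a product of two nonzero matrices is 0
print(ApI != Z and AmI != Z)   # True: yet neither factor vanishes
```

So in the matrix ring the factorization $(A+I)(A-I)=0$ does not force either factor to be zero, which is exactly why the "obvious" proof of the exercise's claim breaks down.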











  • possible duplicate of Finding number of matrices whose square is the identity matrix
    – Jonas Meyer
    Feb 5 '12 at 20:56










  • There's an entire family of so-called involutory matrices. Look up Householder reflectors, for instance.
    – J. M. is not a mathematician
    Feb 6 '12 at 5:11










  • What book is that exercise from?
    – Rhaldryn
    Jan 22 '17 at 17:43














5 Answers

















up vote 33 down vote accepted

A simple counterexample is $$A = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$$ We have $A \neq \pm I$, but $A^2 = I$.

answered Feb 5 '12 at 20:15 – Martin Wanvik
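This counterexample can be verified in a few lines of plain Python (no libraries; the `matmul` helper is mine, not from the thread):

```python
def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 0],
     [0, -1]]
I = [[1, 0],
     [0, 1]]

print(matmul(A, A) == I)                        # True: A squared is the identity
print(A == I or A == [[-1, 0], [0, -1]])        # False: A is neither I nor -I
```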



























up vote 19 down vote

In dimension $\geq 2$, take the matrix that exchanges two basis vectors ("a transposition").

answered Feb 5 '12 at 20:16 – Blah

  • If you want to exchange the (standard) basis vectors $e_i$ and $e_j$ ($1 \leq i, j \leq n$), then use the matrix $A = [m_{ij}]$ with $m_{kk} = 1$ for $k \neq i, j$, $m_{ij} = m_{ji} = 1$, and $m_{kl} = 0$ for all other values of $k$ and $l$. For example, if you want $e_2$ and $e_3$ exchanged in $\mathbb{R}^3$, take $$A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}$$ It is clear that such a matrix always satisfies $A^2 = I$, since applying it twice always gets you back to where you started.
    – Martin Wanvik
    Feb 5 '12 at 21:01

  • Thank you @Martin Wanvik, pretty clear explanation.
    – Randolf Rincón Fadul
    Feb 5 '12 at 21:52
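A quick sanity check of the $e_2 \leftrightarrow e_3$ swap matrix from the comment, in plain Python (the `matmul` helper is mine, not from the thread):

```python
def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# The transposition that swaps e_2 and e_3 in R^3.
A = [[1, 0, 0],
     [0, 0, 1],
     [0, 1, 0]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

print(matmul(A, A) == I3)  # True: swapping twice is the identity

# Applying A to a vector swaps its 2nd and 3rd entries.
v = [10, 20, 30]
Av = [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]
print(Av)  # [10, 30, 20]
```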

















up vote 12 down vote

I know $2 \cdot \mathbb{C}^2$ many counterexamples, namely

$$A = c_1\begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix} + c_2\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix} \pm \sqrt{c_1^2 + c_2^2 \pm 1}\begin{pmatrix}
0 & -1 \\
1 & 0
\end{pmatrix},$$

see Pauli matrices $\sigma_i$.

These are all such matrices and can be written as $A = \vec{e} \cdot \vec{\sigma}$, where $\vec{e}^{\,2} = \pm 1$.

edited Feb 6 '12 at 18:40, answered Feb 5 '12 at 20:57 – Nikolaj-K
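A numerical spot check of this two-parameter family in plain Python. Note the assumption here: taking the $-1$ branch inside the square root gives $A^2 = I$ (the $+1$ branch gives $A^2 = -I$); the `matmul` helper is mine, not from the thread.

```python
import math

def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Pick c1, c2 freely; the -1 branch of the square root then forces A^2 = I.
c1, c2 = 3.0, 4.0
c3 = math.sqrt(c1**2 + c2**2 - 1)

# A = c1*[[0,1],[1,0]] + c2*[[1,0],[0,-1]] + c3*[[0,-1],[1,0]], written out:
A = [[c2,      c1 - c3],
     [c1 + c3, -c2]]

A2 = matmul(A, A)
print(all(math.isclose(A2[i][j], float(i == j), abs_tol=1e-9)
          for i in range(2) for j in range(2)))  # True: A^2 = I to rounding
```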





























up vote 7 down vote

The following matrix is a counterexample: $$
A =
\left( \begin{array}{cc}
-1 & 0 \\
0 & 1 \\
\end{array} \right)
$$

answered Feb 5 '12 at 20:20 – azarel



























up vote 6 down vote

"Most" (read: diagonalizable) matrices can be viewed simply as a list of numbers -- their eigenvalues -- in the right basis. When doing arithmetic with just this matrix (or with other matrices that diagonalize in the same basis), you just do arithmetic on the eigenvalues.

So, to find diagonalizable solutions to $A^2 = I$, we just need to write down a matrix whose eigenvalues satisfy $\lambda^2 = 1$ -- and any such matrix will do.

Thinking about matrices in this way -- as a list of independent numbers -- makes it easy to think your way through problems like this.

answered Feb 6 '12 at 4:56 – Hurkyl
















  • 1

    Every matrix satisfying $A^2=I$ is diagonalizable, because either it is $\pm I$ or its minimal polynomial is $(x-1)(x+1)$. The general solution is obtained by taking all diagonal matrices with entries $\pm 1$ on the diagonal and conjugating by invertible matrices.
    – Jonas Meyer
    Feb 6 '12 at 5:03

  • 2

    Jonas Meyer, this is only true if $\operatorname{char} F \neq 2$. Otherwise, there are such matrices which are not diagonalizable.
    – the L
    Feb 6 '12 at 8:18

  • 1

    @Jonas: That's a good point to mention as an appendix, but dealing properly with non-diagonalizable matrices in this fashion is somewhat more sophisticated. The only reason I mentioned the word was so that I didn't mislead Randolf into thinking this method works (unmodified) for all matrices; e.g. the argument I gave isn't sufficient to tell us that this (or any!) equation has only diagonalizable solutions.
    – Hurkyl
    Feb 6 '12 at 9:53

  • 1

    @anonymous: Good point, e.g. $\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$. @Hurkyl: I agree, it is best as an appendix. I appreciate your caution, but wanted to point out that your method does lead to the general solution (in the characteristic $0$ case that the OP is probably working in).
    – Jonas Meyer
    Feb 6 '12 at 15:49
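Both points in this thread can be checked concretely in plain Python: conjugating $D = \operatorname{diag}(1, -1)$ by any invertible $P$ gives a solution of $A^2 = I$ that is not $\pm I$, while over a field of characteristic $2$ the matrix from the last comment squares to $I$ without being diagonalizable. The helpers and the choice of $P$ are mine, not from the thread.

```python
from fractions import Fraction as F

def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Conjugate D = diag(1, -1) by an invertible P: A = P D P^{-1}.
# For a 2x2 matrix P = [[a, b], [c, d]], P^{-1} = (1/det) [[d, -b], [-c, a]].
P = [[F(2), F(1)],
     [F(1), F(1)]]
det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
Pinv = [[ P[1][1] / det, -P[0][1] / det],
        [-P[1][0] / det,  P[0][0] / det]]
D = [[F(1), F(0)], [F(0), F(-1)]]

A = matmul(matmul(P, D), Pinv)
I2 = [[F(1), F(0)], [F(0), F(1)]]
negI2 = [[-x for x in row] for row in I2]

print(A != I2 and A != negI2)   # True: A is not ±I
print(matmul(A, A) == I2)       # True: yet A^2 = I

# The characteristic-2 caveat: over F_2, [[1,1],[0,1]] squares to I
# (reduce entries mod 2) but is not diagonalizable.
N = [[1, 1], [0, 1]]
N2 = [[x % 2 for x in row] for row in matmul(N, N)]
print(N2 == [[1, 0], [0, 1]])   # True
```

Exact `Fraction` arithmetic is used so the equality tests are not clouded by floating-point rounding.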










        Your Answer




        StackExchange.ifUsing("editor", function ()
        return StackExchange.using("mathjaxEditing", function ()
        StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
        StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
        );
        );
        , "mathjax-editing");

        StackExchange.ready(function()
        var channelOptions =
        tags: "".split(" "),
        id: "69"
        ;
        initTagRenderer("".split(" "), "".split(" "), channelOptions);

        StackExchange.using("externalEditor", function()
        // Have to fire editor after snippets, if snippets enabled
        if (StackExchange.settings.snippets.snippetsEnabled)
        StackExchange.using("snippets", function()
        createEditor();
        );

        else
        createEditor();

        );

        function createEditor()
        StackExchange.prepareEditor(
        heartbeatType: 'answer',
        convertImagesToLinks: true,
        noModals: false,
        showLowRepImageUploadWarning: true,
        reputationToPostImages: 10,
        bindNavPrevention: true,
        postfix: "",
        noCode: true, onDemand: true,
        discardSelector: ".discard-answer"
        ,immediatelyShowMarkdownHelp:true
        );



        );








         

        draft saved


        draft discarded


















        StackExchange.ready(
        function ()
        StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f106070%2fif-a2-i-identity-matrix-then-a-pm-i%23new-answer', 'question_page');

        );

        Post as a guest






























        5 Answers
        5






        active

        oldest

        votes








        5 Answers
        5






        active

        oldest

        votes









        active

        oldest

        votes






        active

        oldest

        votes








        up vote
        33
        down vote



        accepted










        A simple counterexample is $$A = beginbmatrix 1 & 0 \ 0 & -1 endbmatrix $$ We have $A neq pm I$, but $A^2 = I$.






        share|cite|improve this answer
























          up vote
          33
          down vote



          accepted










          A simple counterexample is $$A = beginbmatrix 1 & 0 \ 0 & -1 endbmatrix $$ We have $A neq pm I$, but $A^2 = I$.






          share|cite|improve this answer






















            up vote
            33
            down vote



            accepted







            up vote
            33
            down vote



            accepted






            A simple counterexample is $$A = beginbmatrix 1 & 0 \ 0 & -1 endbmatrix $$ We have $A neq pm I$, but $A^2 = I$.






            share|cite|improve this answer












            A simple counterexample is $$A = beginbmatrix 1 & 0 \ 0 & -1 endbmatrix $$ We have $A neq pm I$, but $A^2 = I$.







            share|cite|improve this answer












            share|cite|improve this answer



            share|cite|improve this answer










            answered Feb 5 '12 at 20:15









            Martin Wanvik

            2,5321215




            2,5321215




















                up vote
                19
                down vote













                In dimension $geq 2$ take the matrix that exchanges two basis vectors ("a transposition")






                share|cite|improve this answer




















                • If you want to exchange the (standard) basis vectors $e_i$ and $e_j$ ($1 leq i,j leq n$), then use the matrix $A = [m_ij]$ with $m_kk = 1, kneq i,j$, $m_ij = m_ji = 1$ and $m_kl = 0$ for all other values of $k$ and $l$. For example, if you want $e_2$ and $e_3$ exhanged in $mathbbR^3$, take $$A = beginbmatrix 1 & 0 & 0 \ 0 & 0 & 1 \ 0 & 1 & 0 endbmatrix$$ It is clear that such a matrix always satisfies $A^2 = I$, since applying it twice always gets you back to where you started.
                  – Martin Wanvik
                  Feb 5 '12 at 21:01











                • Thank you @Martin Wanvik, pretty clear explanation.
                  – Randolf Rincón Fadul
                  Feb 5 '12 at 21:52














                up vote
                19
                down vote













                In dimension $geq 2$ take the matrix that exchanges two basis vectors ("a transposition")






                share|cite|improve this answer




















                • If you want to exchange the (standard) basis vectors $e_i$ and $e_j$ ($1 leq i,j leq n$), then use the matrix $A = [m_ij]$ with $m_kk = 1, kneq i,j$, $m_ij = m_ji = 1$ and $m_kl = 0$ for all other values of $k$ and $l$. For example, if you want $e_2$ and $e_3$ exhanged in $mathbbR^3$, take $$A = beginbmatrix 1 & 0 & 0 \ 0 & 0 & 1 \ 0 & 1 & 0 endbmatrix$$ It is clear that such a matrix always satisfies $A^2 = I$, since applying it twice always gets you back to where you started.
                  – Martin Wanvik
                  Feb 5 '12 at 21:01











                • Thank you @Martin Wanvik, pretty clear explanation.
                  – Randolf Rincón Fadul
                  Feb 5 '12 at 21:52












                up vote
                19
                down vote










                up vote
                19
                down vote









                In dimension $geq 2$ take the matrix that exchanges two basis vectors ("a transposition")






                share|cite|improve this answer












                In dimension $geq 2$ take the matrix that exchanges two basis vectors ("a transposition")







                share|cite|improve this answer












                share|cite|improve this answer



                share|cite|improve this answer










                answered Feb 5 '12 at 20:16









                Blah

                4,212915




                4,212915











                • If you want to exchange the (standard) basis vectors $e_i$ and $e_j$ ($1 leq i,j leq n$), then use the matrix $A = [m_ij]$ with $m_kk = 1, kneq i,j$, $m_ij = m_ji = 1$ and $m_kl = 0$ for all other values of $k$ and $l$. For example, if you want $e_2$ and $e_3$ exhanged in $mathbbR^3$, take $$A = beginbmatrix 1 & 0 & 0 \ 0 & 0 & 1 \ 0 & 1 & 0 endbmatrix$$ It is clear that such a matrix always satisfies $A^2 = I$, since applying it twice always gets you back to where you started.
                  – Martin Wanvik
                  Feb 5 '12 at 21:01











                • Thank you @Martin Wanvik, pretty clear explanation.
                  – Randolf Rincón Fadul
                  Feb 5 '12 at 21:52
















                • If you want to exchange the (standard) basis vectors $e_i$ and $e_j$ ($1 leq i,j leq n$), then use the matrix $A = [m_ij]$ with $m_kk = 1, kneq i,j$, $m_ij = m_ji = 1$ and $m_kl = 0$ for all other values of $k$ and $l$. For example, if you want $e_2$ and $e_3$ exhanged in $mathbbR^3$, take $$A = beginbmatrix 1 & 0 & 0 \ 0 & 0 & 1 \ 0 & 1 & 0 endbmatrix$$ It is clear that such a matrix always satisfies $A^2 = I$, since applying it twice always gets you back to where you started.
                  – Martin Wanvik
                  Feb 5 '12 at 21:01











                • Thank you @Martin Wanvik, pretty clear explanation.
                  – Randolf Rincón Fadul
                  Feb 5 '12 at 21:52















                If you want to exchange the (standard) basis vectors $e_i$ and $e_j$ ($1 leq i,j leq n$), then use the matrix $A = [m_ij]$ with $m_kk = 1, kneq i,j$, $m_ij = m_ji = 1$ and $m_kl = 0$ for all other values of $k$ and $l$. For example, if you want $e_2$ and $e_3$ exhanged in $mathbbR^3$, take $$A = beginbmatrix 1 & 0 & 0 \ 0 & 0 & 1 \ 0 & 1 & 0 endbmatrix$$ It is clear that such a matrix always satisfies $A^2 = I$, since applying it twice always gets you back to where you started.
                – Martin Wanvik
                Feb 5 '12 at 21:01





                If you want to exchange the (standard) basis vectors $e_i$ and $e_j$ ($1 leq i,j leq n$), then use the matrix $A = [m_ij]$ with $m_kk = 1, kneq i,j$, $m_ij = m_ji = 1$ and $m_kl = 0$ for all other values of $k$ and $l$. For example, if you want $e_2$ and $e_3$ exhanged in $mathbbR^3$, take $$A = beginbmatrix 1 & 0 & 0 \ 0 & 0 & 1 \ 0 & 1 & 0 endbmatrix$$ It is clear that such a matrix always satisfies $A^2 = I$, since applying it twice always gets you back to where you started.
                – Martin Wanvik
                Feb 5 '12 at 21:01













                Thank you @Martin Wanvik, pretty clear explanation.
                – Randolf Rincón Fadul
                Feb 5 '12 at 21:52




                Thank you @Martin Wanvik, pretty clear explanation.
                – Randolf Rincón Fadul
                Feb 5 '12 at 21:52










                up vote
                12
                down vote













                I know $2·mathbb C^2$ many counterexamples, namely



                $$A=c_1beginpmatrix
                0&1\
                1&0
                endpmatrix+c_2beginpmatrix
                1&0\
                0&-1
                endpmatrixpmsqrtc_1^2+c_2^2pm1beginpmatrix
                0&-1\
                1&0
                endpmatrix,$$



                see Pauli Matrices $sigma_i$.



                These are all such matrices and can be written as $A=vec e· vec sigma$, where $vec e^2=pm1$.






                share|cite|improve this answer


























                  up vote
                  12
                  down vote













                  I know $2·mathbb C^2$ many counterexamples, namely



                  $$A=c_1beginpmatrix
                  0&1\
                  1&0
                  endpmatrix+c_2beginpmatrix
                  1&0\
                  0&-1
                  endpmatrixpmsqrtc_1^2+c_2^2pm1beginpmatrix
                  0&-1\
                  1&0
                  endpmatrix,$$



                  see Pauli Matrices $sigma_i$.



                  These are all such matrices and can be written as $A=vec e· vec sigma$, where $vec e^2=pm1$.






                  share|cite|improve this answer
























                    up vote
                    12
                    down vote










                    up vote
                    12
                    down vote









                    I know $2·mathbb C^2$ many counterexamples, namely



                    $$A=c_1beginpmatrix
                    0&1\
                    1&0
                    endpmatrix+c_2beginpmatrix
                    1&0\
                    0&-1
                    endpmatrixpmsqrtc_1^2+c_2^2pm1beginpmatrix
                    0&-1\
                    1&0
                    endpmatrix,$$



                    see Pauli Matrices $sigma_i$.



                    These are all such matrices and can be written as $A=vec e· vec sigma$, where $vec e^2=pm1$.






                    share|cite|improve this answer














                    I know $2·mathbb C^2$ many counterexamples, namely



                    $$A=c_1beginpmatrix
                    0&1\
                    1&0
                    endpmatrix+c_2beginpmatrix
                    1&0\
                    0&-1
                    endpmatrixpmsqrtc_1^2+c_2^2pm1beginpmatrix
                    0&-1\
                    1&0
                    endpmatrix,$$



                    see Pauli Matrices $sigma_i$.



                    These are all such matrices and can be written as $A=vec e· vec sigma$, where $vec e^2=pm1$.







                    share|cite|improve this answer














                    share|cite|improve this answer



                    share|cite|improve this answer








                    edited Feb 6 '12 at 18:40

























                    answered Feb 5 '12 at 20:57









                    Nikolaj-K

                    5,73222967




                    5,73222967




















                        up vote
                        7
                        down vote













                        The following matrix is a conterexample $
                        A =
                        left( beginarraycc
                        -1 & 0 \
                        0 & 1 \
                        endarray right)
                        $






                        share|cite|improve this answer
























                          up vote
                          7
                          down vote













                          The following matrix is a conterexample $
                          A =
                          left( beginarraycc
                          -1 & 0 \
                          0 & 1 \
                          endarray right)
                          $






                          share|cite|improve this answer






















                            up vote
                            7
                            down vote










                            up vote
                            7
                            down vote









                            The following matrix is a conterexample $
                            A =
                            left( beginarraycc
                            -1 & 0 \
                            0 & 1 \
                            endarray right)
                            $






                            share|cite|improve this answer












                            The following matrix is a conterexample $
                            A =
                            left( beginarraycc
                            -1 & 0 \
                            0 & 1 \
                            endarray right)
                            $







                            share|cite|improve this answer












                            share|cite|improve this answer



                            share|cite|improve this answer










                            answered Feb 5 '12 at 20:20









                            azarel

                            11k22331




                            11k22331




















                                up vote
                                6
                                down vote













                                "Most" (read: diagonalizable) matrices can be viewed simply as a list of numbers -- its eigenvalues -- in the right basis. When doing arithmetic with just this matrix (or with other matrices that diagonalize in the same basis), you just do arithmetic on the eigenvalues.



                                So, to find diagonalizable solutions to $A^2 = I$, we just need to write down a matrix whose eigenvalues satisfy $lambda^2 = 1$ -- and any such matrix will do.



                                When thinking about matrices in this way -- as a list of independent numbers -- it makes it easy to think your way through problems like this.






                                share|cite|improve this answer
















                                • 1




                                  Every matrix satisfying $A^2=I$ is diagonalizable, because either it is $pm I$ or its minimal polynomial is $(x-1)(x+1)$. The general solution is obtained by taking all diagonal matrices with entries $pm 1$ on the diagonal and conjugating by invertible matrices.
                                  – Jonas Meyer
                                  Feb 6 '12 at 5:03







                                • 2




                                  Jonas Meyer, this is only true if $char F ne 2$. Otherwise, there are such matrices which are not diagonalizable,
                                  – the L
                                  Feb 6 '12 at 8:18






                                • 1




                                  @Jonas: That's a good point to mention as an appendix, but dealing properly with non-diagonalizable matrices in this fashion is somewhat more sophisticated. The only reason I mentioned the word was so that I didn't mislead Randolf into thinking this method works (unmodified) for all matrices; e.g. that the argument I gave isn't sufficient to tell us that this (or any!) equation has only diagonalizable solutions.
                                  – Hurkyl
                                  Feb 6 '12 at 9:53






                                • 1




                                  @anonymous: Good point, e.g. $beginbmatrix1&1\ 0&1endbmatrix$. @@Hurkyl: I agree, it is best as an appendix. I appreciate your caution, but wanted to point out that your method does lead to the general solution (in the characteristic $0$ case that the OP is probably working in).
                                  – Jonas Meyer
                                  Feb 6 '12 at 15:49














                                up vote
                                6
                                down vote













                                "Most" (read: diagonalizable) matrices can be viewed simply as a list of numbers -- its eigenvalues -- in the right basis. When doing arithmetic with just this matrix (or with other matrices that diagonalize in the same basis), you just do arithmetic on the eigenvalues.



                                So, to find diagonalizable solutions to $A^2 = I$, we just need to write down a matrix whose eigenvalues satisfy $lambda^2 = 1$ -- and any such matrix will do.



                                When thinking about matrices in this way -- as a list of independent numbers -- it makes it easy to think your way through problems like this.






                                share|cite|improve this answer
















                                • 1




                                  Every matrix satisfying $A^2=I$ is diagonalizable, because either it is $pm I$ or its minimal polynomial is $(x-1)(x+1)$. The general solution is obtained by taking all diagonal matrices with entries $pm 1$ on the diagonal and conjugating by invertible matrices.
                                  – Jonas Meyer
                                  Feb 6 '12 at 5:03







                                • 2




                                  Jonas Meyer, this is only true if $char F ne 2$. Otherwise, there are such matrices which are not diagonalizable,
                                  – the L
                                  Feb 6 '12 at 8:18






                                • 1




                                  @Jonas: That's a good point to mention as an appendix, but dealing properly with non-diagonalizable matrices in this fashion is somewhat more sophisticated. The only reason I mentioned the word was so that I didn't mislead Randolf into thinking this method works (unmodified) for all matrices; e.g. that the argument I gave isn't sufficient to tell us that this (or any!) equation has only diagonalizable solutions.
                                  – Hurkyl
                                  Feb 6 '12 at 9:53






                                • 1




                                  @anonymous: Good point, e.g. $beginbmatrix1&1\ 0&1endbmatrix$. @@Hurkyl: I agree, it is best as an appendix. I appreciate your caution, but wanted to point out that your method does lead to the general solution (in the characteristic $0$ case that the OP is probably working in).
                                  – Jonas Meyer
                                  Feb 6 '12 at 15:49












                                up vote
                                6
                                down vote










                                up vote
                                6
                                down vote









                                "Most" (read: diagonalizable) matrices can be viewed simply as a list of numbers -- its eigenvalues -- in the right basis. When doing arithmetic with just this matrix (or with other matrices that diagonalize in the same basis), you just do arithmetic on the eigenvalues.



                                So, to find diagonalizable solutions to $A^2 = I$, we just need to write down a matrix whose eigenvalues satisfy $lambda^2 = 1$ -- and any such matrix will do.



                                When thinking about matrices in this way -- as a list of independent numbers -- it makes it easy to think your way through problems like this.
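This recipe is easy to check numerically. Below is a minimal sketch (not from the original answer): the invertible matrix $P$ is an arbitrary choice for illustration, and $D = \operatorname{diag}(1, -1)$ supplies eigenvalues satisfying $\lambda^2 = 1$.

```python
import numpy as np

# Conjugate D = diag(1, -1) -- eigenvalues satisfying lambda^2 = 1 --
# by an arbitrary invertible matrix P to get a non-trivial involution.
P = np.array([[2.0, 1.0],
              [1.0, 1.0]])            # det(P) = 1, so P is invertible
D = np.diag([1.0, -1.0])
A = P @ D @ np.linalg.inv(P)

print(np.allclose(A @ A, np.eye(2)))     # True: A^2 = I
print(np.allclose(A, np.eye(2)))         # False: A != I
print(np.allclose(A, -np.eye(2)))        # False: A != -I
```

Any other invertible $P$ works just as well, which is why the equation has infinitely many solutions for $n \ge 2$.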






                                answered Feb 6 '12 at 4:56









                                Hurkyl

                                • 1

                                  Every matrix satisfying $A^2=I$ is diagonalizable, because either it is $\pm I$ or its minimal polynomial is $(x-1)(x+1)$. The general solution is obtained by taking all diagonal matrices with entries $\pm 1$ on the diagonal and conjugating by invertible matrices.
                                  – Jonas Meyer
                                  Feb 6 '12 at 5:03

                                • 2

                                  Jonas Meyer, this is only true if $\operatorname{char} F \ne 2$. Otherwise, there are such matrices which are not diagonalizable.
                                  – the L
                                  Feb 6 '12 at 8:18

                                • 1

                                  @Jonas: That's a good point to mention as an appendix, but dealing properly with non-diagonalizable matrices in this fashion is somewhat more sophisticated. The only reason I mentioned the word was so that I didn't mislead Randolf into thinking this method works (unmodified) for all matrices; e.g. the argument I gave isn't sufficient to tell us that this (or any!) equation has only diagonalizable solutions.
                                  – Hurkyl
                                  Feb 6 '12 at 9:53

                                • 1

                                  @anonymous: Good point, e.g. $\begin{bmatrix}1 & 1\\ 0 & 1\end{bmatrix}$. @Hurkyl: I agree, it is best as an appendix. I appreciate your caution, but wanted to point out that your method does lead to the general solution (in the characteristic $0$ case that the OP is probably working in).
                                  – Jonas Meyer
                                  Feb 6 '12 at 15:49
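As a quick numerical check of the characteristic-$2$ example from the comments above (working with integers mod $2$, i.e. over $\mathbb{F}_2$):

```python
import numpy as np

# Over F_2 the matrix [[1,1],[0,1]] is an involution but NOT diagonalizable:
# it is a non-trivial Jordan block, yet its square is the identity mod 2.
A = np.array([[1, 1],
              [0, 1]])
print((A @ A) % 2)   # prints the 2x2 identity matrix
```

Over the reals the same matrix squares to $\begin{bmatrix}1 & 2\\ 0 & 1\end{bmatrix} \ne I$, so the counterexample genuinely depends on the characteristic being $2$.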
























                                 
