Plücker embedding - Two definitions

While reading several papers on the topic of the Grassmannian, I came across two definitions of the Plücker embedding.
One is given as
$$ \varphi\colon \mathbb{A}^{n \cdot d} \rightarrow \mathbb{P}^{\binom{n}{d}-1}, \qquad A \mapsto \bigl(\det(A^{(I)})\bigr)_{I},$$
where $A^{(I)}$ denotes the submatrix of $A$ obtained by choosing the columns indexed by $I$. (I know that the affine space of this dimension can be identified with the space of $d \times n$ matrices. Moreover, the map is only well defined on matrices of rank $d$, which gives the connection to elements of the Grassmannian.) The other one is given as



$$\psi\colon \operatorname{Gr}(d,n) \rightarrow \mathbb{P}^{\binom{n}{d}-1}, \qquad U \mapsto [u_1 \wedge \dotsb \wedge u_d],$$
where $u_1, \dotsc, u_d$ is a basis of the subspace $U$.



My question concerns how these two maps are related in projective space, or whether they are related at all. I assume there must be a connection, since the authors of the papers I have read use both of them when talking about the Plücker embedding.



Thank you for your help.










algebraic-geometry projective-space grassmannian

asked Sep 3 at 13:05 · SallyOwens
1 Answer
The idea here is that the vectors $u_i$, $i=1,2,\ldots,d$, are the rows of the matrix $A$. More precisely, if $u_i=(a_{i1},a_{i2},\ldots,a_{in})$, then the coefficient of
$e_{i_1}\wedge e_{i_2}\wedge\cdots\wedge e_{i_d}$ in the wedge product $u_1\wedge u_2\wedge\cdots\wedge u_d$ is equal to $\det(A^{(I)})$, where $I=\{i_1<i_2<\cdots<i_d\}$.



Consider the paper & pencil example of $n=4$, $d=2$ with $u_1=(a_1,a_2,a_3,a_4)$, $u_2=(b_1,b_2,b_3,b_4)$, when
$$
\begin{aligned}
u_1\wedge u_2&=(a_1e_1+a_2e_2+a_3e_3+a_4e_4)\wedge(b_1e_1+b_2e_2+b_3e_3+b_4e_4)\\
&=\sum_{i=1}^4\sum_{j=1}^4 a_ib_j\, e_i\wedge e_j\\
&=(a_1b_2-a_2b_1)e_1\wedge e_2+(a_1b_3-a_3b_1)e_1\wedge e_3+(a_1b_4-a_4b_1)e_1\wedge e_4\\
&\quad+(a_2b_3-a_3b_2)e_2\wedge e_3+(a_2b_4-a_4b_2)e_2\wedge e_4+(a_3b_4-a_4b_3)e_3\wedge e_4.
\end{aligned}
$$
See the six $2\times2$ minors of the matrix
$$
A=\begin{pmatrix}a_1&a_2&a_3&a_4\\ b_1&b_2&b_3&b_4\end{pmatrix}
$$
emerging?!
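If it helps, the identification of wedge coefficients with minors can be checked by machine. Here is a minimal sketch, assuming Python with SymPy (my addition, not part of the original answer): it reads off the coefficient of each $e_i\wedge e_j$ exactly as in the expansion above and compares it with the corresponding $2\times2$ minor of $A$.

```python
# Minimal sketch (assumes Python + SymPy); illustrates the n = 4, d = 2 example above.
from itertools import combinations
from sympy import Matrix, symbols, expand, simplify

a1, a2, a3, a4, b1, b2, b3, b4 = symbols('a1 a2 a3 a4 b1 b2 b3 b4')
A = Matrix([[a1, a2, a3, a4],
            [b1, b2, b3, b4]])

# Coefficient of e_i ^ e_j (i < j) in u1 ^ u2, read off from the expansion:
# the (i, j) term contributes a_i*b_j and the (j, i) term contributes -a_j*b_i.
for i, j in combinations(range(4), 2):
    coeff = expand(A[0, i] * A[1, j] - A[0, j] * A[1, i])
    minor = A[:, [i, j]].det()   # det of the 2 x 2 submatrix A^(I) with I = {i, j}
    assert simplify(coeff - minor) == 0

print("all six coefficients of u1 ^ u2 equal the corresponding 2 x 2 minors of A")
```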




For the purposes of introducing coordinates on the Grassmannian we associate a $d$-dimensional subspace $V$ with any $d\times n$ matrix $A$ that has $V$
as its row space. It is essential that:



• Two matrices $A$ and $A'$ share the same row space if and only if there exists an invertible $d\times d$ matrix $M$ such that $A'=MA$.

• When we calculate the wedge products of the rows, every coordinate of the wedge product of the rows of $A'$ is obtained by multiplying the corresponding coordinate of the wedge product of the rows of $A$ by $\det M$. As homogeneous coordinates in projective space the two wedge products therefore refer to the same point, so the mapping $\operatorname{Gr}(d,n)\to\Bbb{P}^N$, $N=\binom{n}{d}-1$, is well defined (see the sketch after this list for a numerical illustration).

• The easiest way of proving the claim in the previous bullet is to verify it by direct computation for the three types of elementary matrices, and then to write $M$ as a product of elementary matrices.
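As a numerical illustration of the second bullet, here is a small sketch, again assuming Python with SymPy (my addition): it replaces $A$ by $A'=MA$ for a random invertible $M$ and checks that every maximal minor, i.e. every Plücker coordinate, gets multiplied by $\det M$.

```python
# Sketch of the det(M) scaling from the second bullet (assumes Python + SymPy).
from itertools import combinations
from sympy import randMatrix

d, n = 2, 4
A = randMatrix(d, n, min=-5, max=5)      # a random d x n matrix
M = randMatrix(d, d, min=-5, max=5)
while M.det() == 0:                      # insist on an invertible M
    M = randMatrix(d, d, min=-5, max=5)
Ap = M * A                               # A' = M A has the same row space as A

def plucker(B):
    """List of maximal minors det(B^(I)), I running over the d-element column sets."""
    return [B[:, list(I)].det() for I in combinations(range(n), d)]

assert plucker(Ap) == [M.det() * p for p in plucker(A)]
print("Plucker coordinates of A' = det(M) * (Plucker coordinates of A)")
```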

Also, proving that $\varphi$ and $\psi$ really give the same mapping may be easiest to do by induction on $d$. At the induction step you then get the usual expansion of $(d+1)\times(d+1)$ determinants as linear combinations of $d\times d$ determinants.
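The expansion mentioned at the induction step can also be verified symbolically. Below is a minimal sketch, assuming Python with SymPy (my addition): for a generic $3\times3$ matrix it checks that the determinant agrees with its cofactor expansion along the first row, i.e. with a linear combination of $2\times2$ determinants.

```python
# Sketch of the cofactor (Laplace) expansion used at the induction step
# (assumes Python + SymPy; d + 1 = 3 here to keep the output readable).
from sympy import Matrix, MatrixSymbol, simplify

B = Matrix(MatrixSymbol('b', 3, 3))   # generic symbolic 3 x 3 matrix

# det(B) = sum_j (-1)^j * b[0, j] * det(B with row 0 and column j deleted)
expansion = sum((-1) ** j * B[0, j] * B.minor_submatrix(0, j).det()
                for j in range(3))

assert simplify(expansion - B.det()) == 0
print("det(B) equals its first-row cofactor expansion into 2 x 2 minors")
```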






answered Sep 4 at 10:31 · Jyrki Lahtonen