Projection of high dimensional vectors to lower dimensional space

I'm trying to project a set of 13-dimensional vectors into an $n$-dimensional space ($n = 1, 2, 3$) for visualization purposes.



Assume the vector $v = (1,2,3,4,5,6,7,8,9,10,11,12,13)$ is one such 13-dimensional data point which needs to be projected onto the $XYZ$ plane.



Going by the definitions, I should project $v$ onto the column space of $A$, where $A^T$ is



$$A^T = \left( \begin{array}{ccccccccccccc}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array} \right)$$



So, using the equation $p = A(A^TA)^{-1}A^Tv$, I end up with $p = (1,2,3,0,0,0,0,0,0,0,0,0,0)$.



So the projection simply sets the values of the vector $v$ from the $4^{\text{th}}$ position onward to zero.



I need to confirm whether the projection I have done above is correct.
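
As a quick numerical check of the computation above, here is a minimal sketch in Python (assuming NumPy is available); it builds the $13 \times 3$ matrix $A$ whose transpose is shown above and evaluates $p = A(A^TA)^{-1}A^Tv$ for the example point:

    import numpy as np

    # The 13-dimensional example point from the question
    v = np.arange(1.0, 14.0)

    # A is 13x3, with the first three standard basis vectors of R^13 as columns,
    # so A.T is exactly the 3x13 matrix written above.
    A = np.zeros((13, 3))
    A[0, 0] = A[1, 1] = A[2, 2] = 1.0

    # Orthogonal projection onto the column space of A: p = A (A^T A)^{-1} A^T v
    p = A @ np.linalg.solve(A.T @ A, A.T @ v)
    print(p)  # [1. 2. 3. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]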







asked Aug 3 '13 at 15:28 – Synex

  • What does "correct" mean for you here? It is certainly one projection from $\mathbb{R}^{13}$ to that 3-dimensional subspace, but by no means the only possible one. – Henning Makholm, Aug 3 '13 at 15:30

  • To make it general you can do the following: choose a basis $\beta = \{v_1, \dotsc, v_{13}\}$ so that you want to project onto $\operatorname{span}(v_1, v_2, v_3)$. Then write $v$ in terms of $\beta$, use the projection you described (deleting all entries except the first three), and then change back to the standard basis for the resulting vector (see the sketch after these comments). – Pratyush Sarkar, Aug 3 '13 at 15:59

  • The purpose of this projection is to find the initial projection vectors to carry out Sammon's projection. My idea of correct is whether I have correctly projected $v$ onto the $XYZ$ plane with the basis vectors $(1, 0, 0)$, $(0, 1, 0)$ and $(0, 0, 1)$. – Synex, Aug 3 '13 at 16:14
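
Following Pratyush Sarkar's comment, here is a minimal sketch of the change-of-basis procedure in Python with NumPy (the basis matrix B below is a random placeholder; in practice its columns would be whatever basis $\beta$ you choose):

    import numpy as np

    rng = np.random.default_rng(0)
    B = rng.normal(size=(13, 13))   # columns b_1, ..., b_13: a placeholder basis beta
    v = np.arange(1.0, 14.0)        # the example point

    c = np.linalg.solve(B, v)       # coordinates of v with respect to beta
    c[3:] = 0.0                     # delete all entries except the first three
    p = B @ c                       # change back to the standard basis

If $\beta$ is orthonormal this reproduces the orthogonal projection onto $\operatorname{span}(v_1, v_2, v_3)$; for a general basis it is an oblique projection along $\operatorname{span}(v_4, \dotsc, v_{13})$.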















3 Answers

Accepted answer (score 3) – Chris Culter, answered Aug 3 '13 at 16:07:

If the projection must be linear, but you have the freedom to choose a projection, and you want to preserve as much of the variance in the data as possible, you're probably looking for principal components analysis.



Edit: From this comment, it sounds like you're looking for a linear projection that you can use as an initial estimate, from which you will iteratively optimize Sammon's error. I'm not familiar with Sammon projection, but Wikipedia says that PCA may be used as an initial estimate, citing this article (PDF). So, go ahead and give PCA a shot! Unless you know something else about the data, I wouldn't simply project onto the first 3 out of 13 coordinates.
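
A minimal sketch of this suggestion in Python, assuming scikit-learn is available (the data matrix X below is a random placeholder standing in for the actual set of 13-dimensional vectors):

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 13))        # placeholder: 200 points, 13 dimensions each

    pca = PCA(n_components=3)             # keep the 3 directions of largest variance
    Y = pca.fit_transform(X)              # shape (200, 3), ready for plotting
    print(pca.explained_variance_ratio_)  # fraction of variance captured per component

The rows of Y could then serve as the initial configuration for the Sammon iteration.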






  • Your edit exactly answers my initial concern. I posted this question because I was initially unhappy about the orthogonal projection disregarding 10 coordinates. Yeah, I will give PCA a shot, and thanks for the article. – Synex, Aug 3 '13 at 18:26


















Answer (score 2) – Vedran Šego, answered Aug 3 '13 at 15:57:

To widen Henning's comment a bit, any $A := \begin{bmatrix} e_i & e_j & e_k \end{bmatrix}$, for $i,j,k \in \{1,2,\dots,13\}$ and $i \ne j \ne k \ne i$, will give you an orthogonal projection onto the 3D subspace induced by the vectors $e_i, e_j, e_k$ of the canonical basis for $\mathbb{R}^{13}$, disregarding all info from the other dimensions.



More generally, if you pick any $3$ orthonormal vectors $x_1, x_2, x_3$, then



$$A := X X^T, \quad X := \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix},$$



will be an orthogonal projection of $\mathbb{R}^{13}$ onto the subspace spanned by $x_1, x_2, x_3$. This way, you can orthogonally project onto any 3D subspace of $\mathbb{R}^{13}$, as long as you know an orthonormal basis for it.
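
A minimal sketch of this construction with NumPy (the orthonormal triple here is an arbitrary one obtained from a QR factorization of random vectors, purely for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    X, _ = np.linalg.qr(rng.normal(size=(13, 3)))  # columns x_1, x_2, x_3 are orthonormal

    A = X @ X.T              # 13x13 orthogonal projector onto span(x_1, x_2, x_3)

    v = np.arange(1.0, 14.0)
    p = A @ v                # the projection of v, still written in R^13 coordinates
    coords = X.T @ v         # the same projection expressed in (x_1, x_2, x_3) coordinates

For plotting one would typically use coords, i.e. the coefficients with respect to $x_1, x_2, x_3$, rather than the 13-dimensional vector $p$.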






Answer (score 1) – Lasagnenator, answered Aug 12 at 0:05 (edited Aug 13 at 12:10):

For perspective projections this is relatively simple, though it allows for multiple variations. We can use parametric equations:



$$x_1 = a_1 t$$
$$x_2 = a_2 t$$
$$\vdots$$
$$x_{n-1} = a_{n-1} t$$
$$x_n = a_n t$$
where $n$ is the dimension and $a_i$ is the $i$-th coordinate of the point.



To project this to $n-1$ dimensions we can simply set $x_n$ to $1$, or to some other value for a specific plane, and then solve for $t$.



Example (using your example point $(1, 2, \dots, 13)$):



$$x_1 = 1 \cdot t$$
$$x_2 = 2 \cdot t$$
$$\vdots$$
$$x_{13} = 13 \cdot t$$



We want $x_{13} = 1$, so we solve for $t$. This gives us $t = \frac{1}{13}$.



Now we plug this value of $t$ into the other equations and get our values from $x_1$ to $x_{12}$, scaled correctly for $x_{13} = 1$. Then we do the same again for $x_{12} = 1$, and so on, until we have done this for each of $x_4, \dots, x_{13}$ (or whatever specific plane you want for each dimension). The $\Bbb R^3$ projection of your $\Bbb R^{13}$ point will be $(x_1, x_2, x_3)$.



Of course, you could set any value for $x_n$ you like to get a projection onto a specific variant of the plane. However, setting the value to $0$ will not work, as every value would become $0$. This is because the focal point is at $0$ in every dimension, which allows for easier calculations. Also, this technique relies on the idea of drawing a line from the point to the origin and solving for $t$ at a specific point, which is the reason for the parametric equations. The technique can be applied to $n$-dimensional points with relative ease. These are very simple parametric equations and easy to understand, so why do people not think of this approach when projecting to lower dimensions?
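
One possible reading of this procedure as code, a minimal sketch in Python (the function name and the choice to rescale the already-scaled point at each step are assumptions about the intended iteration; it also assumes the trailing coordinate is never zero along the way):

    def perspective_project(point, target_dim=3, plane_value=1.0):
        # Repeatedly intersect the line through the origin and the current point
        # with the hyperplane x_n = plane_value, then drop that last coordinate.
        coords = [float(c) for c in point]
        while len(coords) > target_dim:
            t = plane_value / coords[-1]           # solve a_n * t = plane_value for t
            coords = [c * t for c in coords[:-1]]  # rescale and drop the last coordinate
        return coords

    # The 13-dimensional example point (1, 2, ..., 13)
    print(perspective_project(range(1, 14)))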





