Let $V$ be an inner product space and $W$ a finite-dimensional linear subspace with orthonormal basis $\left\{w_1,\dots,w_n\right\}$

Let $V$ be an inner product space, and let $W$ be a finite-dimensional linear subspace of $V$ with orthonormal basis $\{ w_1, \dots, w_n \}$. Define $\sigma : V \to W$ by
$$\sigma(v) = \sum_{i=1}^n \langle v,w_i\rangle w_i.$$
Show that $\|\sigma(v) - v\| < \|w - v\|$ for all $v \in V$ and for all $w \in W$ with $w \neq \sigma(v)$.




Take an arbitrary $v \in V$ such that $\sigma(v) \neq w$; then
\begin{align*}
\|\sigma(v) - v\| &= \biggl\|\sum_{i=1}^n \langle v,w_i\rangle w_i - v \biggr\| \\
&\leq \sum_{i=1}^n \|\langle v,w_i\rangle w_i - v\|.
\end{align*}
I don't know how to use the fact that $\{ w_1, \dots, w_n \}$ is an orthonormal basis for $W$. I know this means that $\langle w_i , w_j \rangle = 1$ if $i = j$ and $0$ otherwise. Also, $\sigma(v) \in W$, so we should be able to write $\sigma(v) = \alpha_1 w_1 + \alpha_2 w_2 + \dots + \alpha_n w_n$ where the $\alpha_i$ are scalars. Any help would be appreciated.
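As a sanity check, here is a minimal numerical sketch, assuming the standard dot product on $\mathbb{R}^3$; the basis vectors and the vector $v$ below are made up for the illustration and are not part of the problem.

    import numpy as np

    # Illustrative setup: W = span{w1, w2} inside R^3, where {w1, w2} is
    # orthonormal under the standard dot product.
    w1 = np.array([1.0, 0.0, 0.0])
    w2 = np.array([0.0, 1.0, 0.0])
    v = np.array([3.0, -2.0, 5.0])

    # sigma(v) = sum_i <v, w_i> w_i
    sigma_v = np.dot(v, w1) * w1 + np.dot(v, w2) * w2

    # sigma(v) - v is orthogonal to each basis vector of W:
    print(np.dot(sigma_v - v, w1), np.dot(sigma_v - v, w2))  # 0.0 0.0

    # and sigma(v) is strictly closer to v than other vectors of W:
    rng = np.random.default_rng(0)
    for _ in range(5):
        a, b = rng.normal(size=2)    # random w = a*w1 + b*w2 in W
        w = a * w1 + b * w2
        assert np.linalg.norm(sigma_v - v) < np.linalg.norm(w - v)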










Tags: linear-algebra, inner-product-space






asked Sep 10 at 20:51 by Dragonite; edited Sep 10 at 21:54 by egreg
2 Answers


Accepted answer (score 2):
The key observation is that $\sigma(v)-v \perp W$.

Indeed, $\langle\sigma(v),w_i\rangle = \langle v,w_i\rangle$ for each basis element $w_i$ of $W$.

Now, $w-v = (w-\sigma(v)) + (\sigma(v)-v)$, and these two summands are orthogonal to each other by the above, since $w-\sigma(v) \in W$, so we have
$$\|w-v\|^2 = \|w-\sigma(v)\|^2 + \|\sigma(v)-v\|^2.$$
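Since $w \neq \sigma(v)$ by assumption, the first term on the right-hand side is strictly positive, hence
$$\|w-v\|^2 > \|\sigma(v)-v\|^2 \quad\text{and so}\quad \|w-v\| > \|\sigma(v)-v\|,$$
which is the required strict inequality.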




On why $\sigma(v) - v \perp W$: this means that $\langle \sigma(v) - v , w_i \rangle = 0$ for all $1 \leq i \leq n$. For each fixed $i$, note that
\begin{align*}
\langle \sigma(v) - v , w_i \rangle &= \langle \sigma(v) , w_i \rangle - \langle v , w_i \rangle \\
&= \Bigl\langle \sum_{j=1}^n \langle v, w_j \rangle w_j ,\; w_i \Bigr\rangle - \langle v , w_i \rangle \\
&= \langle v , w_i \rangle \langle w_i , w_i \rangle - \langle v , w_i \rangle && \text{(orthonormality kills the $j \neq i$ terms)} \\
&= \langle v , w_i \rangle - \langle v , w_i \rangle \\
&= 0.
\end{align*}
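By linearity, orthogonality to each basis vector extends to all of $W$: writing an arbitrary $w \in W$ as $w = \sum_{i=1}^n \alpha_i w_i$,
$$\langle \sigma(v)-v ,\; w \rangle = \sum_{i=1}^n \overline{\alpha_i}\, \langle \sigma(v)-v ,\; w_i \rangle = 0$$
(with the usual convention that the inner product is conjugate-linear in its second argument; over $\mathbb{R}$ the conjugates disappear). This is exactly the statement $\sigma(v)-v \perp W$.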






answered Sep 10 at 21:12 by Berci; edited Sep 11 at 20:28 by Dragonite






















• Why is $\sigma(v) - v \perp W$? What in the question gives that away? I must be missing a foundational concept; granting that, I see how the rest of the argument is formed. Thank you!
  – Dragonite, Sep 11 at 12:45

















Second answer (score 1):













You can write any vector of $V$ as the sum of a vector that lies in $W$ and a vector that is orthogonal to $W$. Geometrically, you are trying to show that the orthogonal projection of $v$ onto $W$ is the vector in $W$ closest to $v$. Think of $v$ as one leg of a triangle and start connecting $v$ to vectors $w$ in $W$. You should be able to convince yourself that the distance from $w$ to $v$ is shortest when the triangle has a right angle; the argument in the accepted answer proves it.
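A minimal numerical illustration of this right-triangle picture, assuming a randomly generated two-dimensional $W \subset \mathbb{R}^4$ (all data below are made up for the example): the Pythagorean identity from the accepted answer holds, so moving $w$ away from $\sigma(v)$ can only lengthen $\|w-v\|$.

    import numpy as np

    rng = np.random.default_rng(1)

    # Random orthonormal basis {w1, w2} of a 2-dimensional W inside R^4, via QR.
    Q, _ = np.linalg.qr(rng.normal(size=(4, 2)))
    w1, w2 = Q[:, 0], Q[:, 1]

    v = rng.normal(size=4)
    sigma_v = np.dot(v, w1) * w1 + np.dot(v, w2) * w2   # projection of v onto W

    w = 2.0 * w1 - 3.0 * w2                             # some other vector in W
    lhs = np.linalg.norm(w - v) ** 2
    rhs = np.linalg.norm(w - sigma_v) ** 2 + np.linalg.norm(sigma_v - v) ** 2
    print(np.isclose(lhs, rhs))  # True: right angle at sigma(v)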






answered Sep 10 at 22:28 by Joel Pereira



















