$f(x) = x^TMx$, using Lagrange multipliers to prove SVD decomposition

I'm reading the existence proof of the singular value decomposition.



It considers $f:\mathbb{R}^n \to \mathbb{R}$, $f(x) = x^TMx$. It takes the gradient of $f$ and sets it equal to a multiple of the gradient of $x^Tx$. I suppose that's because the constraint is the unit sphere, which is why it uses $x^Tx = x_1^2 + \cdots + x_n^2$, right?
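
For context, writing the constraint as $g(x) = x^Tx - 1 = 0$ (the unit sphere), the Lagrange condition at a constrained extremum is
$$\nabla f(x) = \lambda\, \nabla g(x), \qquad \nabla g(x) = \nabla(x^Tx) = 2x,$$
which is presumably where that equation comes from.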



I'm trying to understand this, so I expanded $f$ with a generic matrix $M$:



$$f(x) = \begin{bmatrix}
x_1 & \cdots & x_n
\end{bmatrix}\begin{bmatrix}
a_{11} & a_{12} & \dots \\
\vdots & \ddots & \\
a_{n1} & & a_{nn}
\end{bmatrix}\begin{bmatrix}
x_1 \\
\vdots \\
x_n
\end{bmatrix} = \\ x_1(a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n) + {} \\ x_2(a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n) + {} \\
\cdots + {} \\ x_n(a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n)$$



Taking the partials to construct the gradient vector, I can see that I'll end up with:



$$\begin{bmatrix}
2a_{11}x_1 + a_{21} + \cdots + a_{n1} \\
a_{12} + 2a_{22}x_2 + \cdots + a_{n2} \\
\vdots \\
a_{1n} + a_{2n} + \cdots + 2a_{nn}x_n
\end{bmatrix}$$



Now I need to set this equal to $\lambda$ times the gradient of $x^Tx$:
$$\begin{bmatrix}
2x_1 \\
2x_2 \\
\vdots \\
2x_n
\end{bmatrix}$$



so:



$$\begin{bmatrix}
2a_{11}x_1 + a_{21} + \cdots + a_{n1} \\
a_{12} + 2a_{22}x_2 + \cdots + a_{n2} \\
\vdots \\
a_{1n} + a_{2n} + \cdots + 2a_{nn}x_n
\end{bmatrix} = \lambda \begin{bmatrix}
2x_1 \\
2x_2 \\
\vdots \\
2x_n
\end{bmatrix}$$



As an example, the first line becomes:



$2a_{11}x_1 + a_{21} + \cdots + a_{n1} = 2\lambda x_1 \implies 2\lambda x_1 - 2a_{11}x_1 = a_{21} + \cdots + a_{n1} \implies x_1(2\lambda - 2a_{11}) = a_{21} + \cdots + a_{n1}$



What should I do now? It says that I should end up with $Mu = \lambda u$.



Also, is there a more elegant way of calculating the gradients, or is it just all this mess?
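
On that last question: one way to sidestep the index bookkeeping is to let a computer algebra system take the partials for a small $n$ and read the pattern off the output. A minimal sketch, assuming Python with SymPy ($n = 3$ and the symbol names are just for illustration):

```python
import sympy as sp

n = 3
x = sp.Matrix(sp.symbols(f"x1:{n + 1}"))                         # column vector (x1, x2, x3)
M = sp.Matrix(n, n, lambda i, j: sp.Symbol(f"a{i + 1}{j + 1}"))  # generic 3x3 matrix
f = (x.T * M * x)[0, 0]                                          # the scalar x^T M x

# Take the partials mechanically instead of expanding everything by hand
grad = sp.Matrix([sp.diff(f, xi) for xi in x])
sp.pprint(sp.expand(grad))
```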







asked Aug 9 at 21:52 by Guerlando OCs, edited Aug 9 at 22:50 by Bernard
1 Answer

answered Aug 9 at 22:14 by Mike Earnest (accepted)
You made a mistake computing the gradient of

$$x_1(a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n) + {} \\ x_2(a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n) + {} \\
\vdots \\ x_n(a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n)$$
For example, the partial derivative of this with respect to $x_1$ is
$$
\underbrace{2a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n}_{\text{first row}} + \underbrace{a_{21}x_2 + \dots + a_{n1}x_n}_{\text{first term of remaining rows}}
$$
which you can recognize as the first entry of $(M+M^\top)x$. Therefore, the Lagrange multiplier equation becomes
$$
(M+M^\top)x = 2\lambda x
$$
If you are further given that $M$ is symmetric, this implies $Mx = \lambda x$.
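
As a quick numerical sanity check of $\nabla(x^TMx) = (M+M^\top)x$, here is a minimal sketch, assuming Python with NumPy (the size and seed are arbitrary): it compares a central-difference gradient against the closed form, then confirms that the symmetric case reduces to an eigenvector equation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))      # generic, not necessarily symmetric
x = rng.standard_normal(n)

f = lambda v: v @ M @ v              # f(v) = v^T M v

# Central-difference approximation of the gradient of f at x
h = 1e-6
num_grad = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(n)])
print(np.allclose(num_grad, (M + M.T) @ x, atol=1e-6))  # True

# For symmetric M, (M + M^T)x = 2*lambda*x reduces to M x = lambda x,
# so the constrained extrema of f on the unit sphere are eigenvectors.
S = (M + M.T) / 2                    # a symmetric example
w, V = np.linalg.eigh(S)
u = V[:, -1]                         # unit eigenvector for the largest eigenvalue
print(np.allclose(S @ u, w[-1] * u))  # True: S u = lambda u
```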





