Why is the non-negative matrix factorization problem non-convex?

Supposing $\mathbf{X}\in\mathbb{R}_+^{m\times n}$, $\mathbf{Y}\in\mathbb{R}_+^{m\times r}$, $\mathbf{W}\in\mathbb{R}_+^{r\times n}$, the non-negative matrix factorization problem is defined as:
$$\min_{\mathbf{Y},\mathbf{W}}\left\|\mathbf{X}-\mathbf{Y}\mathbf{W}\right\|_F^2$$



Why is this problem non-convex?










      matrices convex-optimization






      asked May 16 '13 at 11:44









      no_name

          1 Answer
          Do you have any reason to believe it is convex? In the space of nonlinear problems, convexity is the exception, not the rule. Convexity is something to be proven, not assumed.



Consider the scalar case; that is, $m=n=r=1$. Then the problem is
$$\min_{y,w\geq 0}(x-yw)^2=\min_{y,w\geq 0}\,x^2-2xyw+y^2w^2$$
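One concrete way to see the failure of convexity (a small numerical sketch, not part of the original answer): a convex function never rises above its chord, yet this scalar objective does. Take $x=1$ and the two points $(y,w)=(2,\,0.5)$ and $(0.5,\,2)$, both of which attain objective value $0$:

```python
def f(y, w, x=1.0):
    """Scalar NMF objective (x - y*w)**2 from the problem above."""
    return (x - y * w) ** 2

a = (2.0, 0.5)   # y*w = 1, so f(a) = 0
b = (0.5, 2.0)   # y*w = 1, so f(b) = 0
mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)  # midpoint (1.25, 1.25)

# Convexity would require f(mid) <= (f(a) + f(b)) / 2 = 0, but:
print(f(*mid))   # 0.31640625 > 0, so f is not jointly convex in (y, w)
```

Note that $f$ is convex in $y$ for fixed $w$ (and vice versa); it is only joint convexity that fails, which is why alternating minimization is the standard approach to NMF.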



The gradient and Hessian of $\phi_x(y,w)=x^2-2xyw+y^2w^2$ are
$$\nabla\phi_x(y,w)=\begin{bmatrix} 2yw^2 - 2xw \\ 2y^2w - 2xy \end{bmatrix}$$
$$\nabla^2\phi_x(y,w)=\begin{bmatrix} 2w^2 & 4yw - 2x \\ 4yw - 2x & 2y^2 \end{bmatrix}$$
The Hessian is not positive semidefinite for all $x,y,w\geq 0$. For example,
$$\nabla^2\phi_1(2,1)=\begin{bmatrix} 2 & 6 \\ 6 & 8 \end{bmatrix}, \quad
\lambda_{\min}(\nabla^2\phi_1(2,1))\approx -1.7082$$
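The indefiniteness of that Hessian is easy to confirm numerically (a quick NumPy sketch, not part of the original answer):

```python
import numpy as np

# Hessian of phi_x(y, w) at x = 1, (y, w) = (2, 1), as computed above
x, y, w = 1.0, 2.0, 1.0
H = np.array([[2 * w**2,          4 * y * w - 2 * x],
              [4 * y * w - 2 * x, 2 * y**2         ]])

eigvals = np.linalg.eigvalsh(H)  # eigenvalues of the symmetric matrix H
print(eigvals)  # approximately [-1.7082, 11.7082]: one negative eigenvalue,
                # so the Hessian is indefinite and phi_1 is not convex there
```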






          • Therefore, we can state that NMF is always a non-convex problem. Thank you. Very useful!
            – no_name
            May 22 '13 at 11:38







I removed the edit that claimed the gradient is "also called the Jacobian". In fact, they are not precisely synonymous. The term Jacobian is generally reserved for multivariate, vector-valued functions, in which case the Jacobian is a matrix. But even for scalar-valued functions like this one, the Jacobian and gradient differ slightly: the Jacobian is a row matrix, while the gradient is a column vector (though the elements are the same, obviously). And gradients can be computed even for functions whose inputs are not in $\mathbb{R}^n$. Thank you for the other part of your edit!
            – Michael Grant
            Nov 12 '14 at 21:53











          • @MichaelGrant I have an optimization problem too. Please make comments if you have. thx!
            – Seyhmus Güngören
            Nov 12 '14 at 22:19










          Your Answer




          StackExchange.ifUsing("editor", function ()
          return StackExchange.using("mathjaxEditing", function ()
          StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
          StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
          );
          );
          , "mathjax-editing");

          StackExchange.ready(function()
          var channelOptions =
          tags: "".split(" "),
          id: "69"
          ;
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function()
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled)
          StackExchange.using("snippets", function()
          createEditor();
          );

          else
          createEditor();

          );

          function createEditor()
          StackExchange.prepareEditor(
          heartbeatType: 'answer',
          convertImagesToLinks: true,
          noModals: false,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: 10,
          bindNavPrevention: true,
          postfix: "",
          noCode: true, onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          );



          );













           

          draft saved


          draft discarded


















          StackExchange.ready(
          function ()
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f393447%2fwhy-does-the-non-negative-matrix-factorization-problem-non-convex%23new-answer', 'question_page');

          );

          Post as a guest






























          1 Answer
          1






          active

          oldest

          votes








          1 Answer
          1






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes








          up vote
          8
          down vote



          accepted










          Do you have any reason to believe it is convex? In the space of nonlinear problems, convexity is the exception, not the rule. Convexity is something to be proven, not assumed.



          Consider the scalar case; that is, $m=n=1$. Then the problem is
          $$min_y,wgeq 0(x-yw)^2=min_y,wgeq 0x^2-2xyw+y^2w^2$$



          The gradient and Hessian of $phi_x(y,w)=x^2-2xyw-y^2w^2$ is
          $$nablaphi_x(y,w)=beginbmatrix 2yw^2 - 2xw \ 2y^2w - 2xy endbmatrix$$
          $$nabla^2phi_x(y,w)=beginbmatrix 2w^2 & 4yw - 2x \ 4yw - 2x & 2y^2 endbmatrix$$
          The Hessian is not positive semidefinite for all $x,y,wgeq 0$. For example,
          $$nabla^2phi_1(2,1)=beginbmatrix 2 & 6 \ 6 & 8 endbmatrix, quad
          lambda_min(nabla^2phi_1(2,1))=-1.7082$$






          share|cite|improve this answer






















          • Therefore, we can state that NMF is always a non-convex problem. Thank you. Very useful!
            – no_name
            May 22 '13 at 11:38







          • 1




            I removed the edit that claimed the gradient is "also called the Jacobian". In fact, they are not precisely synonymous. The Jacobian is generally reserved for multivariate, vector-valued functions, in which case the Jacobian is a matrix. But even for single-valued functions like this, the Jacobian and gradient are slightly different: the gradient is a row matrix, while the gradient is a column vector (though the elements will be the same, obviously). And gradients can be computed even for functions whose inputs are not $mathbbR^n$. Thank you for the other part of your edit!
            – Michael Grant
            Nov 12 '14 at 21:53











          • @MichaelGrant I have an optimization problem too. Please make comments if you have. thx!
            – Seyhmus Güngören
            Nov 12 '14 at 22:19














          up vote
          8
          down vote



          accepted










          Do you have any reason to believe it is convex? In the space of nonlinear problems, convexity is the exception, not the rule. Convexity is something to be proven, not assumed.



          Consider the scalar case; that is, $m=n=1$. Then the problem is
          $$min_y,wgeq 0(x-yw)^2=min_y,wgeq 0x^2-2xyw+y^2w^2$$



          The gradient and Hessian of $phi_x(y,w)=x^2-2xyw-y^2w^2$ is
          $$nablaphi_x(y,w)=beginbmatrix 2yw^2 - 2xw \ 2y^2w - 2xy endbmatrix$$
          $$nabla^2phi_x(y,w)=beginbmatrix 2w^2 & 4yw - 2x \ 4yw - 2x & 2y^2 endbmatrix$$
          The Hessian is not positive semidefinite for all $x,y,wgeq 0$. For example,
          $$nabla^2phi_1(2,1)=beginbmatrix 2 & 6 \ 6 & 8 endbmatrix, quad
          lambda_min(nabla^2phi_1(2,1))=-1.7082$$






          share|cite|improve this answer






















          • Therefore, we can state that NMF is always a non-convex problem. Thank you. Very useful!
            – no_name
            May 22 '13 at 11:38







          • 1




            I removed the edit that claimed the gradient is "also called the Jacobian". In fact, they are not precisely synonymous. The Jacobian is generally reserved for multivariate, vector-valued functions, in which case the Jacobian is a matrix. But even for single-valued functions like this, the Jacobian and gradient are slightly different: the gradient is a row matrix, while the gradient is a column vector (though the elements will be the same, obviously). And gradients can be computed even for functions whose inputs are not $mathbbR^n$. Thank you for the other part of your edit!
            – Michael Grant
            Nov 12 '14 at 21:53











          • @MichaelGrant I have an optimization problem too. Please make comments if you have. thx!
            – Seyhmus Güngören
            Nov 12 '14 at 22:19












          up vote
          8
          down vote



          accepted







          up vote
          8
          down vote



          accepted






          Do you have any reason to believe it is convex? In the space of nonlinear problems, convexity is the exception, not the rule. Convexity is something to be proven, not assumed.



          Consider the scalar case; that is, $m=n=1$. Then the problem is
          $$min_y,wgeq 0(x-yw)^2=min_y,wgeq 0x^2-2xyw+y^2w^2$$



          The gradient and Hessian of $phi_x(y,w)=x^2-2xyw-y^2w^2$ is
          $$nablaphi_x(y,w)=beginbmatrix 2yw^2 - 2xw \ 2y^2w - 2xy endbmatrix$$
          $$nabla^2phi_x(y,w)=beginbmatrix 2w^2 & 4yw - 2x \ 4yw - 2x & 2y^2 endbmatrix$$
          The Hessian is not positive semidefinite for all $x,y,wgeq 0$. For example,
          $$nabla^2phi_1(2,1)=beginbmatrix 2 & 6 \ 6 & 8 endbmatrix, quad
          lambda_min(nabla^2phi_1(2,1))=-1.7082$$






          share|cite|improve this answer














          Do you have any reason to believe it is convex? In the space of nonlinear problems, convexity is the exception, not the rule. Convexity is something to be proven, not assumed.



          Consider the scalar case; that is, $m=n=1$. Then the problem is
          $$min_y,wgeq 0(x-yw)^2=min_y,wgeq 0x^2-2xyw+y^2w^2$$



          The gradient and Hessian of $phi_x(y,w)=x^2-2xyw-y^2w^2$ is
          $$nablaphi_x(y,w)=beginbmatrix 2yw^2 - 2xw \ 2y^2w - 2xy endbmatrix$$
          $$nabla^2phi_x(y,w)=beginbmatrix 2w^2 & 4yw - 2x \ 4yw - 2x & 2y^2 endbmatrix$$
          The Hessian is not positive semidefinite for all $x,y,wgeq 0$. For example,
          $$nabla^2phi_1(2,1)=beginbmatrix 2 & 6 \ 6 & 8 endbmatrix, quad
          lambda_min(nabla^2phi_1(2,1))=-1.7082$$







          share|cite|improve this answer














          share|cite|improve this answer



          share|cite|improve this answer








edited Nov 12 '14 at 21:49
answered May 16 '13 at 12:35
Michael Grant










