Meaning of the spectral norm of a matrix

Is there an intuitive meaning for the spectral norm of a matrix? Why would an algorithm calculate the relative recovery in spectral norm between two images (i.e. one before the algorithm and the other after)? Thanks







asked Aug 29 '12 at 3:57 by val, edited Aug 21 at 11:12 by Rodrigo de Azevedo




















          2 Answers

















Answer 1 (accepted, 14 votes)
answered Aug 29 '12 at 4:21 by chaohuang, edited Aug 29 '12 at 15:57










The spectral norm is the maximum singular value of a matrix. Intuitively, you can think of it as the maximum 'scale' by which the matrix can 'stretch' a vector.
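As a concrete check of this (my own sketch, not part of the original answer), the following numpy snippet computes the spectral norm from the SVD, compares it with numpy's built-in matrix 2-norm, confirms that no vector is stretched by more than that factor, and then computes the kind of relative spectral-norm error between a "before" and "after" image that the question asks about. The image arrays here are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

# Spectral norm = largest singular value (singular values come back in descending order).
sigma_max = np.linalg.svd(A, compute_uv=False)[0]
assert np.isclose(sigma_max, np.linalg.norm(A, ord=2))

# No vector is stretched by more than sigma_max.
x = rng.standard_normal((3, 1000))
stretch = np.linalg.norm(A @ x, axis=0) / np.linalg.norm(x, axis=0)
assert stretch.max() <= sigma_max + 1e-12

# Relative "recovery" error between two images (e.g. before/after an algorithm),
# measured in the spectral norm.
img_before = rng.standard_normal((64, 64))                     # hypothetical image
img_after = img_before + 0.01 * rng.standard_normal((64, 64))  # hypothetical reconstruction
rel_err = np.linalg.norm(img_before - img_after, ord=2) / np.linalg.norm(img_before, ord=2)
print(sigma_max, stretch.max(), rel_err)
```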




























          • thanks - this helps me conceptually.
            – val
            Aug 29 '12 at 6:56










          • This is specifically important since noise will be amplified by this value. For a very nice and intuitive understanding of this, I recommend slides 21 and 22 of ee263.stanford.edu/lectures.html.
            – divB
            May 4 '16 at 7:35
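To make the noise-amplification point concrete, here is a minimal sketch (my own illustration with assumed sizes and noise level, not taken from the linked slides): the extra error introduced by additive noise is bounded by the spectral norm times the size of the noise.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 50))
sigma_max = np.linalg.norm(A, ord=2)   # spectral norm

x = rng.standard_normal(50)
noise = 1e-3 * rng.standard_normal(50)

# ||A(x + n) - A x|| = ||A n|| <= sigma_max * ||n||
amplified = np.linalg.norm(A @ (x + noise) - A @ x)
bound = sigma_max * np.linalg.norm(noise)
assert amplified <= bound + 1e-12
print(f"noise grew by a factor of {amplified / np.linalg.norm(noise):.2f} (bound: {sigma_max:.2f})")
```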

















Answer 2 (7 votes)
answered Aug 29 '12 at 6:07 by Kartik Audhkhasi













Let us consider the singular value decomposition (SVD) of a matrix $X = U S V^T$, where $U$ and $V$ are matrices containing the left and right singular vectors of $X$ in their columns, and $S$ is a diagonal matrix containing the singular values. An intuitive way to think of the norm of $X$ is in terms of the norm of the singular value vector on the diagonal of $S$. This is because the singular values measure the energy of the matrix in its various principal directions.



One can now extend the $p$-norm of a finite-dimensional vector to an $m \times n$ matrix by applying it to this singular value vector:

\begin{align}
\|X\|_p &= \Big( \sum_{i=1}^{\min(m,n)} \sigma_i^p \Big)^{1/p}
\end{align}

This is called the Schatten norm of $X$. Specific choices of $p$ yield commonly used matrix norms (a small numerical check follows the list):

1. $p=0$: gives the rank of the matrix (the number of non-zero singular values).

2. $p=1$: gives the nuclear norm (the sum of the singular values). This is the tightest convex relaxation of the rank.

3. $p=2$: gives the Frobenius norm (the square root of the sum of squares of the singular values).

4. $p=\infty$: gives the spectral norm (the maximum singular value).
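Here is that check: a short numpy sketch of these special cases (my own illustration, not part of the original answer), computing each quantity directly from the singular values and comparing against numpy's built-in matrix norms where they exist.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 4))
s = np.linalg.svd(X, compute_uv=False)   # singular values, descending

rank      = np.sum(s > 1e-12)            # "p = 0": number of nonzero singular values
nuclear   = s.sum()                      # p = 1: nuclear norm
frobenius = np.sqrt((s ** 2).sum())      # p = 2: Frobenius norm
spectral  = s.max()                      # p = inf: spectral norm

assert rank == np.linalg.matrix_rank(X)
assert np.isclose(nuclear,   np.linalg.norm(X, ord='nuc'))
assert np.isclose(frobenius, np.linalg.norm(X, ord='fro'))
assert np.isclose(spectral,  np.linalg.norm(X, ord=2))
```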

























          • thanks - can we really talk about Schatten p=0 for matrices? since we would be looking at 1/0...?
            – val
            Aug 29 '12 at 7:02










          • Yes we can. And it is equal to the rank of the matrix. Can you clarify your question a bit more?
            – Kartik Audhkhasi
            Aug 29 '12 at 13:41










          • Hi - I was referring to the Schatten norm equation above: the exponent is 1/p. If p=0 we have 1/0. Thanks... p must be > 1.
            – val
            Aug 29 '12 at 17:31











          • Think of $p \rightarrow 0$. Then any nonzero singular value will contribute $1$, while $0$ singular values give $0$ anyway. Hence, you end up with the number of non-zero singular values. Note that for $p \in [0,1)$, the Schatten "norm" is not actually a norm, since it does not satisfy all the required properties; it is still common practice to call it one, though.
            – Kartik Audhkhasi
            Aug 29 '12 at 18:10
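A quick numerical illustration of this limit (my own sketch): raising example singular values to a small power $p$ and summing approaches the count of nonzero values.

```python
import numpy as np

s = np.array([3.0, 1.5, 0.5, 0.0])   # example singular values; one is zero
for p in [1.0, 0.5, 0.1, 0.01]:
    # zero singular values contribute nothing; each nonzero one tends to 1 as p -> 0
    print(p, np.sum(s[s > 0] ** p))
# the printed sums approach 3.0, the number of nonzero singular values (the rank)
```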










          • ok, I understand better - thanks again.
            – val
            Aug 30 '12 at 16:17









