Meaning of the spectral norm of a matrix
Is there an intuitive meaning for the spectral norm of a matrix? Why would an algorithm calculate the relative recovery in spectral norm between two images (i.e. one before the algorithm and the other after)? Thanks
matrices matrix-norms spectral-norm
edited Aug 21 at 11:12 by Rodrigo de Azevedo
asked Aug 29 '12 at 3:57 by val
2 Answers
The spectral norm is the maximum singular value of a matrix. Intuitively, you can think of it as the maximum 'scale' by which the matrix can 'stretch' a vector.
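To make the "maximum stretch" picture concrete, here is a minimal numerical sketch (using NumPy, which the thread itself does not assume, so treat the tooling and variable names as illustrative): the largest singular value returned by the SVD bounds the stretch ||Av||/||v|| of every unit vector, and the top right singular vector attains it.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 3))              # arbitrary example matrix

    # Spectral norm = largest singular value (NumPy returns them in descending order).
    spectral_norm = np.linalg.svd(A, compute_uv=False)[0]
    assert np.isclose(spectral_norm, np.linalg.norm(A, 2))   # built-in matrix 2-norm agrees

    # No unit vector is stretched by more than the spectral norm ...
    x = rng.standard_normal((3, 10_000))
    x /= np.linalg.norm(x, axis=0)               # random unit vectors (columns)
    stretches = np.linalg.norm(A @ x, axis=0)
    assert stretches.max() <= spectral_norm + 1e-12

    # ... and the top right singular vector attains the maximum stretch.
    _, s, Vh = np.linalg.svd(A)
    v1 = Vh[0]                                   # unit vector maximizing ||A v||
    assert np.isclose(np.linalg.norm(A @ v1), s[0])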
answered Aug 29 '12 at 4:21 by chaohuang (accepted answer), edited Aug 29 '12 at 15:57
thanks - this helps me conceptually. – val, Aug 29 '12 at 6:56

This is specifically important since noise will be amplified by this value. For a very nice and intuitive understanding of this I recommend slides 21 and 22 of ee263.stanford.edu/lectures.html. – divB, May 4 '16 at 7:35
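Expanding on the noise-amplification remark in the comment above (a sketch, not from the thread; NumPy and the variable names are assumptions): if a measurement x is perturbed by noise n, the output error A(x + n) - Ax is bounded in norm by the spectral norm times the noise norm.

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((4, 3))
    spec = np.linalg.norm(A, 2)                  # spectral norm of A

    x = rng.standard_normal(3)                   # clean input
    n = 0.01 * rng.standard_normal(3)            # input noise

    output_error = np.linalg.norm(A @ (x + n) - A @ x)
    # The noise is amplified by at most the spectral norm of A.
    assert output_error <= spec * np.linalg.norm(n) + 1e-12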
Let us consider the singular value decomposition (SVD) of a matrix $X = U S V^T$, where $U$ and $V$ are matrices containing the left and right singular vectors of $X$ in their columns, and $S$ is a diagonal matrix containing the singular values. An intuitive way to think of a norm of $X$ is in terms of the norm of the vector of singular values on the diagonal of $S$, because the singular values measure the energy of the matrix along its principal directions.

One can now extend the $p$-norm of a finite-dimensional vector to an $m \times n$ matrix by applying it to this singular value vector:
\begin{align}
\|X\|_p &= \Big( \sum_{i=1}^{\min(m,n)} \sigma_i^p \Big)^{1/p}
\end{align}
This is called the Schatten norm of $X$. Specific choices of $p$ yield commonly used matrix norms (a short numerical check of these cases follows the list):
- $p=0$: Gives the rank of the matrix (the number of non-zero singular values).
- $p=1$: Gives the nuclear norm (the sum of the singular values). This is the tightest convex relaxation of the rank.
- $p=2$: Gives the Frobenius norm (the square root of the sum of squares of the singular values).
- $p=\infty$: Gives the spectral norm (the maximum singular value).
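Here is the numerical check of those special cases referred to above (again a sketch; NumPy, the test matrix, and the helper name `schatten` are assumptions, not part of the answer). Computing the Schatten expression directly from the singular values reproduces the nuclear, Frobenius, and spectral norms, and counting nonzero singular values gives the rank.

    import numpy as np

    rng = np.random.default_rng(1)
    # Rank-2 example: product of a 5x2 and a 2x4 factor.
    X = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))

    sigma = np.linalg.svd(X, compute_uv=False)   # singular values, descending

    def schatten(sig, p):
        """Schatten p-'norm' of the singular values (a true norm only for p >= 1)."""
        return np.sum(sig ** p) ** (1.0 / p)

    assert np.isclose(schatten(sigma, 1), np.linalg.norm(X, 'nuc'))   # nuclear norm
    assert np.isclose(schatten(sigma, 2), np.linalg.norm(X, 'fro'))   # Frobenius norm
    assert np.isclose(sigma.max(), np.linalg.norm(X, 2))              # spectral norm (p -> infinity)
    assert np.sum(sigma > 1e-10) == np.linalg.matrix_rank(X)          # "p = 0": the rank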
answered Aug 29 '12 at 6:07 by Kartik Audhkhasi
thanks - can we really talk about Schatten p=0 for matrices? since we would be looking at 1/0...? – val, Aug 29 '12 at 7:02

Yes we can, and it is equal to the rank of the matrix. Can you clarify your question a bit more? – Kartik Audhkhasi, Aug 29 '12 at 13:41

Hi - I was referring to the Schatten norm equation above: the exponent is 1/p, so if p=0 we have 1/0. Thanks... p must be > 1. – val, Aug 29 '12 at 17:31

Think of $p \rightarrow 0$. Then every nonzero singular value contributes $1$, while zero singular values contribute $0$ anyway, so you end up with the number of non-zero singular values. However, for $p \in [0,1)$ the Schatten "norm" is not actually a norm, since it does not satisfy all the norm properties; it is still common practice to call it one. – Kartik Audhkhasi, Aug 29 '12 at 18:10

ok, I understand better - thanks again. – val, Aug 30 '12 at 16:17