Matrix optimization problem for sparseness and closeness.
If I have a matrix
$$\mathbf{M} = \left[\begin{array}{l} \mathbf{M}_k + \epsilon \\ \mathbf{M}_u \end{array}\right], \qquad \begin{cases} \mathbf{M}_k & \text{known} \\ \mathbf{M}_u & \text{unknown} \end{cases}$$
how can I then find a matrix $\epsilon$ that is minimal (or regularized) in some norm, such that $\mathbf{M}_u$ and $\mathbf{M}_k + \epsilon$ are sparse, and $\mathbf{M}^{-1}$ exists and is also sparse?
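One way to make the request precise is as a regularized optimization problem. This is only a hedged sketch of a possible formulation, not something stated in the question: the weights $\lambda_1, \lambda_2, \lambda_3$ and the choice of the elementwise $\ell_1$ norm as a convex surrogate for sparsity are illustrative assumptions, and the invertibility constraint makes the problem nonconvex as written.

```latex
\min_{\epsilon,\,\mathbf{M}_u}\;
  \|\epsilon\|_F
  + \lambda_1 \bigl\|\mathbf{M}_k + \epsilon\bigr\|_1
  + \lambda_2 \|\mathbf{M}_u\|_1
  + \lambda_3 \bigl\|\mathbf{M}^{-1}\bigr\|_1
\qquad \text{s.t.}\quad
  \mathbf{M} = \begin{bmatrix}\mathbf{M}_k + \epsilon \\ \mathbf{M}_u\end{bmatrix}
  \ \text{invertible.}
```

Here $\|\cdot\|_F$ measures the size of the perturbation $\epsilon$, while the $\ell_1$ terms promote sparsity of the blocks and of the inverse.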
linear-algebra reference-request soft-question nonlinear-optimization signal-processing
Just to be clear, $M_k$ is fixed and $\epsilon$ and $M_u$ are variables to be optimized over, right? – Brian Borchers, Sep 4 at 14:29
@BrianBorchers Yes. – mathreadler, Sep 4 at 14:47
asked Sep 4 at 11:46 by mathreadler (edited Sep 8 at 16:03)
1 Answer
There's a body of research on finding a sparse matrix that is an approximate inverse for a given matrix. This is used to find an approximate inverse for a sample covariance matrix, which typically isn't of full rank and thus doesn't have an exact inverse.
See for example:
Sun, Tingni, and Cun-Hui Zhang. "Sparse matrix inversion with scaled lasso." The Journal of Machine Learning Research 14.1 (2013): 3385-3418.
Let
$$U = \left[\begin{array}{c} M_k \\ 0 \end{array}\right].$$
Then find a sparse matrix $V$ such that $UV \approx I$. Here "approximately" is measured by the spectral norm, $\| UV - I \|$. The $V$ matrix would then be your $M^{-1}$.
Note that although $UV$ is approximately $I$, this doesn't mean that $V^{-1}$ will be very close to $U$ or that $V^{-1} = M$ would necessarily be sparse.
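The general idea of computing a sparse approximate inverse can be sketched column by column: for each column $v_j$ of $V$, solve a lasso problem $\min_v \frac{1}{2}\|Av - e_j\|^2 + \lambda\|v\|_1$. The sketch below uses plain ISTA (proximal gradient with soft-thresholding) in NumPy; the penalty `lam`, iteration count, and the Frobenius-style column-wise objective are illustrative choices of mine, not the scaled-lasso procedure from the cited paper.

```python
import numpy as np

def sparse_approx_inverse(A, lam=0.05, n_iter=500):
    """Column-wise lasso sketch of a sparse approximate inverse.

    For each column j, (approximately) minimizes
        0.5 * ||A v - e_j||^2 + lam * ||v||_1
    via ISTA, so the returned V is sparse and A @ V is close to I.
    """
    n = A.shape[0]
    # Step size 1/L, where L = ||A||_2^2 is the Lipschitz constant
    # of the gradient of the smooth term.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    V = np.zeros((n, n))
    I = np.eye(n)
    for _ in range(n_iter):
        grad = A.T @ (A @ V - I)      # gradient of 0.5*||A V - I||_F^2
        V = V - step * grad
        # Soft-thresholding: the proximal step for the l1 penalty.
        V = np.sign(V) * np.maximum(np.abs(V) - step * lam, 0.0)
    return V

# Example: a well-conditioned diagonal matrix recovers a sparse,
# slightly shrunken inverse (shrinkage comes from the l1 penalty).
A = 2.0 * np.eye(4)
V = sparse_approx_inverse(A)
```

Larger `lam` gives a sparser but less accurate $V$; for a genuinely rank-deficient $A$ (such as the padded $U$ above), $AV$ can only approximate $I$ on the row space of $A$.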
Interesting. But this won't let us find $\mathbf{M}_u$, will it? – mathreadler, Sep 5 at 9:40
$\epsilon$ and $M_u$ would be the difference between $V^{-1}$ and $M_k$. – Brian Borchers, Sep 5 at 11:54
But what if I also want to put constraints on them? Is that handled in the paper? – mathreadler, Sep 5 at 12:00
As I wrote in the answer, this doesn't give you a way to constrain the sparsity of $\epsilon$ and $M_u$. – Brian Borchers, Sep 5 at 12:08
answered Sep 4 at 16:08 by Brian Borchers