Matrix optimization problem for sparseness and closeness.

If I have a matrix
$$\mathbf M= \left[\begin{array}{l}\mathbf M_k+\boldsymbol\epsilon\\ \mathbf M_u\end{array}\right], \qquad \begin{cases}\mathbf M_k & \text{known}\\ \mathbf M_u & \text{unknown,}\end{cases}$$
how can I then find $\boldsymbol\epsilon$ (a matrix) that is minimal (or regularized) in some norm, such that $\mathbf M_u$ and $\mathbf M_k+\boldsymbol\epsilon$ are sparse, and such that $\mathbf M^{-1}$ exists and is also sparse?










linear-algebra reference-request soft-question nonlinear-optimization signal-processing






asked Sep 4 at 11:46 by mathreadler, edited Sep 8 at 16:03











  • Just to be clear, $M_k$ is fixed and $\epsilon$ and $M_u$ are variables to be optimized over, right?
    – Brian Borchers
    Sep 4 at 14:29










  • @BrianBorchers Yes.
    – mathreadler
    Sep 4 at 14:47
















1 Answer
There's a body of research on finding a sparse matrix that is an approximate inverse of a given matrix. This is used, for example, to find an approximate inverse of a sample covariance matrix, which typically isn't of full rank and thus doesn't have an exact inverse.



See for example:



Sun, Tingni, and Cun-Hui Zhang. "Sparse matrix inversion with scaled lasso." The Journal of Machine Learning Research 14.1 (2013): 3385-3418.



Let



$$U=\left[\begin{array}{c} M_k \\ 0 \end{array}\right]$$



Then find a sparse matrix $V$ such that



$UV \approx I$.



Here "approximately" is measured by the spectral norm, $| UV - I |$. The $V$ matrix would then be your $M^-1$.



Note that although $UV$ is approximately $I$, this doesn't mean that $V^{-1}$ will be very close to $U$ or that $V^{-1}=M$ would necessarily be sparse.
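
For illustration, here is a minimal sketch of this idea in Python/NumPy (a toy column-wise $\ell_1$-regularized least-squares solve via ISTA, not the scaled-lasso procedure from the paper). The function names, matrix sizes, penalty `lam`, and iteration count are all placeholders:

```python
import numpy as np

def soft_threshold(x, t):
    """Entrywise soft-thresholding: the proximal operator of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_approx_inverse(U, lam=0.05, n_iter=500):
    """Find a sparse V with U @ V close to I by solving, column by column,
       min_v 0.5*||U v - e_j||^2 + lam*||v||_1 with ISTA steps."""
    m, n = U.shape
    step = 1.0 / np.linalg.norm(U, 2) ** 2   # 1 / Lipschitz constant of the gradient
    V = np.zeros((n, m))
    for j in range(m):
        e_j = np.zeros(m)
        e_j[j] = 1.0
        v = np.zeros(n)
        for _ in range(n_iter):
            grad = U.T @ (U @ v - e_j)
            v = soft_threshold(v - step * grad, step * lam)
        V[:, j] = v
    return V

# Toy example with made-up sizes: U plays the role of [M_k; 0] above.
rng = np.random.default_rng(0)
M_k = rng.normal(size=(3, 5))                 # hypothetical known block
U = np.vstack([M_k, np.zeros((2, 5))])        # pad with zero rows as in the answer
V = sparse_approx_inverse(U)
print("nonzeros in V:", np.count_nonzero(np.abs(V) > 1e-8))
print("||UV - I||_2 =", np.linalg.norm(U @ V - np.eye(5), 2))
```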






answered Sep 4 at 16:08 by Brian Borchers




















  • Interesting. But this won't let us find $\mathbf M_u$, will it?
    – mathreadler
    Sep 5 at 9:40










  • $\epsilon$ and $M_u$ would be the difference between $V^{-1}$ and $M_k$ (see the sketch after these comments).
    – Brian Borchers
    Sep 5 at 11:54










  • But what if I also want to put constraints on them? Is that handled in the paper?
    – mathreadler
    Sep 5 at 12:00










  • As I wrote in the answer, this doesn't give you a way to constrain the sparsity of $\epsilon$ and $M_u$.
    – Brian Borchers
    Sep 5 at 12:08
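
As a small follow-up to the exchange above, a sketch (with made-up numbers, assuming the same Python/NumPy setting as the earlier snippet) of how $\epsilon$ and $M_u$ would be read off once a nonsingular sparse $V$ is in hand:

```python
import numpy as np

# Hypothetical sizes: the top k rows of M correspond to M_k + eps, the rest to M_u.
k = 2
M_k = np.array([[2.0, 0.0, 1.0, 0.0],
                [0.0, 3.0, 0.0, 0.0]])          # known block (k x n), made up
V = np.diag([0.5, 1.0 / 3.0, 1.0, 0.25])        # stand-in for a sparse, invertible V
M = np.linalg.inv(V)                            # candidate M = V^{-1}
eps = M[:k, :] - M_k                            # perturbation of the known block
M_u = M[k:, :]                                  # recovered unknown block
print("eps =", eps, sep="\n")
print("M_u =", M_u, sep="\n")
```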









