Large invertible submatrix in random sparse matrices
The Problem
Informal statement of the problem: consider a natural distribution $D_n$ over very sparse $n \times n$ matrices over the binary field $\mathbb{F}_2$; that is, a matrix sampled from $D_n$ contains a constant number of $1$s per row. How likely is a matrix sampled from $D_n$ to contain a large invertible submatrix?
More formally, consider the two "natural distributions" over $n \times n$ matrices in $\mathsf{M}_{n \times n}(\mathbb{F}_2)$:
- $D^0_n$ samples each entry of the matrix independently from the Bernoulli distribution with probability $p_n = d/n$, where $d$ is a small constant (that is, each entry of a sampled matrix is $1$ with probability $d/n$, and $0$ with probability $1-d/n$).
- $D^1_n$ samples each row of the matrix independently; to sample the $i$th row, pick a uniformly random size-$d$ subset $S_i$ of $\{1, \dots, n\}$. The subset $S_i$ gives the positions of the $1$s in the $i$th row (and $\{1, \dots, n\} \setminus S_i$ the positions of the $0$s).
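For concreteness, the two distributions can be sampled as follows (a minimal Python sketch; the function names are mine):

```python
import random

def sample_D0(n, d):
    """D^0_n: each entry is 1 independently with probability d/n."""
    p = d / n
    return [[1 if random.random() < p else 0 for _ in range(n)]
            for _ in range(n)]

def sample_D1(n, d):
    """D^1_n: each row has exactly d ones, in uniformly random positions."""
    M = [[0] * n for _ in range(n)]
    for row in M:
        for j in random.sample(range(n), d):  # random size-d subset S_i
            row[j] = 1
    return M
```

Note that under $D^0_n$ a row has $d$ ones only on average (its weight is $\mathrm{Binomial}(n, d/n)$), while under $D^1_n$ the row weight is exactly $d$.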
Then, consider the following conjecture (with $b=0$ and $b=1$ giving two variants, depending on the choice of the distribution): there exist constants $\gamma < 1$ and $N$ such that for any $n > N$, the probability that a random matrix sampled from $D^b_n$ contains an invertible submatrix of size $\gamma n \times \gamma n$ is at least $1-f(n)$. Here, $f(n)$ is some fixed function that goes to $0$ as $n$ grows, e.g. an inverse polynomial or an inverse exponential.
Question
What is known about the above conjecture? Is it known to hold for one of $D^0_n, D^1_n$? Alternatively, is it known to hold if we relax the requirement to containing an invertible submatrix of size $\gamma n \times \gamma n$ with constant probability (instead of probability going to $1$ as $n$ increases)?
I'm specifically interested in the case $d = 5$ (that is, the sparse matrices have on average five $1$s per row), in case this simplifies the problem in any way. I would also be happy with any partial answer, giving pointers to relevant literature, relating the problem to existing problems, or providing any clues.
Thoughts on the question
I have not found any literature directly addressing the problem above. There is, however, a rather large body of research on the rank of random less-sparse matrices, namely, $n \times n$ matrices containing an average of $O(\log n)$ $1$s per row (as opposed to a constant number in my scenario).
More specifically, this paper shows that the rank of random (Bernoulli) sparse matrices is $n - O(1)$ with probability $1-o(1)$, where the Bernoulli probability is $p_n = c\log n/n$ for some constant $c>0$. A similar result for any field is proven in this other paper. However, it is not clear to me whether these results can be extended to guarantee a rank of $\gamma \cdot n$ for some constant $\gamma$, with probability $1-o(1)$, in the setting $p_n = d/n$ for a small constant $d$. Nor do I know whether these results can be extended to the stronger statement that a random sparse matrix contains a large invertible submatrix (as opposed to merely having large rank).
In another paper (which I could not find again), there was a more fine-grained analysis of the rank of a random $O(\log n)$-sparse matrix over $\mathbb{F}_2$. Unfortunately, replacing $O(\log n)$ by a constant in their calculations only yields the statement that the number of dependencies in a random sparse matrix is at most $O(n)$ (with probability $1-o(1)$); it does not allow one (as far as I could see) to prove the stronger statement that the number of dependencies is upper bounded by $(1-\gamma) \cdot n$ for some constant $0 < \gamma < 1$. And this is still only about the rank anyway.
Note
I've not verified this specific conjecture experimentally, but I've done so (well, some coauthors of mine have done so) for a more complex distribution over sparse matrices (for context, it arose when analyzing the success probability of a subexponential-time attack on a cryptographic pseudorandom generator), and the conjecture seemed to hold (with a constant $\gamma$ above $0.9$) for large values of $n$ (a few hundred). The sampled sparse matrix always contained a $\gamma n \times \gamma n$ invertible submatrix in our experiments; the same seems very likely to hold for the simpler distributions I consider in this question.
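For anyone wanting to run such an experiment for $D^1_n$ itself, here is a minimal sketch (my own code, not the original experiments mentioned above). It relies on the standard fact that a rank-$r$ matrix always contains an $r \times r$ invertible submatrix (pick $r$ linearly independent rows, then $r$ linearly independent columns among those), so the $\mathbb{F}_2$-rank divided by $n$ directly lower-bounds the best achievable $\gamma$:

```python
import random

def gf2_rank(rows):
    """Rank over F_2 of a matrix whose rows are given as int bitmasks,
    via Gaussian elimination: eliminate each row's leading bit against
    the stored pivot rows, or store the row as a new pivot."""
    pivots = {}  # leading-bit position -> pivot row
    for row in rows:
        while row:
            lead = row.bit_length() - 1
            if lead in pivots:
                row ^= pivots[lead]  # clear the leading bit
            else:
                pivots[lead] = row
                break
    return len(pivots)

def sample_D1_rows(n, d):
    """One D^1_n sample: n rows, each with exactly d ones, as bitmasks."""
    rows = []
    for _ in range(n):
        r = 0
        for j in random.sample(range(n), d):
            r |= 1 << j
        rows.append(r)
    return rows

if __name__ == "__main__":
    n, d, trials = 500, 5, 10
    ratios = [gf2_rank(sample_D1_rows(n, d)) / n for _ in range(trials)]
    # rank/n lower-bounds gamma: the matrix contains an invertible
    # submatrix of size rank x rank
    print(f"min rank/n over {trials} trials: {min(ratios):.3f}")
```

Note that this only certifies the conjecture's submatrix via the rank; a direct search over $\gamma n \times \gamma n$ submatrices would be far more expensive.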
linear-algebra probability matrices random-matrices sparse-matrices
asked Sep 6 at 11:08
Geoffroy Couteau