Isn't the estimate for the smallest counterexample of the BPSW-test far too optimistic?
Here:
http://mathworld.wolfram.com/Baillie-PSWPrimalityTest.html
it is described how the BPSW test was verified. If I understand the description correctly, the verification was circular, because intermediate probable primes were themselves tested with this test.
Despite this, it is estimated that the smallest counterexample should have at least $10\,000$ digits.
The argument apparently is that no one has found a counterexample in some years. But counterexamples are surely extremely difficult to find; moreover, I doubt that many people have actually searched for counterexamples.
Isn't this estimate far too optimistic?
I read an article giving a heuristic argument that for every $\epsilon>0$ and sufficiently large $x$ the number of counterexamples below $x$ should be much larger than $x^{1-\epsilon}$. Of course, this does not rule out that the smallest counterexample is huge.
In fact there are indications that counterexamples are very rare, but the estimate in the article seems unrealistic to me.
An additional question: considering the MathWorld article, how sure can we be that the BPSW test is correct for all numbers up to $2^{64}$ (as seems to be widely believed)?
number-theory elementary-number-theory primality-test
This seems to be the source for this claim: groups.google.com/d/msg/sci.crypt/a0vacUwax5c/ZvjJn0sy7EgJ. It looks like an off-hand comment (it's ungrammatical), and it's not at all clear that the sentence about the $10\,000$ digits is based on the preceding information about Primo; in fact, the fact that it starts with "in fact" seems to indicate that it's based on other information (assuming that the ungrammatical "presumably" was meant to read "presumed"). I agree with you that absent additional information, the claim seems unjustified and overly optimistic. – joriki, Sep 2 at 11:36

No, I don't think it's circular. The intermediate prime is first tested only probabilistically in order not to waste time if it turns out to be composite. Then, at the very end, if everything else works out and the proof only depends on the primality of the intermediate prime, it's properly proved to be a prime (namely recursively with the same algorithm). See this section in the Wikipedia article (in particular the last paragraph in the section). – joriki, Sep 2 at 12:14

I'm not sure what that's based on. The article says that as of June 2009, it was confirmed that there are no BPSW pseudoprimes up to $10^{17}\approx2^{56.5}$, so that's just about $\frac{1}{200}$ of the numbers up to $2^{64}$. Unless the Primo test has been applied about $10^{19}$ times (which might be, but seems unlikely), I don't see why there shouldn't be a BPSW pseudoprime below $2^{64}$. I suspect the belief is mostly heuristic, in that the test has been so successful that it seems unlikely to fail if we just increase the numbers by a factor of $200$. – joriki, Sep 2 at 12:36

I don't know anything about that myself, but I did find this relevant web page. "Furthermore, analysis ($24$ October $2009$) by Gilchrist of an extension of Feitsma's database to $2^{64}$ found no BPSW pseudoprime (standard or strong) below $2^{64}$ (approximately $1.8446744\cdot10^{19}$). This lower bound of $2^{64}$ for any BPSW pseudoprime has since been separately verified ($11$ July $2011$) by Charles Greathouse; however, he also used Feitsma's database, so a completely independent check has not yet been carried out." – joriki, Sep 2 at 13:19

"Note, however, that Carl Pomerance ($1984$) presented a heuristic argument that an infinite number of counterexamples exist to the standard test (and presumably to the strong test as well, based on similar reasoning), and even that (for sufficiently large $x$, dependent on $\mu$) the number of BPSW pseudoprimes $< x$ exceeds $x^{1-\mu}$, where $\mu$ is an arbitrarily small pre-assigned positive number. Nevertheless, not a single BPSW pseudoprime has yet been found. Consequently, the BPSW test carries an aura of dependability (justified or not) exceeding that of competing algorithms [...]." – joriki, Sep 2 at 13:21
asked Sep 2 at 10:54
Peter
1 Answer
Determinism to $2^{64}$
This has been done with the Galway/Feitsma database of all 64-bit base-2 pseudoprimes. To my knowledge nobody has independently reproduced this data, either with the same methods or with different ones. Even with their methods for reducing the search space, it is a massive amount of computation. Naive brute force suffices up to $2^{50}$ or thereabouts, but that's a small portion of the 64-bit range.
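At toy scale, the brute-force approach looks like the sketch below (my own illustration, not the Galway/Feitsma code): run a base-2 strong probable-prime test over a range, compare against trial-division primality, and collect the composites that slip through.

```python
def is_sprp2(n):
    """Strong probable-prime test to base 2 (one Miller-Rabin round)."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(2, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

def is_prime(n):
    """Trial division; fine at this toy scale."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

# Composites below 10^4 that nevertheless pass the base-2 strong test
limit = 10_000
spsp2 = [n for n in range(3, limit, 2) if is_sprp2(n) and not is_prime(n)]
print(spsp2)  # [2047, 3277, 4033, 4681, 8321]
```

The real search extends this list to $2^{64}$, which is exactly why the clever space-reduction methods (and their verification) matter.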
They have described the methods used, and a few people have looked closely at them and believe the method is sound. The code they used was reviewed by more than one person, and some double checks were performed. Independent lists up to $10^{13}$ have been made, but I haven't seen anything past that (the lists up to $10^{15}$ and $10^{17}$ were by the same authors as the list to $2^{64}$).
I have tested numerous variations of Lucas and Frobenius tests using the SPSP-2 list and have found no correctly coded variation in use that has a counterexample. So, on the assumptions that the base-2 pseudoprime database is correct and the BPSW test is implemented correctly, we know there are no 64-bit counterexamples.
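The cross-checking idea can be illustrated at small scale (hardcoded values, not drawn from the Feitsma database): start from known base-2 strong pseudoprimes and see which survive a second test. Here a base-3 Fermat check stands in for the strong Lucas test that BPSW actually uses; it is much weaker, which is why a few numbers survive it.

```python
# First few strong pseudoprimes to base 2: all composite, yet all pass a
# base-2 strong probable-prime test. Feitsma's database plays this role
# up to 2^64.
SPSP2 = [2047, 3277, 4033, 4681, 8321, 15841, 29341, 42799, 49141]

# Keep those that also pass a base-3 Fermat test. A real BPSW check runs
# the strong Lucas test here instead, which rejects all of these.
survivors = [n for n in SPSP2 if pow(3, n - 1, n) == 1]
print(survivors)
```

A BPSW counterexample would be a number surviving this kind of filter with the genuine strong Lucas test in place of the stand-in; none has ever been found.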
By the way, Khashin has a Frobenius variant that he recently showed, using different methods, has no 64-bit counterexamples. He has slightly changed his test between his 2013 publication and his 2017 talk. The proofs rely extensively on computations which have not been reproduced or examined by others (to my knowledge). He is also, in my opinion, unwarrantedly optimistic about larger counterexamples (he believes there are none). There seems to be no performance reason to prefer this method.
Larger counterexamples
Pomerance gives an argument for counterexamples existing.
Various projects have specifically looked for counterexamples. Chen and Greene have at least one paper that describes directed searches for a counterexample to a weakened version of the BPSW test, without success. This consists of a fairly large set of primes, with the idea that some subset product has a good chance of being a counterexample. Here are the first sentences of the conclusions section of their 2001 paper:
To date, the \$620 appears to be safe. Unless an efficient scheme to search a space of size $2^{1500}$ is found, or an approach other than that suggested by Pomerance can be found, the problem of constructing a counterexample appears to be intractable.
Jon Grantham has given some talks over the last 15 years, including this one in 2013, that describe similar searches. My interpretation is that there are some known sets with fewer than 1200 primes, some subset of which has a product that is likely to be a counterexample. No subset has yet been shown to be one, and the space is far too large for computation to cover.
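A toy version of the subset-product idea looks like this (the pool of primes is arbitrary and purely illustrative; the real searches use on the order of 1200 specially constructed primes and face a space of size about $2^{1500}$, with a far stronger target condition than a single Fermat test):

```python
from itertools import combinations

# Form squarefree products from a small pool of primes and keep those
# that are Fermat pseudoprimes to base 2.
pool = [23, 89, 113, 127]  # arbitrary small primes, for illustration only
candidates = [p * q for p, q in combinations(pool, 2)]
psp2 = [n for n in candidates if pow(2, n - 1, n) == 1]
print(psp2)  # 2047 = 23 * 89 is the classic example
```

The combinatorial shape is the point: a modest pool of primes yields an enormous number of subset products, and the searches try to engineer the pool so that some product passes every component of the target test at once.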
Estimate of smallest counterexample
As others have said, I don't think this has any supporting evidence other than a very large number of trials. We know counterexamples are rare, but there are a lot of numbers.
Primo has been in daily use on many machines for over 15 years. ECPP constructs a chain of primes, each link containing a step of the form "if $p_i$ is prime, then the following data shows that $p_{i+1} > p_i$ is prime". This chain is extended until the first prime is small enough to prove using simple methods like BPSW or small SPSP sets such as Jaeschke 1994. ECPP finds these values, which can then be verified much faster, as no searching or polynomial root finding is needed. As part of running Primo, as well as during the verification step, a BPSW test is performed on candidates. If any of these intermediates were identified as prime by BPSW but ECPP failed to prove them, this would be clearly logged. So far no such numbers have been identified.
Conclusion
BPSW variants have been solidly tested for 30+ years with no counterexample found. The test is used millions of times daily in many applications. Assuming the correctness of the Galway/Feitsma database, there are no 64-bit counterexamples; this is fairly widely believed.
Some directed searches by some really smart people have been made to find counterexamples, with no success so far. In combination with the lack of random counterexamples, this gives a great deal of confidence in the results for large numbers. This is a big barrier to any application considering a different far-less-tested method such as a Frobenius variant.
But there is no proof, nor even a statistical argument, for an expected size of the smallest counterexample. I don't even know of any better bounds than the simple $1/4 \times 1/8$ for a combined M-R + extra-strong Lucas test, which empirically is clearly far too high.
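For concreteness, the naive combination of those per-test worst-case bounds works out as follows; multiplying them assumes the two tests fail independently, which is precisely what is not proven:

```python
from fractions import Fraction

mr_bound = Fraction(1, 4)     # worst-case error of one Miller-Rabin round
lucas_bound = Fraction(1, 8)  # worst-case error of one extra-strong Lucas test
combined = mr_bound * lucas_bound
print(combined)  # 1/32
```

A guaranteed error of at most $1/32$ per run is absurdly pessimistic next to the empirical record of zero failures below $2^{64}$, which is the gap the question is really about.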
A superb answer! Thank you very much! (+1 and accept) – Peter, Sep 6 at 6:26
add a comment |Â
1 Answer
1
active
oldest
votes
1 Answer
1
active
oldest
votes
active
oldest
votes
active
oldest
votes
up vote
2
down vote
accepted
Determinism to $2^64$
This has been done with the Galway/Feitsma database of all 64-bit base-2 pseudoprimes. To my knowledge nobody has independently reproduced this data either with the same methods or different ones. Even with their methods for reducing the search space, it is a massive amount of computation. Naive brute force suffices up to $2^50$ or thereabouts, but that's a small portion of the 64-bit range.
They have described the methods used, and a few people have looked closely at them and believe it is a sound method. The code they used was reviewed by more than one person and some double checks were performed. Independent lists up to $10^13$ have been made but I haven't seen anything past that (the lists up to $10^15$ and $10^17$ were by the same authors of the list to $2^64$).
I have tested numerous variations of Lucas and Frobenius tests using the SPSP-2 list and have found no correctly coded variation in use that has a counterexample. So, on the assumptions that base-2 pseudoprime database is correct and the BPSW test is implemented correctly, then we know there are no 64-bit counterexamples.
By the way, Khashin has a Frobenius variant that he recently showed, using different methods, has no 64-bit counterexamples. He has slightly changed his test from his 2013 publication and his 2017 talk. The proofs rely extensively on computation which have not been reproduced or examined by others (to my knowledge). He is also, in my opinion, unwarrentedly optimistic about larger counterexamples (he believes there are none). There seems to be no performance reason to prefer this method.
Larger counterexamples
Pomerance gives an argument for counterexamples existing.
Various projects have been performed to specifically look for counterexamples. Chen and Greene have at least one paper that describes directed searches for a counterexample to a weakened version of the BPSW test, without success. This consists of a fairly large set of primes, with the idea that some subset product has a good chance of being a counterexample. Here is the first sentences of the conclusions section of their 2001 paper:
To date, the $620 appears to be safe. Unless an efficient scheme to search a space of size $2^1500$ is found, or an approach other than that suggested by Pomerance can be found, the problem of constructing a counterexample appears to be intractable.
Jon Grantham has given some talks over the last 15 years, including this one in 2013, that describe similar searches. My interpretation is that there are some known sets with fewer than 1200 primes, some subset of which has a product that is likely to be a counterexample. No subset has yet been shown to be one, and the space is far too large for computation to cover.
Estimate of smallest counterexample
As others have said, I don't think this has any supporting evidence other than a very large number of trials. We know counterexamples are rare, but there are a lot of numbers.
Primo has been in daily use on many machines for over 15 years. ECPP constructs a chain of primes, each one containing a step of "if $p_i$ is prime, then the following data shows that $p_i+1 > p_i$ is prime". This chain of is made until the first prime is small enough to prove using simple methods like BPSW or small SPSP sets such as Jaeschke 1994. ECPP finds these values, which can then be verified much faster as there is no searching or polynomial root finding needed. As part of running Primo as well as doing a verification step, a BPSW test is performed on candidates. If any of these intermediates were identified as prime by BPSW but ECPP failed to prove them, this is clearly logged. So far no such numbers have been identified.
Conclusion
BPSW variants have been solidly tested for 30+ years with no counterexample found. It is used millions of times daily in many applications. On the assumption of the Galway/Feitsma database, there are no 64-bit counterexamples. This is fairly well believed.
Some directed searches by some really smart people have been made to find counterexamples, with no success so far. In combination with the lack of random counterexamples, this gives a great deal of confidence in the results for large numbers. This is a big barrier to any application considering a different far-less-tested method such as a Frobenius variant.
But there is no proof nor even a statistical argument for a expected size of the smallest counterexample. I don't even know of any better bounds than the simple $1/4 times 1/8$ for a combination M-R + ES Lucas test, which empirically is clearly too high.
A superb answer! Thank you very much ! (+1 and accept)
â Peter
Sep 6 at 6:26
add a comment |Â
up vote
2
down vote
accepted
Determinism to $2^64$
This has been done with the Galway/Feitsma database of all 64-bit base-2 pseudoprimes. To my knowledge nobody has independently reproduced this data either with the same methods or different ones. Even with their methods for reducing the search space, it is a massive amount of computation. Naive brute force suffices up to $2^50$ or thereabouts, but that's a small portion of the 64-bit range.
They have described the methods used, and a few people have looked closely at them and believe it is a sound method. The code they used was reviewed by more than one person and some double checks were performed. Independent lists up to $10^13$ have been made but I haven't seen anything past that (the lists up to $10^15$ and $10^17$ were by the same authors of the list to $2^64$).
I have tested numerous variations of Lucas and Frobenius tests using the SPSP-2 list and have found no correctly coded variation in use that has a counterexample. So, on the assumptions that base-2 pseudoprime database is correct and the BPSW test is implemented correctly, then we know there are no 64-bit counterexamples.
By the way, Khashin has a Frobenius variant that he recently showed, using different methods, has no 64-bit counterexamples. He has slightly changed his test from his 2013 publication and his 2017 talk. The proofs rely extensively on computation which have not been reproduced or examined by others (to my knowledge). He is also, in my opinion, unwarrentedly optimistic about larger counterexamples (he believes there are none). There seems to be no performance reason to prefer this method.
Larger counterexamples
Pomerance gives an argument for counterexamples existing.
Determinism to $2^{64}$
This has been done with the Galway/Feitsma database of all 64-bit base-2 pseudoprimes. To my knowledge, nobody has independently reproduced this data, either with the same methods or with different ones. Even with their methods for reducing the search space, it is a massive amount of computation. Naive brute force suffices up to $2^{50}$ or thereabouts, but that is only a small portion of the 64-bit range.
They have described the methods used, and a few people have looked closely at them and believe the method is sound. The code they used was reviewed by more than one person, and some double checks were performed. Independent lists up to $10^{13}$ have been made, but I haven't seen anything past that (the lists up to $10^{15}$ and $10^{17}$ were by the same authors as the list to $2^{64}$).
I have tested numerous variations of Lucas and Frobenius tests using the SPSP-2 list and have found no correctly coded variation in use that has a counterexample. So, on the assumptions that the base-2 pseudoprime database is correct and the BPSW test is implemented correctly, we know there are no 64-bit counterexamples.
By the way, Khashin has a Frobenius variant that he recently showed, using different methods, has no 64-bit counterexamples. He has slightly changed his test between his 2013 publication and his 2017 talk. The proofs rely extensively on computations which have not been reproduced or examined by others (to my knowledge). He is also, in my opinion, unwarrantedly optimistic about larger counterexamples (he believes there are none). There seems to be no performance reason to prefer this method.
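For concreteness, here is a minimal Python sketch of the kind of test being discussed: a base-2 strong probable-prime test followed by a strong Lucas test with Selfridge's "Method A" parameters. This is my own illustrative implementation, not the exact code used by any of the projects above; the function names are my own.

```python
import math

def is_sprp2(n):
    """Strong probable-prime test to base 2 (one Miller-Rabin round)."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(2, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def is_strong_lucas_prp(n):
    """Strong Lucas test with Selfridge's 'Method A' parameters."""
    # Find D in 5, -7, 9, -11, ... with (D/n) = -1.
    D = 5
    while True:
        j = jacobi(D, n)
        if j == -1:
            break
        if j == 0 and abs(D) % n != 0:
            return False              # D shares a factor with n
        D = -(D + 2) if D > 0 else -(D - 2)
    P, Q = 1, (1 - D) // 4
    # Write n + 1 = d * 2^s with d odd.
    d, s = n + 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    # Binary ladder for the Lucas sequences U_k, V_k (and Q^k).
    U, V, Qk = 1, P, Q % n
    inv2 = (n + 1) // 2               # inverse of 2 mod odd n
    for bit in bin(d)[3:]:            # bits of d after the leading 1
        U, V = U * V % n, (V * V - 2 * Qk) % n
        Qk = Qk * Qk % n
        if bit == '1':
            U, V = (P * U + V) * inv2 % n, (D * U + P * V) * inv2 % n
            Qk = Qk * Q % n
    # n passes if U_d = 0, or V_{d*2^r} = 0 for some 0 <= r < s.
    if U == 0 or V == 0:
        return True
    for _ in range(s - 1):
        V = (V * V - 2 * Qk) % n
        Qk = Qk * Qk % n
        if V == 0:
            return True
    return False

def bpsw(n):
    """BPSW sketch: trial division, SPRP-2, then strong Lucas."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    if math.isqrt(n) ** 2 == n:       # squares have no D with (D/n) = -1
        return False
    return is_sprp2(n) and is_strong_lucas_prp(n)
```

This illustrates why the SPSP-2 database suffices for the 64-bit claim: any composite that `bpsw` wrongly accepts must in particular pass `is_sprp2`, so checking the Lucas half on every entry of the Feitsma list covers all 64-bit candidates. For example, $2047 = 23 \cdot 89$ fools the base-2 test but is caught by the Lucas half.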
Larger counterexamples
Pomerance gives a heuristic argument that counterexamples exist.
Various projects have specifically looked for counterexamples. Chen and Greene have at least one paper describing directed searches for a counterexample to a weakened version of the BPSW test, without success. The search works with a fairly large set of primes, with the idea that some subset product has a good chance of being a counterexample. Here are the first sentences of the conclusions section of their 2001 paper:
To date, the $620 appears to be safe. Unless an efficient scheme to search a space of size $2^{1500}$ is found, or an approach other than that suggested by Pomerance can be found, the problem of constructing a counterexample appears to be intractable.
Jon Grantham has given some talks over the last 15 years, including this one in 2013, that describe similar searches. My interpretation is that there are some known sets with fewer than 1200 primes, some subset of which has a product that is likely to be a counterexample. No subset has yet been shown to be one, and the space is far too large for computation to cover.
Estimate of smallest counterexample
As others have said, I don't think this has any supporting evidence other than a very large number of trials. We know counterexamples are rare, but there are a lot of numbers.
Primo has been in daily use on many machines for over 15 years. ECPP constructs a chain of primes, each link containing a step of the form "if $p_{i+1}$ is prime, then the following data shows that $p_i > p_{i+1}$ is prime". The chain is extended until the final prime is small enough to prove with simple methods such as BPSW or the small SPSP base sets of Jaeschke 1994. ECPP finds these values, which can then be verified much faster, since verification requires no searching or polynomial root finding. As part of running Primo, as well as during the verification step, a BPSW test is performed on candidates. If any of these intermediates were identified as prime by BPSW but ECPP failed to prove them, this would be clearly logged. So far no such numbers have been identified.
Conclusion
BPSW variants have been solidly tested for 30+ years with no counterexample found. The test is used millions of times daily in many applications. Assuming the Galway/Feitsma database is correct, there are no 64-bit counterexamples, and this is fairly widely believed.
Some directed searches by some really smart people have been made to find counterexamples, with no success so far. Combined with the absence of random counterexamples, this gives a great deal of confidence in the results for large numbers. It is also a big barrier for any application considering a different, far-less-tested method such as a Frobenius variant.
But there is no proof, nor even a statistical argument, for an expected size of the smallest counterexample. I don't even know of any better bound than the simple $1/4 \times 1/8$ for a combined M-R + ES Lucas test, which empirically is clearly far too high.
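As a quick empirical sketch of how loose even the $1/4$ factor alone is (my own experiment, not from any of the cited work): counting base-2 strong pseudoprimes directly shows their density among odd composites is orders of magnitude below the worst-case Miller-Rabin bound.

```python
import math

def is_sprp2(n):
    # one Miller-Rabin round to base 2
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(2, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

def is_prime(n):
    # trial division, fine at this scale
    return n > 1 and all(n % p for p in range(2, math.isqrt(n) + 1))

limit = 10**5
spsp2 = [n for n in range(3, limit, 2) if is_sprp2(n) and not is_prime(n)]
odd_composites = sum(1 for n in range(3, limit, 2) if not is_prime(n))

print(len(spsp2), spsp2[0])          # 16 pseudoprimes below 10^5; smallest is 2047
print(len(spsp2) / odd_composites)   # roughly 4e-4, versus the worst-case bound 1/4
```

Of course this only measures the base-2 half of the test, and average-case density is a different quantity from the per-number worst-case bound, but it shows why the $1/4 \times 1/8$ figure is empirically far too pessimistic.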
answered Sep 5 at 19:27
DanaJ
A superb answer! Thank you very much ! (+1 and accept)
â Peter
Sep 6 at 6:26
This seems to be the source for this claim: groups.google.com/d/msg/sci.crypt/a0vacUwax5c/ZvjJn0sy7EgJ. It looks like an off-hand comment (it's ungrammatical), and it's not at all clear that the sentence about the $10000$ digits is based on the preceding information about Primo; in fact, the fact that it starts with "in fact" seems to indicate that it's based on other information (assuming that the ungrammatical "presumably" was meant to read "presumed"). I agree with you that, absent additional information, the claim seems unjustified and overly optimistic.
– joriki
Sep 2 at 11:36
No, I don't think it's circular. The intermediate prime is first tested only probabilistically, in order not to waste time if it turns out to be composite. Then, at the very end, if everything else works out and the proof depends only on the primality of the intermediate prime, it is properly proved to be a prime (namely, recursively with the same algorithm). See this section in the Wikipedia article (in particular the last paragraph of the section).
– joriki
Sep 2 at 12:14
I'm not sure what that's based on. The article says that as of June $2009$, it was confirmed that there are no BPSW pseudoprimes up to $10^{17}\approx2^{56.5}$, so that's just about $\frac{1}{200}$ of the numbers up to $2^{64}$. Unless the Primo test has been applied about $10^{19}$ times (which might be, but seems unlikely), I don't see why there shouldn't be a BPSW pseudoprime below $2^{64}$. I suspect the belief is mostly heuristic, in that the test has been so successful that it seems unlikely to fail if we just increase the numbers by a factor of $200$.
– joriki
Sep 2 at 12:36
I don't know anything about that myself, but I did find this relevant web page. "Furthermore, analysis ($24$ October $2009$) by Gilchrist of an extension of Feitsma's database to $2^{64}$ found no BPSW pseudoprime (standard or strong) below $2^{64}$ (approximately $1.8446744\cdot10^{19}$). This lower bound of $2^{64}$ for any BPSW pseudoprime has since been separately verified ($11$ July $2011$) by Charles Greathouse; however, he also used Feitsma's database, so a completely independent check has not yet been carried out."
– joriki
Sep 2 at 13:19
"Note, however, that Carl Pomerance ($1984$) presented a heuristic argument that an infinite number of counterexamples exist to the standard test (and presumably to the strong test as well, based on similar reasoning), and even that (for sufficiently large $x$, dependent on $\mu$) the number of BPSW pseudoprimes $< x$ exceeds $x^{1-\mu}$, where $\mu$ is an arbitrarily small pre-assigned positive number. Nevertheless, not a single BPSW pseudoprime has yet been found. Consequently, the BPSW test carries an aura of dependability (justified or not) exceeding that of competing algorithms [...]."
– joriki
Sep 2 at 13:21