Examples of notably long or difficult proofs that only improve upon existing results by a small amount

I was recently reading Bui, Conrey and Young's 2011 paper "More than 41% of the zeros of the zeta function are on the critical line", in which they improve the lower bound on the proportion of zeros of the Riemann $\zeta$-function that lie on the critical line. To achieve this, they define the (monstrous) mollifier:



$$
\psi = \sum_{n\leq y_1}\frac{\mu(n)P_1[n]\,n^{\sigma_0-\frac{1}{2}}}{n^s} + \chi\left(s+\tfrac{1}{2}-\sigma_0\right)\sum_{hk\leq y_2}\frac{\mu_2(h)\,h^{\sigma_0-\frac{1}{2}}\,k^{\frac{1}{2}-\sigma_0}}{h^s\,k^{1-s}}\,P_2(hk).
$$



This is followed by a nearly 20-page tour de force of grueling analytic number theory, culminating in an improvement of the previously known bound from $40.88\%$ to... $41.05\%$.
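For a sense of scale, here is a quick back-of-the-envelope check (plain Python; the two percentages are simply the bounds quoted above):

    # Lower bounds (in percent) on the proportion of zeros on the critical line.
    old, new = 40.88, 41.05
    print(new - old)                        # 0.17 percentage points gained
    print(100 * (new - old) / old)          # ~0.42% relative improvement
    print(100 * (new - old) / (100 - old))  # ~0.29% of the remaining gap closed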



Of course any improvement of this lower bound is a highly important achievement, and the methods and techniques used in the paper are deeply inspirational. However, given the highly complex nature of the proof and the small margin of improvement over existing results, I was led to ask the following question:



What are other examples of notably long or difficult proofs that only improve upon existing results by a small amount?










Tagged: soft-question

asked by PierreTheFermented (community wiki)

  • It is really difficult to answer without a precise definition of "small amount". Let me give a provocative example. It was well known that the Poincaré conjecture holds in every dimension different from $3$. Then Perelman came along with his very difficult proof in the case of dimension $3$, and clearly a single case can be seen as a "small amount" when compared with infinitely many :-)
    – Francesco Polizzi, Sep 6 at 8:32







  • @FrancescoPolizzi It might be better considered as "a small amount of what remains to be done" rather than "a small amount of what was needed to be done at the start". So your Poincaré example is 100% of what remained to be done, whereas the example in the question is ~0.3%.
    – Christopher, Sep 6 at 10:32






  • This answer of John Baez seems relevant: mathoverflow.net/questions/31315/…. I'm not sure if any of the proofs he mentions are notably long or difficult, but this problem seems to have an interesting history of mathematicians making microscopic improvements on what was known before.
    – Will Brian, Sep 6 at 13:03






  • en.wikipedia.org/wiki/Moving_sofa_problem
    – Steve Huntsman, Sep 6 at 13:29






  • I'm not qualified to post an answer, but I think there are many such examples in combinatorics, depending on what you mean by "notably long or difficult." For example, pulling one of the first papers from Radziszowski's Small Ramsey Number survey (pdf), Angeltveit and McKay's paper at arxiv.org/abs/1703.08768 checks ~ two trillion cases with a computer program to improve a bound by 1.
    – Ben Burns, Sep 6 at 13:57
















10 Answers

Answer (81 votes)

This 28-page paper by Le Gall lowers the best-known bound on the asymptotic complexity of matrix multiplication from $O(n^{2.3728642})$ to $O(n^{2.3728639})$, a reduction of the exponent by about 0.00001%. However, it simplifies the previous method, so it does not really qualify as a "notably long or difficult proof". Maybe that description applies better to the previous improvement, the 73-page paper by Vassilevska-Williams which brought the exponent down from $O(n^{2.374})$ to $O(n^{2.3728642})$.

– Federico Poloni (community wiki)
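To get a feel for where such fractional exponents come from, here is a toy computation (an illustration of my own, not from either paper): Strassen's 1969 recursion, the first improvement over the schoolbook algorithm, already gives the exponent $\log_2 7 \approx 2.807$.

    import math

    def naive_mults(n):
        # Schoolbook matrix multiplication: n^3 scalar multiplications.
        return n ** 3

    def strassen_mults(n):
        # Strassen multiplies two n x n matrices (n a power of 2) via
        # 7 recursive half-size products instead of 8.
        return 1 if n == 1 else 7 * strassen_mults(n // 2)

    for n in (2 ** 5, 2 ** 10, 2 ** 15):
        print(n, naive_mults(n), strassen_mults(n),
              math.log(strassen_mults(n)) / math.log(n))  # exponent -> 2.807...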






  • It's probably worth pointing out that [according to the papers, I don't know the subject myself] the strong-but-widely-believed conjecture - analogous to the Riemann hypothesis in the example in the question - is that the asymptotic complexity is $O(n^2)$.
    – Christopher, Sep 6 at 10:36






  • @Christopher $O(n^2)$ I think would be a shock, but $O(n^{2+\epsilon})$ for any $\epsilon$ is a lot more plausible.
    – Steven Stadnicki, Sep 6 at 17:11






  • @StevenStadnicki, Christopher: Isn't $O(n^2 \log(n))$ a viable conjecture?
    – Mitch, Sep 9 at 0:26






  • @Mitch A function in $O(n^2\log n)$ is also in $O(n^{2+\epsilon})$ for any $\epsilon>0$; that is just a more general form that includes essentially all "$n^2$ times logarithmic factors" classes.
    – Federico Poloni, Sep 9 at 10:08






  • @FedericoPoloni $O(n^2 \log n)$ is certainly in $O(n^{2+\epsilon})$ for $\epsilon > 0$ but $O(n^{2+\epsilon})$ is not in $O(n^2 \log n)$. The question is whether $O(n^2 \log n)$ would also be shocking.
    – Anush, Sep 11 at 11:14

















Answer (50 votes)

Recently Konyagin and Shkredov improved the exponent $4/3$ in the sum-product estimate in $\mathbb{R}$, namely that $|A+A|+|A\cdot A|\gg |A|^{4/3}$ for every finite $A\subset \mathbb{R}$, to $4/3+5/9813$. This appears to be much harder than the one-page proof for $4/3$.



The conjecture of Erdős is that the exponent approaches $2$.

– Fedor Petrov (community wiki)
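The phenomenon behind the estimate is easy to see numerically (a toy check of my own): a set with additive structure has a large product set and vice versa, so at least one of the two is always large.

    def sizes(A):
        A = list(A)
        sums = {a + b for a in A for b in A}
        prods = {a * b for a in A for b in A}
        return len(A), len(sums), len(prods)

    # Arithmetic progression: |A+A| is as small as possible, |A*A| is large.
    print(sizes(range(1, 101)))
    # Geometric progression: |A*A| is as small as possible, |A+A| is large.
    print(sizes(2 ** k for k in range(100)))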
























  • One other notable example from the same area, already mentioned in the very first section 1.1.1 of Tao and Vu's book. Erdős showed that any finite set $A \subset \mathbb{Z}$ of integers contains a subset $B \subset A$ with $|B| > (|A| + 1)/3$ and free of solutions to $x+y = z$. In Estimates related to sumfree subsets of sets of integers (Israel J. Math., 1997), Bourgain applies an ingenious harmonic analysis to strengthen this bound to... $(|A| + 2)/3$. Then again, this establishes the non-sharpness of a classical result, so who judges whether the improvement is "only by a short amount?"
    – Vesselin Dimitrov, Sep 7 at 22:13






  • @VesselinDimitrov I would say this deserves a separate answer. Regardless of how one judges this result, I think it fits well with the spirit of the question.
    – Wojowu, Sep 8 at 12:33

















up vote
47
down vote













The De Bruijn-Newman constant $\Lambda$ was defined and upper bounded by $\Lambda \leq 1/2$ in 1950. After 58 years of work, in 2008 this upper bound was finally improved to... $\Lambda < 1/2$ (a 0% improvement) in a 26-page paper. The best known upper bound is currently $\Lambda \leq 0.22$. The Riemann hypothesis is equivalent to $\Lambda = 0$, so if it's true then we've got quite a ways to go.

– community wiki (3 revs, 3 users; Andrés E. Caicedo)
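For readers meeting $\Lambda$ for the first time, one common normalization (I am paraphrasing the Polymath 15 write-ups here, so treat the details as a sketch) sets
$$
H_t(z) := \int_0^\infty e^{tu^2}\,\Phi(u)\cos(zu)\,du, \qquad \Phi(u) := \sum_{n=1}^{\infty}\left(2\pi^2 n^4 e^{9u} - 3\pi n^2 e^{5u}\right)e^{-\pi n^2 e^{4u}},
$$
and defines $\Lambda$ as the infimum of the $t$ for which all zeros of $H_t$ are real. Since $H_0$ is a rescaling of Riemann's $\xi$-function, the Riemann hypothesis is precisely the statement that this already happens at $t=0$.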
























  • I think this is a winner when it comes to both relative and absolute smallest improvement!
    – Wojowu, Sep 7 at 17:12






  • Rogers and Tao have proved that the constant is non-negative. See arxiv.org/abs/1801.05914.
    – M. Khan, Sep 7 at 19:19






  • @M.Khan Indeed. The Riemann hypothesis was already known to be equivalent to the statement $\Lambda \leq 0$, so their proof tightened the equivalence to the statement $\Lambda = 0$.
    – tparker, Sep 7 at 19:51






  • Apparently if one thinks about the De Bruijn-Newman constant in isolation, without its connection to the Riemann hypothesis, then it's arguably more natural to conjecture that $\Lambda > 0$ than $\Lambda = 0$. This is considered one of the strongest heuristic arguments against the Riemann hypothesis (although of course not nearly as strong as the many heuristic arguments for it).
    – tparker, Sep 7 at 19:53







  • From $\Lambda\leq 1/2$ to $\Lambda<1/2$ I would not say that is a 0% improvement but instead a "$0^+$"% improvement.
    – Matemáticos Chibchas, Sep 8 at 23:11


















Answer (39 votes)

One example could be the sphere packing problem in $\mathbb{R}^8$, recently finished by Viazovska. It was a well-known conjecture that the $E_8$-lattice packing gives the maximum density for sphere packings in $\mathbb{R}^8$. In 2003, Cohn and Elkies showed, using linear programming, that the optimal density is at most 1.000001 times the density of the $E_8$ packing. Viazovska's paper finally removed this 1.000001 factor. While I would not call her paper very long, it definitely has some highly non-trivial ideas in it (the appearance of modular forms is pretty surprising to me, for example).

– community wiki (2 revs, 2 users; Aknazar Kazhymurat)
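The optimal density itself has a clean closed form, which makes the 1.000001 factor concrete (a quick check of my own, not from the papers):

    import math

    # E8 packing: spheres of radius 1/sqrt(2) centered at the points of the
    # E8 lattice, which has covolume 1. An 8-dimensional ball of radius r
    # has volume pi^4 r^8 / 24, so the density is pi^4 / 384.
    e8_density = math.pi ** 4 / 384
    print(e8_density)               # ~0.25367, now known to be optimal
    print(1.000001 * e8_density)    # the 2003 Cohn-Elkies upper bound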
























  • This is not really a "small amount" since it reduces the approximation to a tight bound.
    – Ethan Bolker, Sep 7 at 16:55

















Answer (33 votes)

The Dirichlet divisor problem has a history of such minor improvements, each with a progressively longer proof. The problem asks for the possible exponents $\theta$ for which we have $\sum_{n\leq x}d(n)=x\log x+(2\gamma-1)x+O(x^\theta)$, where $d$ is the divisor-counting function. It is known that $\theta<\frac14$ can't work, and it's conjectured that any $\theta>\frac14$ does.



Progress towards showing this has been rather slow: Dirichlet showed that $\theta=\frac12$ works, Voronoi improved this to any $\theta>\frac13$, and since then we have had around a dozen papers, each more difficult than the previous one, none of which has improved $\theta$ by more than $0.004$; see here for details.



A similar story plays out for the Gauss circle problem; see the table here.
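The smallness of the error term is easy to witness numerically (my own quick experiment, using the identity $\sum_{n\le x} d(n) = \sum_{m\le x}\lfloor x/m\rfloor$); already at moderate $x$ it is far below Dirichlet's $O(\sqrt{x})$:

    import math

    def divisor_sum(x):
        # sum_{n <= x} d(n) = sum_{m <= x} floor(x/m)
        return sum(x // m for m in range(1, x + 1))

    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    for x in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
        error = divisor_sum(x) - (x * math.log(x) + (2 * gamma - 1) * x)
        print(x, round(error, 2), round(error / math.sqrt(x), 4))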



































Answer (10 votes)

In the 65-page paper A Randomized Rounding Approach to the Traveling Salesman Problem, Oveis Gharan, Saberi, and Singh obtain a $(1.5 - 10^{-52})$-approximation algorithm for the Traveling Salesman Problem on graphical metrics, a problem for which a $1.5$-approximation can be explained and analyzed in under 5 minutes. I have heard that the constant can be improved to $1.5 - 10^{-6}$ with little change to the paper.



Since then, a sequence of simpler works has brought the constant down to 1.4 (which I believe is the latest result).
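The simple $1.5$-approximation alluded to is presumably Christofides' algorithm (a minimum spanning tree plus a minimum-weight matching on its odd-degree vertices). A minimal sketch on a made-up Euclidean instance, assuming the christofides helper available in recent versions of networkx:

    import itertools
    import math
    import networkx as nx
    from networkx.algorithms import approximation as approx

    # Complete graph on five points of the plane, with Euclidean weights.
    points = [(0, 0), (0, 3), (4, 0), (4, 3), (2, 5)]
    G = nx.Graph()
    for (i, p), (j, q) in itertools.combinations(enumerate(points), 2):
        G.add_edge(i, j, weight=math.dist(p, q))

    tour = approx.christofides(G)  # tour of length at most 1.5x optimal
    if tour[0] != tour[-1]:
        tour.append(tour[0])
    print(tour, sum(G[u][v]["weight"] for u, v in zip(tour, tour[1:])))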



































Answer (9 votes)

In the context of the Millennium problem for the Navier-Stokes equations, there is a long history of results along the lines of "if global regularity fails, then such and such norm has to blow up." There are numerous results on minimal improvements in the specification of "such and such." Of course, no one knows at this point whether any norm really blows up or not.
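A representative specimen of such a criterion is the classical Ladyzhenskaya-Prodi-Serrin condition (stated here from memory, so take the endpoint bookkeeping as indicative only): a solution $u$ on $(0,T)$ stays smooth provided
$$
u \in L^q\left((0,T);L^p(\mathbb{R}^3)\right) \quad \text{with} \quad \frac{2}{q}+\frac{3}{p} = 1, \qquad 3 < p \leq \infty,
$$
so if regularity fails at time $T$, every such norm must blow up as $t\to T$. The borderline case $p=3$, $q=\infty$ resisted much longer and was settled by Escauriaza, Seregin and Šverák; many later refinements adjust this family of norms only by logarithmic factors.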



































Answer (9 votes)

Almost any paper on the Gauss Circle Problem written in the last 70 years qualifies.
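For concreteness (a toy computation of my own): the problem asks how close the number of lattice points in a disc of radius $r$ is to its area $\pi r^2$, with the error conjectured to be $O(r^{1/2+\epsilon})$:

    import math

    def lattice_points(r):
        # Count integer points (x, y) with x^2 + y^2 <= r^2.
        return sum(2 * math.isqrt(r * r - x * x) + 1 for x in range(-r, r + 1))

    for r in (10, 100, 1000, 10000):
        error = lattice_points(r) - math.pi * r * r
        print(r, round(error, 2), round(error / math.sqrt(r), 4))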
























  • I have already mentioned that in my answer about the Dirichlet divisor problem.
    – Wojowu, Sep 8 at 12:27

















Answer (8 votes)

Here is another realm where improvements are small and tend towards an unknown but existing limit $S$, with currently known bounds $1.2748 \leq S \leq 1.5098$, where the upper bound seems not too far from the actual value of $S$.



It is about the smallest possible supremum of the autoconvolution of a nonnegative function supported in a compact interval. The discrete version (i.e. restricting to piecewise constant functions) yields optimal functions which are closely related to Generalized Sidon sets.



A fascinating article is Improved bounds on the supremum of autoconvolutions. As a teaser, have a look at the diagram on page 10.

If the condition "nonnegative" on the function is removed, surprisingly the new supremum may be even smaller.
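A discrete toy version of the quantity (this uses my reading of the paper's normalization, unit mass on an interval of length $1/2$, so the numbers are only illustrative): the uniform density has autoconvolution with supremum $2$, while the quoted bounds say the infimum over all admissible functions lies between $1.2748$ and $1.5098$.

    import numpy as np

    n = 400
    h = 0.5 / n                       # interval of length 1/2 cut into n steps
    f = np.full(n, 2.0)               # uniform nonnegative density, integral 1
    autoconv = np.convolve(f, f) * h  # Riemann-sum approximation of f*f
    print(autoconv.max())             # 2.0 for the uniform choice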



































Answer (8 votes)

Let $\phi$ be a univalent (i.e., holomorphic and injective) function on the unit disc. Consider the growth rate of the lengths of the images of the circles $|z|=r$ as $r$ goes to $1$:
$$
\limsup_{r\to 1^-}\frac{\log\int_0^{2\pi}\left|\phi'(re^{i\theta})\right|\,d\theta}{-\log(1-r)},
$$
and denote by $\gamma$ the supremum of this quantity over all bounded univalent $\phi$.



Beliaev and Smirnov describe the work on upper bounds for $\gamma$, as of 2004:




          Conjectural value of $gamma=B(1)$
          is $1/4$, but existing estimates are
          quite far. The first result in this direction is due to Bieberbach [7] who in 1914
          used his area theorem to prove that
          $gammaleq 1/2$. <...> Clunie and Pommerenke in [16] proved that
          $gamma
          leq
          1
          /
          2
          −
          1
          /
          300$ <...> Carleson and Jones [13] <...> used
          Marcinkiewicz integrals to prove
          $gamma<
          0.49755$. This estimate was improved
          by Makarov and Pommerenke [43] to
          $gamma<
          0.4886$ and then by Grinshpan and
          Pommerenke [21] to
          $γ<0.4884$. The best current estimate is due to Hedenmalm
          and Shimorin [24] who quite recently proved that
          $B(1)<0.46.$




I guess the latter estimate is still the best as of now.






          share|cite|improve this answer






















            Your Answer




            StackExchange.ifUsing("editor", function ()
            return StackExchange.using("mathjaxEditing", function ()
            StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
            StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
            );
            );
            , "mathjax-editing");

            StackExchange.ready(function()
            var channelOptions =
            tags: "".split(" "),
            id: "504"
            ;
            initTagRenderer("".split(" "), "".split(" "), channelOptions);

            StackExchange.using("externalEditor", function()
            // Have to fire editor after snippets, if snippets enabled
            if (StackExchange.settings.snippets.snippetsEnabled)
            StackExchange.using("snippets", function()
            createEditor();
            );

            else
            createEditor();

            );

            function createEditor()
            StackExchange.prepareEditor(
            heartbeatType: 'answer',
            convertImagesToLinks: true,
            noModals: false,
            showLowRepImageUploadWarning: true,
            reputationToPostImages: 10,
            bindNavPrevention: true,
            postfix: "",
            noCode: true, onDemand: true,
            discardSelector: ".discard-answer"
            ,immediatelyShowMarkdownHelp:true
            );



            );













             

            draft saved


            draft discarded


















            StackExchange.ready(
            function ()
            StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmathoverflow.net%2fquestions%2f309974%2fexamples-of-notably-long-or-difficult-proofs-that-only-improve-upon-existing-res%23new-answer', 'question_page');

            );

            Post as a guest






























            10 Answers
            10






            active

            oldest

            votes








            10 Answers
            10






            active

            oldest

            votes









            active

            oldest

            votes






            active

            oldest

            votes








            up vote
            81
            down vote













            This 28-page paper by Le Gall lowers the best-known estimate for the asymptotic complexity of matrix multiplication from $O(n^2.3728642)$ to $O(n^2.3728639)$, a reduction by 0.00001%. However, it simplifies the previous method, so it does not really qualify as "notably long or difficult proof". Maybe this definition applies more to the previous improvement in a 73-page paper by Vassilevska-Williams, which brought down the exponent from $O(n^2.374)$ to $O(n^2.3728642)$






            share|cite|improve this answer


















            • 16




              It's probably worth pointing out that [according to the papers, I don't know the subject myself] the strong-but-widely-believed conjecture - analogous to the Riemann hypothesis in the example in the question - is that the asymptotic complexity is $O(n^2)$.
              – Christopher
              Sep 6 at 10:36






            • 21




              @Christopher $O(n^2)$ I think would be a shock, but $O(n^2+epsilon)$ for any $epsilon$ is a lot more plausible.
              – Steven Stadnicki
              Sep 6 at 17:11






            • 3




              @StevenStadnicki, Christopher: Isn't $O(n^2 log(n))$ a viable conjecture?
              – Mitch
              Sep 9 at 0:26






            • 5




              @Mitch A function in $O(n^2log n)$ is also in $O(n^2+epsilon)$ for any $epsilon>0$; that is just a more general form that includes essentially all "$n^2$ times logarithmic factors" classes.
              – Federico Poloni
              Sep 9 at 10:08






            • 2




              @FedericoPoloni $O(n^2 logn)$ is certainly in $O(n^2+epsilon)$ for $epsilon > 0$ but $O(n^2+epsilon)$ is not in $O(n^2 logn)$. The question is whether $O(n^2 logn)$ would also be shocking.
              – Anush
              Sep 11 at 11:14














            up vote
            81
            down vote













            This 28-page paper by Le Gall lowers the best-known estimate for the asymptotic complexity of matrix multiplication from $O(n^2.3728642)$ to $O(n^2.3728639)$, a reduction by 0.00001%. However, it simplifies the previous method, so it does not really qualify as "notably long or difficult proof". Maybe this definition applies more to the previous improvement in a 73-page paper by Vassilevska-Williams, which brought down the exponent from $O(n^2.374)$ to $O(n^2.3728642)$






            share|cite|improve this answer


















            • 16




              It's probably worth pointing out that [according to the papers, I don't know the subject myself] the strong-but-widely-believed conjecture - analogous to the Riemann hypothesis in the example in the question - is that the asymptotic complexity is $O(n^2)$.
              – Christopher
              Sep 6 at 10:36






            • 21




              @Christopher $O(n^2)$ I think would be a shock, but $O(n^2+epsilon)$ for any $epsilon$ is a lot more plausible.
              – Steven Stadnicki
              Sep 6 at 17:11






            • 3




              @StevenStadnicki, Christopher: Isn't $O(n^2 log(n))$ a viable conjecture?
              – Mitch
              Sep 9 at 0:26






            • 5




              @Mitch A function in $O(n^2log n)$ is also in $O(n^2+epsilon)$ for any $epsilon>0$; that is just a more general form that includes essentially all "$n^2$ times logarithmic factors" classes.
              – Federico Poloni
              Sep 9 at 10:08






            • 2




              @FedericoPoloni $O(n^2 logn)$ is certainly in $O(n^2+epsilon)$ for $epsilon > 0$ but $O(n^2+epsilon)$ is not in $O(n^2 logn)$. The question is whether $O(n^2 logn)$ would also be shocking.
              – Anush
              Sep 11 at 11:14












            up vote
            81
            down vote










            up vote
            81
            down vote









            This 28-page paper by Le Gall lowers the best-known estimate for the asymptotic complexity of matrix multiplication from $O(n^2.3728642)$ to $O(n^2.3728639)$, a reduction by 0.00001%. However, it simplifies the previous method, so it does not really qualify as "notably long or difficult proof". Maybe this definition applies more to the previous improvement in a 73-page paper by Vassilevska-Williams, which brought down the exponent from $O(n^2.374)$ to $O(n^2.3728642)$






            share|cite|improve this answer














            This 28-page paper by Le Gall lowers the best-known estimate for the asymptotic complexity of matrix multiplication from $O(n^2.3728642)$ to $O(n^2.3728639)$, a reduction by 0.00001%. However, it simplifies the previous method, so it does not really qualify as "notably long or difficult proof". Maybe this definition applies more to the previous improvement in a 73-page paper by Vassilevska-Williams, which brought down the exponent from $O(n^2.374)$ to $O(n^2.3728642)$







            share|cite|improve this answer














            share|cite|improve this answer



            share|cite|improve this answer








            edited Sep 6 at 9:02


























            community wiki





            Federico Poloni








            • 16




              It's probably worth pointing out that [according to the papers, I don't know the subject myself] the strong-but-widely-believed conjecture - analogous to the Riemann hypothesis in the example in the question - is that the asymptotic complexity is $O(n^2)$.
              – Christopher
              Sep 6 at 10:36






            • 21




              @Christopher $O(n^2)$ I think would be a shock, but $O(n^2+epsilon)$ for any $epsilon$ is a lot more plausible.
              – Steven Stadnicki
              Sep 6 at 17:11






            • 3




              @StevenStadnicki, Christopher: Isn't $O(n^2 log(n))$ a viable conjecture?
              – Mitch
              Sep 9 at 0:26






            • 5




              @Mitch A function in $O(n^2log n)$ is also in $O(n^2+epsilon)$ for any $epsilon>0$; that is just a more general form that includes essentially all "$n^2$ times logarithmic factors" classes.
              – Federico Poloni
              Sep 9 at 10:08






            • 2




              @FedericoPoloni $O(n^2 logn)$ is certainly in $O(n^2+epsilon)$ for $epsilon > 0$ but $O(n^2+epsilon)$ is not in $O(n^2 logn)$. The question is whether $O(n^2 logn)$ would also be shocking.
              – Anush
              Sep 11 at 11:14












            • 16




              It's probably worth pointing out that [according to the papers, I don't know the subject myself] the strong-but-widely-believed conjecture - analogous to the Riemann hypothesis in the example in the question - is that the asymptotic complexity is $O(n^2)$.
              – Christopher
              Sep 6 at 10:36






            • 21




              @Christopher $O(n^2)$ I think would be a shock, but $O(n^2+epsilon)$ for any $epsilon$ is a lot more plausible.
              – Steven Stadnicki
              Sep 6 at 17:11






            • 3




              @StevenStadnicki, Christopher: Isn't $O(n^2 log(n))$ a viable conjecture?
              – Mitch
              Sep 9 at 0:26






            • 5




              @Mitch A function in $O(n^2log n)$ is also in $O(n^2+epsilon)$ for any $epsilon>0$; that is just a more general form that includes essentially all "$n^2$ times logarithmic factors" classes.
              – Federico Poloni
              Sep 9 at 10:08






            • 2




              @FedericoPoloni $O(n^2 logn)$ is certainly in $O(n^2+epsilon)$ for $epsilon > 0$ but $O(n^2+epsilon)$ is not in $O(n^2 logn)$. The question is whether $O(n^2 logn)$ would also be shocking.
              – Anush
              Sep 11 at 11:14







            16




            16




            It's probably worth pointing out that [according to the papers, I don't know the subject myself] the strong-but-widely-believed conjecture - analogous to the Riemann hypothesis in the example in the question - is that the asymptotic complexity is $O(n^2)$.
            – Christopher
            Sep 6 at 10:36




            It's probably worth pointing out that [according to the papers, I don't know the subject myself] the strong-but-widely-believed conjecture - analogous to the Riemann hypothesis in the example in the question - is that the asymptotic complexity is $O(n^2)$.
            – Christopher
            Sep 6 at 10:36




            21




            21




            @Christopher $O(n^2)$ I think would be a shock, but $O(n^2+epsilon)$ for any $epsilon$ is a lot more plausible.
            – Steven Stadnicki
            Sep 6 at 17:11




            @Christopher $O(n^2)$ I think would be a shock, but $O(n^2+epsilon)$ for any $epsilon$ is a lot more plausible.
            – Steven Stadnicki
            Sep 6 at 17:11




            3




            3




            @StevenStadnicki, Christopher: Isn't $O(n^2 log(n))$ a viable conjecture?
            – Mitch
            Sep 9 at 0:26




            @StevenStadnicki, Christopher: Isn't $O(n^2 log(n))$ a viable conjecture?
            – Mitch
            Sep 9 at 0:26




            5




            5




            @Mitch A function in $O(n^2log n)$ is also in $O(n^2+epsilon)$ for any $epsilon>0$; that is just a more general form that includes essentially all "$n^2$ times logarithmic factors" classes.
            – Federico Poloni
            Sep 9 at 10:08




            @Mitch A function in $O(n^2log n)$ is also in $O(n^2+epsilon)$ for any $epsilon>0$; that is just a more general form that includes essentially all "$n^2$ times logarithmic factors" classes.
            – Federico Poloni
            Sep 9 at 10:08




            2




            2




            @FedericoPoloni $O(n^2 logn)$ is certainly in $O(n^2+epsilon)$ for $epsilon > 0$ but $O(n^2+epsilon)$ is not in $O(n^2 logn)$. The question is whether $O(n^2 logn)$ would also be shocking.
            – Anush
            Sep 11 at 11:14




            @FedericoPoloni $O(n^2 logn)$ is certainly in $O(n^2+epsilon)$ for $epsilon > 0$ but $O(n^2+epsilon)$ is not in $O(n^2 logn)$. The question is whether $O(n^2 logn)$ would also be shocking.
            – Anush
            Sep 11 at 11:14










            up vote
            50
            down vote













            Recently Konyagin and Shkredov improved the exponent of $4/3$ in the sum-products estimate in $mathbbR$, namely that $|A+A|+|Acdot A|gg |A|^4/3$ for every $Asubset mathbbR$, to $4/3+5/9813$. This appears to be much harder than the one-page proof for $4/3$.



            The conjecture of Erdős is that the exponent approaches 2.






            share|cite|improve this answer


















            • 6




              One other notable example from the same area, already mentioned in the very first section 1.1.1 of Tao and Vu's book. Erdos showed that any finite set $A subset mathbbZ$ of integers contains a subset $B subset A$ with $|B| > (|A| + 1)/3$ and free of solutions to $x+y = z$. In Estimates related to sumfree subsets of sets of integers (Israel J. Math., 1997), Bourgain applies an ingenious harmonic analysis to strengthen this bound to... $(|A| + 2)/3$. Then again, this establishes the non-sharpness of a classical result, so who judges whether the improvement is "only by a short amount?"
              – Vesselin Dimitrov
              Sep 7 at 22:13






            • 6




              @VesselinDimitrov I would say this deserves a separate answer. Regardless of how one judges this result, I think it fits well with the spirit of the question.
              – Wojowu
              Sep 8 at 12:33














            up vote
            50
            down vote













            Recently Konyagin and Shkredov improved the exponent of $4/3$ in the sum-products estimate in $mathbbR$, namely that $|A+A|+|Acdot A|gg |A|^4/3$ for every $Asubset mathbbR$, to $4/3+5/9813$. This appears to be much harder than the one-page proof for $4/3$.



            The conjecture of Erdős is that the exponent approaches 2.






            share|cite|improve this answer


















            • 6




              One other notable example from the same area, already mentioned in the very first section 1.1.1 of Tao and Vu's book. Erdos showed that any finite set $A subset mathbbZ$ of integers contains a subset $B subset A$ with $|B| > (|A| + 1)/3$ and free of solutions to $x+y = z$. In Estimates related to sumfree subsets of sets of integers (Israel J. Math., 1997), Bourgain applies an ingenious harmonic analysis to strengthen this bound to... $(|A| + 2)/3$. Then again, this establishes the non-sharpness of a classical result, so who judges whether the improvement is "only by a short amount?"
              – Vesselin Dimitrov
              Sep 7 at 22:13






            • 6




              @VesselinDimitrov I would say this deserves a separate answer. Regardless of how one judges this result, I think it fits well with the spirit of the question.
              – Wojowu
              Sep 8 at 12:33












            up vote
            50
            down vote










            up vote
            50
            down vote









            Recently Konyagin and Shkredov improved the exponent of $4/3$ in the sum-products estimate in $mathbbR$, namely that $|A+A|+|Acdot A|gg |A|^4/3$ for every $Asubset mathbbR$, to $4/3+5/9813$. This appears to be much harder than the one-page proof for $4/3$.



            The conjecture of Erdős is that the exponent approaches 2.






            share|cite|improve this answer














            Recently Konyagin and Shkredov improved the exponent of $4/3$ in the sum-products estimate in $mathbbR$, namely that $|A+A|+|Acdot A|gg |A|^4/3$ for every $Asubset mathbbR$, to $4/3+5/9813$. This appears to be much harder than the one-page proof for $4/3$.



            The conjecture of Erdős is that the exponent approaches 2.







            share|cite|improve this answer














            share|cite|improve this answer



            share|cite|improve this answer








            edited Sep 6 at 12:56


























            community wiki





            Fedor Petrov








            • 6




              One other notable example from the same area, already mentioned in the very first section 1.1.1 of Tao and Vu's book. Erdos showed that any finite set $A subset mathbbZ$ of integers contains a subset $B subset A$ with $|B| > (|A| + 1)/3$ and free of solutions to $x+y = z$. In Estimates related to sumfree subsets of sets of integers (Israel J. Math., 1997), Bourgain applies an ingenious harmonic analysis to strengthen this bound to... $(|A| + 2)/3$. Then again, this establishes the non-sharpness of a classical result, so who judges whether the improvement is "only by a short amount?"
              – Vesselin Dimitrov
              Sep 7 at 22:13






            • 6




              @VesselinDimitrov I would say this deserves a separate answer. Regardless of how one judges this result, I think it fits well with the spirit of the question.
              – Wojowu
              Sep 8 at 12:33












            • 6




              One other notable example from the same area, already mentioned in the very first section 1.1.1 of Tao and Vu's book. Erdos showed that any finite set $A subset mathbbZ$ of integers contains a subset $B subset A$ with $|B| > (|A| + 1)/3$ and free of solutions to $x+y = z$. In Estimates related to sumfree subsets of sets of integers (Israel J. Math., 1997), Bourgain applies an ingenious harmonic analysis to strengthen this bound to... $(|A| + 2)/3$. Then again, this establishes the non-sharpness of a classical result, so who judges whether the improvement is "only by a short amount?"
              – Vesselin Dimitrov
              Sep 7 at 22:13






            • 6




              @VesselinDimitrov I would say this deserves a separate answer. Regardless of how one judges this result, I think it fits well with the spirit of the question.
              – Wojowu
              Sep 8 at 12:33







            6




            6




            One other notable example from the same area, already mentioned in the very first section 1.1.1 of Tao and Vu's book. Erdos showed that any finite set $A subset mathbbZ$ of integers contains a subset $B subset A$ with $|B| > (|A| + 1)/3$ and free of solutions to $x+y = z$. In Estimates related to sumfree subsets of sets of integers (Israel J. Math., 1997), Bourgain applies an ingenious harmonic analysis to strengthen this bound to... $(|A| + 2)/3$. Then again, this establishes the non-sharpness of a classical result, so who judges whether the improvement is "only by a short amount?"
            – Vesselin Dimitrov
            Sep 7 at 22:13




            One other notable example from the same area, already mentioned in the very first section 1.1.1 of Tao and Vu's book. Erdos showed that any finite set $A subset mathbbZ$ of integers contains a subset $B subset A$ with $|B| > (|A| + 1)/3$ and free of solutions to $x+y = z$. In Estimates related to sumfree subsets of sets of integers (Israel J. Math., 1997), Bourgain applies an ingenious harmonic analysis to strengthen this bound to... $(|A| + 2)/3$. Then again, this establishes the non-sharpness of a classical result, so who judges whether the improvement is "only by a short amount?"
            – Vesselin Dimitrov
            Sep 7 at 22:13




            6




            6




            @VesselinDimitrov I would say this deserves a separate answer. Regardless of how one judges this result, I think it fits well with the spirit of the question.
            – Wojowu
            Sep 8 at 12:33




            @VesselinDimitrov I would say this deserves a separate answer. Regardless of how one judges this result, I think it fits well with the spirit of the question.
            – Wojowu
            Sep 8 at 12:33










            up vote
            47
            down vote













            The De Bruijn-Newman constant $Lambda$ was defined and upper bounded by $Lambda leq 1/2$ in 1950. After 58 years of work, in 2008 this upper bound was finally improved to ... $Lambda < 1/2$ (a 0% improvement) in a 26-page paper. The best known upper bound is currently $Lambda leq 0.22$. The Riemann hypothesis is equivalent to $Lambda = 0$, so if it's true then we've got quite a ways to go.






            share|cite|improve this answer


















            • 10




              I think this is a winner when it comes to both relative and absolute smallest improvement!
              – Wojowu
              Sep 7 at 17:12






            • 1




              Rogers and Tao have proved that constant is non-negative. See arxiv.org/abs/1801.05914.
              – M. Khan
              Sep 7 at 19:19






            • 2




              @M.Khan Indeed. The Riemann hypothesis was already known to be equivalent to the statement $Lambda leq 0$, so their proof tightened the equivalence to the statement $Lambda = 0$.
              – tparker
              Sep 7 at 19:51






            • 6




              Apparently if one thinks about the De Bruijn-Newman constant in isolation, without its connection to the Riemann hypothesis, then it's arguably more natural to conjecture that $Lambda > 0$ than $Lambda = 0$. This is considered one of the strongest heuristic arguments against the Riemann hypothesis (although of course not nearly as strong as the many heuristic arguments for it).
              – tparker
              Sep 7 at 19:53







            • 9




              From $Lambdaleq1/2$ to $Lambda<1/2$ I would not say that is a 0% improvement but instead a "0$^+$"% improvement.
              – Matemáticos Chibchas
              Sep 8 at 23:11















            up vote
            47
            down vote













            The De Bruijn-Newman constant $Lambda$ was defined and upper bounded by $Lambda leq 1/2$ in 1950. After 58 years of work, in 2008 this upper bound was finally improved to ... $Lambda < 1/2$ (a 0% improvement) in a 26-page paper. The best known upper bound is currently $Lambda leq 0.22$. The Riemann hypothesis is equivalent to $Lambda = 0$, so if it's true then we've got quite a ways to go.






            share|cite|improve this answer


















            • 10




              I think this is a winner when it comes to both relative and absolute smallest improvement!
              – Wojowu
              Sep 7 at 17:12






            • 1




              Rogers and Tao have proved that constant is non-negative. See arxiv.org/abs/1801.05914.
              – M. Khan
              Sep 7 at 19:19






            • 2




              @M.Khan Indeed. The Riemann hypothesis was already known to be equivalent to the statement $Lambda leq 0$, so their proof tightened the equivalence to the statement $Lambda = 0$.
              – tparker
              Sep 7 at 19:51






            • 6




              Apparently if one thinks about the De Bruijn-Newman constant in isolation, without its connection to the Riemann hypothesis, then it's arguably more natural to conjecture that $Lambda > 0$ than $Lambda = 0$. This is considered one of the strongest heuristic arguments against the Riemann hypothesis (although of course not nearly as strong as the many heuristic arguments for it).
              – tparker
              Sep 7 at 19:53







            • 9




              From $Lambdaleq1/2$ to $Lambda<1/2$ I would not say that is a 0% improvement but instead a "0$^+$"% improvement.
              – Matemáticos Chibchas
              Sep 8 at 23:11













            up vote
            47
            down vote










            up vote
            47
            down vote









            The De Bruijn-Newman constant $Lambda$ was defined and upper bounded by $Lambda leq 1/2$ in 1950. After 58 years of work, in 2008 this upper bound was finally improved to ... $Lambda < 1/2$ (a 0% improvement) in a 26-page paper. The best known upper bound is currently $Lambda leq 0.22$. The Riemann hypothesis is equivalent to $Lambda = 0$, so if it's true then we've got quite a ways to go.






            share|cite|improve this answer














            The De Bruijn-Newman constant $Lambda$ was defined and upper bounded by $Lambda leq 1/2$ in 1950. After 58 years of work, in 2008 this upper bound was finally improved to ... $Lambda < 1/2$ (a 0% improvement) in a 26-page paper. The best known upper bound is currently $Lambda leq 0.22$. The Riemann hypothesis is equivalent to $Lambda = 0$, so if it's true then we've got quite a ways to go.







            share|cite|improve this answer














            share|cite|improve this answer



            share|cite|improve this answer








            edited Sep 7 at 21:41


























            community wiki





            3 revs, 3 users 67%
            Andrés E. Caicedo








            • 10




              I think this is a winner when it comes to both relative and absolute smallest improvement!
              – Wojowu
              Sep 7 at 17:12






            • 1




              Rogers and Tao have proved that constant is non-negative. See arxiv.org/abs/1801.05914.
              – M. Khan
              Sep 7 at 19:19






            • 2




              @M.Khan Indeed. The Riemann hypothesis was already known to be equivalent to the statement $Lambda leq 0$, so their proof tightened the equivalence to the statement $Lambda = 0$.
              – tparker
              Sep 7 at 19:51






            • 6




              Apparently if one thinks about the De Bruijn-Newman constant in isolation, without its connection to the Riemann hypothesis, then it's arguably more natural to conjecture that $Lambda > 0$ than $Lambda = 0$. This is considered one of the strongest heuristic arguments against the Riemann hypothesis (although of course not nearly as strong as the many heuristic arguments for it).
              – tparker
              Sep 7 at 19:53







            • 9




              From $Lambdaleq1/2$ to $Lambda<1/2$ I would not say that is a 0% improvement but instead a "0$^+$"% improvement.
              – Matemáticos Chibchas
              Sep 8 at 23:11













            • 10




              I think this is a winner when it comes to both relative and absolute smallest improvement!
              – Wojowu
              Sep 7 at 17:12






            • 1




              Rogers and Tao have proved that constant is non-negative. See arxiv.org/abs/1801.05914.
              – M. Khan
              Sep 7 at 19:19






            • 2




              @M.Khan Indeed. The Riemann hypothesis was already known to be equivalent to the statement $Lambda leq 0$, so their proof tightened the equivalence to the statement $Lambda = 0$.
              – tparker
              Sep 7 at 19:51






            • 6




              Apparently if one thinks about the De Bruijn-Newman constant in isolation, without its connection to the Riemann hypothesis, then it's arguably more natural to conjecture that $Lambda > 0$ than $Lambda = 0$. This is considered one of the strongest heuristic arguments against the Riemann hypothesis (although of course not nearly as strong as the many heuristic arguments for it).
              – tparker
              Sep 7 at 19:53







            • 9




              From $Lambdaleq1/2$ to $Lambda<1/2$ I would not say that is a 0% improvement but instead a "0$^+$"% improvement.
              – Matemáticos Chibchas
              Sep 8 at 23:11








            10




            10




            I think this is a winner when it comes to both relative and absolute smallest improvement!
            – Wojowu
            Sep 7 at 17:12




            I think this is a winner when it comes to both relative and absolute smallest improvement!
            – Wojowu
            Sep 7 at 17:12




            1




            1




            Rogers and Tao have proved that constant is non-negative. See arxiv.org/abs/1801.05914.
            – M. Khan
            Sep 7 at 19:19




            Rogers and Tao have proved that constant is non-negative. See arxiv.org/abs/1801.05914.
            – M. Khan
            Sep 7 at 19:19




            2




            2




            @M.Khan Indeed. The Riemann hypothesis was already known to be equivalent to the statement $Lambda leq 0$, so their proof tightened the equivalence to the statement $Lambda = 0$.
            – tparker
            Sep 7 at 19:51




            @M.Khan Indeed. The Riemann hypothesis was already known to be equivalent to the statement $Lambda leq 0$, so their proof tightened the equivalence to the statement $Lambda = 0$.
            – tparker
            Sep 7 at 19:51




            6




            6




            Apparently if one thinks about the De Bruijn-Newman constant in isolation, without its connection to the Riemann hypothesis, then it's arguably more natural to conjecture that $Lambda > 0$ than $Lambda = 0$. This is considered one of the strongest heuristic arguments against the Riemann hypothesis (although of course not nearly as strong as the many heuristic arguments for it).
            – tparker
            Sep 7 at 19:53





            Apparently if one thinks about the De Bruijn-Newman constant in isolation, without its connection to the Riemann hypothesis, then it's arguably more natural to conjecture that $Lambda > 0$ than $Lambda = 0$. This is considered one of the strongest heuristic arguments against the Riemann hypothesis (although of course not nearly as strong as the many heuristic arguments for it).
            – tparker
            Sep 7 at 19:53





            9




            9




            From $Lambdaleq1/2$ to $Lambda<1/2$ I would not say that is a 0% improvement but instead a "0$^+$"% improvement.
            – Matemáticos Chibchas
            Sep 8 at 23:11





            From $Lambdaleq1/2$ to $Lambda<1/2$ I would not say that is a 0% improvement but instead a "0$^+$"% improvement.
            – Matemáticos Chibchas
            Sep 8 at 23:11











            up vote
            39
            down vote













            One example could be the sphere packing problem in $mathbbR^8$, recently finished by Viazovska. It was a well-known conjecture that $E_8$-lattice packing gives the maximum density for sphere packings in $mathbbR^8$. In 2003, Cohn and Elkies have shown using linear programming that the optimal density is less or equal to 1.000001 times the density of $E_8$ packing. Viazovska's paper finally removed this 1.000001 factor. While I would not call her paper very long, it definitely has some highly non-trivial ideas in it (the appearance of modular forms is pretty surprising to me, for example).






edited Sep 7 at 0:24

community wiki
2 revs, 2 users 67%
Aknazar Kazhymurat

• 21
  This is not really a "small amount" since it reduces the approximation to a tight bound.
  – Ethan Bolker
  Sep 7 at 16:55










            up vote
            33
            down vote













The Dirichlet divisor problem has a history of such minor improvements, each with a progressively longer proof. The problem asks for the possible exponents $\theta$ for which we have $\sum_{n\leq x}d(n)=x\log x+(2\gamma-1)x+O(x^\theta)$, where $d$ is the divisor-counting function. It is known that $\theta<\frac14$ cannot work, and it is conjectured that any $\theta>\frac14$ does.

Progress towards showing this has been rather slow: Dirichlet showed that $\theta=\frac12$ works, Voronoi improved this to any $\theta>\frac13$, and since then we have had around a dozen papers, each more difficult than the previous one, none of which has improved $\theta$ by more than $0.004$; see here for details.

A similar story plays out with the Gauss circle problem; see the table here.
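To see the main term in action, here is a quick numerical sketch (my own illustration, not from the cited papers), using the identity $\sum_{n\leq x}d(n)=\sum_{n\leq x}\lfloor x/n\rfloor$, which counts the pairs $(n,m)$ with $nm\leq x$:

from math import log

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def divisor_sum(x: int) -> int:
    # sum_{n <= x} d(n) = sum_{n <= x} floor(x / n)
    return sum(x // n for n in range(1, x + 1))

for x in (10**3, 10**4, 10**5):
    main_term = x * log(x) + (2 * EULER_GAMMA - 1) * x
    error = divisor_sum(x) - main_term
    # conjecturally error = O(x^theta) for every theta > 1/4
    print(x, divisor_sum(x), round(error, 1), round(error / x**0.25, 2))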






answered Sep 6 at 15:10

community wiki
Wojowu

                    up vote
                    10
                    down vote













In the 65-page paper A Randomized Rounding Approach to the Traveling Salesman Problem, Oveis Gharan, Saberi, and Singh obtain a $(1.5 - 10^{-52})$-approximation algorithm for the Traveling Salesman Problem on graphical metrics, a problem for which a $1.5$-approximation can be explained and analyzed in under 5 minutes. I have heard that the constant can be improved to $1.5 - 10^{-6}$ with little change to the paper.

Since then, a sequence of simpler works has brought the constant down to 1.4 (which I believe is the latest result).
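For context, the five-minute $1.5$-approximation alluded to is presumably Christofides' classical algorithm. A minimal sketch of it using networkx (my own illustration, assuming a complete graph whose 'weight' attributes satisfy the triangle inequality; this shows the classical algorithm, not the paper's):

import networkx as nx

def christofides_tour(G):
    # 1. Minimum spanning tree.
    T = nx.minimum_spanning_tree(G, weight="weight")
    # 2. Vertices of odd degree in T (there is always an even number).
    odd = [v for v, deg in T.degree() if deg % 2 == 1]
    # 3. Minimum-weight perfect matching on those vertices.
    M = nx.min_weight_matching(G.subgraph(odd), weight="weight")
    # 4. T plus M has all degrees even, hence an Eulerian circuit.
    H = nx.MultiGraph(T)
    H.add_edges_from(M)
    # 5. Shortcut repeated vertices; by the triangle inequality this
    #    never lengthens the walk, giving a tour of cost <= 1.5 * OPT.
    tour, seen = [], set()
    for u, _ in nx.eulerian_circuit(H):
        if u not in seen:
            seen.add(u)
            tour.append(u)
    return tour + [tour[0]]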






answered Sep 12 at 1:13

community wiki
Yonatan N

                            up vote
                            9
                            down vote













In the context of the Millennium Prize problem for the Navier–Stokes equations, there is a long history of results along the lines of "if global regularity fails, then such-and-such norm has to blow up." There are numerous results making minimal improvements in the specification of "such-and-such." Of course, no one knows at this point whether any norm really blows up or not.






edited Sep 7 at 1:30

community wiki
2 revs
Michael Renardy


                                    up vote
                                    9
                                    down vote













                                    Almost any paper on the Gauss Circle Problem written in the last 70 years qualifies.
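For readers meeting it for the first time, the problem asks how closely the number $N(r)$ of integer lattice points in a disc of radius $r$ tracks the area $\pi r^2$; conjecturally $N(r)-\pi r^2=O(r^\theta)$ for every $\theta>1/2$, while the proved exponents have crept downward from Gauss's $1$ by ever smaller amounts. A small numerical sketch (my own illustration):

import math

def lattice_count(r: int) -> int:
    # N(r): integer points (x, y) with x^2 + y^2 <= r^2
    total = 0
    for x in range(-r, r + 1):
        y_max = math.isqrt(r * r - x * x)   # exact integer square root
        total += 2 * y_max + 1
    return total

for r in (10, 100, 1000):
    error = lattice_count(r) - math.pi * r * r
    print(r, lattice_count(r), round(error, 1))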






answered Sep 7 at 19:52

community wiki
Igor Rivin

• 6
  I have already mentioned that in my answer about Dirichlet divisor problem.
  – Wojowu
  Sep 8 at 12:27


                                    up vote
                                    8
                                    down vote













Here is another realm where improvements are small and tend towards an unknown but existing limit $S$, with currently known bounds $1.2748 \leq S \leq 1.5098$, where the upper bound seems not too far from the actual value of $S$.

It is about the smallest possible supremum of the autoconvolution of a nonnegative function supported in a compact interval. The discrete version of this problem (i.e., restricting to piecewise constant functions) yields optimal functions which are closely related to generalized Sidon sets.

A fascinating article is Improved bounds on the supremum of autoconvolutions. As a teaser, have a look at the diagram on page 10.

If the condition "nonnegative" on the function is removed, surprisingly, the supremum may become even smaller.
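A minimal numerical sketch (my own illustration; I am assuming the normalization $f \geq 0$ supported on an interval of length $1/2$ with $\int f = 1$, which I believe is the one used in the cited article, so that $S = \inf_f \sup (f*f)$):

import numpy as np

# Discretize f on a support of length 1/2 and estimate sup (f*f) by a
# Riemann sum; the autoconvolution lives on an interval of length 1.
n = 2000
h = 0.5 / n                       # grid step on the support
f = np.full(n, 2.0)               # uniform density: f = 2, integral = 1
sup_ff = (np.convolve(f, f) * h).max()
print(sup_ff)   # 2.0 for the uniform density; carefully shaped densities
                # push this down toward the upper bound ~1.5098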






edited Sep 8 at 10:55

community wiki
2 revs
Wolfgang

                                            up vote
                                            8
                                            down vote













Let $\phi$ be a univalent (i.e., holomorphic and injective) function on the unit disc. Consider the growth rate of the lengths of the images of the circles $|z|=r$ as $r$ goes to $1$:
$$
\limsup_{r\to 1^-}\frac{\log \int_0^{2\pi}|\phi'(re^{i\theta})|\,d\theta}{\log\frac{1}{1-r}},
$$
and denote by $\gamma$ the supremum of this quantity over all bounded univalent $\phi$.

Beliaev and Smirnov describe the work on upper bounds for $\gamma$, as of 2004:

The conjectural value of $\gamma=B(1)$ is $1/4$, but existing estimates are quite far. The first result in this direction is due to Bieberbach [7], who in 1914 used his area theorem to prove that $\gamma\leq 1/2$. <...> Clunie and Pommerenke in [16] proved that $\gamma\leq 1/2-1/300$. <...> Carleson and Jones [13] <...> used Marcinkiewicz integrals to prove $\gamma<0.49755$. This estimate was improved by Makarov and Pommerenke [43] to $\gamma<0.4886$ and then by Grinshpan and Pommerenke [21] to $\gamma<0.4884$. The best current estimate is due to Hedenmalm and Shimorin [24], who quite recently proved that $B(1)<0.46$.




                                            I guess the latter estimate is still the best as of now.
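As a numerical illustration of the quantity being bounded (my own sketch, not from the cited papers): for a smooth example such as a Möbius map, the image circles have bounded length and the exponent tends to $0$; it is maps onto fractal-boundary domains that push $B(1)$ upward.

import numpy as np

# Length of the image of |z| = r under phi(z) = z / (2 - z), a bounded
# univalent (Moebius) map of the disc; phi'(z) = 2 / (2 - z)^2.
def image_circle_length(r: float, n: int = 4096) -> float:
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = r * np.exp(1j * theta)
    return float((np.abs(2.0 / (2.0 - z) ** 2) * r).sum() * 2.0 * np.pi / n)

for r in (0.9, 0.99, 0.999):
    L = image_circle_length(r)
    print(r, round(L, 4), round(np.log(L) / np.log(1.0 / (1.0 - r)), 4))
# the printed exponents decrease toward 0 as r -> 1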






edited Sep 10 at 13:05

community wiki
2 revs
Kostya_I








                                                     
