Estimate $\ln(x)$ using a Taylor polynomial accurate to a specified precision

The task is to estimate $\ln(x)$ using a Taylor polynomial expansion accurate to a specified precision. After the computation is done, the value is compared to the number that WolframAlpha provides, to check whether the error estimation was made correctly. For $x \ge 0.5$ every series I try behaves as expected, but for $x < 0.5$ there are problems.

Expanding around $a=1$ gives the series below, valid for $x\in(0,2]$:
$$\ln(x)=\sum_{n=1}^{\infty} \frac{(-1)^{n+1}(x-1)^n}{n}$$
Let's say I need to estimate $\ln(0.25)$ so that the absolute error is $\le 0.00005$ (correct to 4 decimal places). WolframAlpha says $\ln(0.25)=-1.38629436\ldots$



I tried this iterative algorithm: https://www.mathworks.com/matlabcentral/answers/uploaded_files/38169/Part%20B.pdf, which uses the next polynomial term as the error bound. Sadly this is not good enough: it stops too early, so the actual error is bigger than $0.00005$.



I also tried the Lagrange error bound as it is presented here: http://math.feld.cvut.cz/mt/txte/3/txe4ea3c.htm; at the end of that document it says that "on $(0,1/2)$, the Lagrange estimate is too generous", so for my value of $0.25$, taking as many terms as the Lagrange error bound prescribes still does no good (I did check that).



So I decided to try this approach: http://math.colorado.edu/~nowi9933/Math2300F17/2300TPRemainderEstimateSol.pdf [page 4, section (d)]



using the $\ln(1+x)$ series, valid for $-1 < x \le 1$:
$$\ln(1+x)=\sum_{n=1}^{\infty} \frac{(-1)^{n+1}x^n}{n}$$
with the Lagrange error bound
$$\frac{(0.25)^{n+1}}{n+1} \le 0.00005$$

Solving it gives $n \ge 5$ terms. And this is also not enough, as adding 5 terms gives $\ln(0.25)\approx-1.2984375$, the last term being $-0.0474609375$.

The above calculations were made with a program in Python, and though the IEEE 754 floating-point format has its flaws, the problem is not in the program but in the math. I'm quite surprised that there is almost no information about this issue on the internet. I understand that one way Wolfram might be getting the value is by just adding as many terms as it can, but I need some trustworthy error-bounding method. Is there some key point I am missing here?
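
To make the failure concrete, here is a minimal Python sketch (my own reconstruction for illustration, not the linked MATLAB code) of the next-term stopping rule applied to the series around $a=1$ at $x=0.25$:

```python
from math import log

def ln_next_term_rule(x, tol=5e-5, max_terms=10_000):
    """Sum ln(x) = sum_{n>=1} (-1)^(n+1) (x-1)^n / n around a = 1, stopping
    once the magnitude of the latest term drops below tol (the naive rule)."""
    total, n = 0.0, 1
    while n <= max_terms:
        term = (-1) ** (n + 1) * (x - 1) ** n / n
        total += term
        if abs(term) < tol:   # "next term" rule -- not a valid remainder bound here
            break
        n += 1
    return total, n

approx, terms = ln_next_term_rule(0.25)
print(terms, approx, abs(approx - log(0.25)))
# Stops after 24 terms with an actual error of about 1.1e-4 > 5e-5: with
# x - 1 = -0.75 every term has the same sign, the series is not alternating,
# and the size of the next term says nothing rigorous about the tail.
```
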
      taylor-expansion

asked Aug 31 at 4:53 by wonderwhy (edited Aug 31 at 13:45)

2 Answers

I would instead use the expansion
$$\log \left(\frac{1-x}{1+x}\right)=-2 \sum_{n=0}^{\infty} \frac{x^{2n+1}}{2n+1}$$ and take $x=\frac 35$.
$$S_p=-2\sum_{n=0}^{p} \frac{\left(\frac 35\right)^{2n+1}}{2n+1}$$ and compute
$$\left(
\begin{array}{ccc}
p & S_p & \log \left(\frac 14\right) -S_p \\
0 & -1.200000000 & -0.186294361 \\
1 & -1.344000000 & -0.042294361 \\
2 & -1.375104000 & -0.011190361 \\
3 & -1.383102171 & -0.003192190 \\
4 & -1.385341659 & -0.000952702 \\
5 & -1.386001290 & -0.000293071 \\
6 & -1.386202224 & -0.000092137 \\
7 & -1.386264916 & -0.000029446 \\
8 & -1.386284829 & -0.000009532
\end{array}
\right)$$
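
As a quick numerical check (my own Python sketch, not part of the answer), the table above can be reproduced, and a rigorous stopping rule is available for this series: the terms shrink at least geometrically, so the tail after the $p$-th term is at most $|t_p|\cdot x^2/(1-x^2)$, which for $x=3/5$ is $0.5625\,|t_p|$ (this bound is my own addition, not a claim made in the answer).

```python
from math import log

# Partial sums of ln((1-x)/(1+x)) = -2 * sum_{n>=0} x^(2n+1)/(2n+1) at x = 3/5,
# which equals ln(1/4).  The geometric tail bound |term| * x^2/(1 - x^2) is an
# assumption of this sketch, not something stated in the answer.
x, target, tol = 3 / 5, log(0.25), 5e-5
s = 0.0
for p in range(20):
    term = -2 * x ** (2 * p + 1) / (2 * p + 1)
    s += term
    tail_bound = abs(term) * x * x / (1 - x * x)
    print(f"p={p}  S_p={s:.9f}  ln(1/4)-S_p={target - s:.9f}  tail<={tail_bound:.2e}")
    if tail_bound < tol:   # from here on |ln(1/4) - S_p| is guaranteed below 5e-5
        break
# stops at p = 7, matching the table row with true error -0.000029446
```
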

answered Aug 31 at 6:20 by Claude Leibovici (accepted)

• Can you please elaborate on why you chose this series and why it is better? I may have put too much emphasis on Wolfram, and I have now edited the question so it is not misleading. We do not know the actual value of $\ln(x)$ before computing the polynomial; Wolfram only serves as a reference against which to verify that the error estimation was made correctly. As it turns out, comparing the next term to the error tolerance is enough in the case of this series (at least the Python program seems to work). Is this right, or should I look into the Lagrange remainder?
  – wonderwhy
  Aug 31 at 13:53

• And could you please show the interval of convergence?
  – wonderwhy
  Aug 31 at 14:39

A more efficient technique is to avoid Maclaurin/Taylor series (since they tend to converge slowly if not suitably accelerated) and use Beukers-like integrals instead. You are just interested in a tight approximation for $-2\log 2$, and

$$ \int_{0}^{1}\frac{x^m(1-x)^m}{1+x}\,dx $$
for any $m\in\mathbb{N}$ is a number of the form $q_m \pm 2^m\log(2)$ with $q_m\in\mathbb{Q}$, and also an element of $\left(0,\frac{1}{4^m}\right)$.
By choosing $m=6$ we get that $\color{red}{\frac{19519}{28160}}$ is a lower bound for $\log(2)$ within an error $<4\cdot 10^{-6}$ and we are done. Even better, we may consider
$$ \int_{0}^{1}\frac{P_6(2x-1)}{x+1}\,dx $$
with $P_n$ being a Legendre polynomial, to derive that $\color{red}{\frac{62307}{89890}}$ is an approximation for $\log(2)$ within an error $<7\cdot 10^{-10}$.
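
To make the first claim concrete, here is a small Python sketch (my own, under my reading of the answer): dividing $x^m(1-x)^m$ by $(1+x)$ gives $x^m(1-x)^m=(1+x)Q(x)+(-2)^m$, so the integral equals $\int_0^1 Q(x)\,dx+(-2)^m\log 2$ and lies in $\left(0,4^{-m}\right)$, which traps $\log 2$ in a rational interval of width $4^{-m}/2^m$. Exact rational arithmetic then recovers the quoted fraction for $m=6$.

```python
from fractions import Fraction
from math import comb, log

def beukers_log2_interval(m):
    """Rational lower/upper bounds for log 2 from the integral of
    x^m (1-x)^m / (1+x) over (0, 1), computed with exact arithmetic
    (a sketch of my reading of the argument above)."""
    deg = 2 * m
    # coefficients of x^m (1-x)^m = sum_k C(m,k) (-1)^k x^(m+k), highest degree first
    coeffs = [Fraction(0)] * (deg + 1)
    for k in range(m + 1):
        coeffs[deg - (m + k)] = Fraction((-1) ** k * comb(m, k))
    # synthetic division by (x + 1): quotient coefficients b, remainder (-2)^m
    b, prev = [], Fraction(0)
    for a in coeffs:
        prev = a - prev
        b.append(prev)
    remainder = b.pop()                                   # equals (-2)**m
    int_q = sum(c / (deg - i) for i, c in enumerate(b))   # exact integral of Q on (0,1)
    # 0 < int_q + remainder * log 2 < 4**(-m)  =>  rational bounds on log 2
    e0 = -int_q / remainder
    e1 = (Fraction(1, 4 ** m) - int_q) / remainder
    return min(e0, e1), max(e0, e1)

lo, hi = beukers_log2_interval(6)
print(lo)                   # 19519/28160 -- the fraction quoted above
print(float(log(2) - lo))   # about 8.7e-7, below the stated 4e-6
print(float(hi - lo))       # guaranteed width 1/(2**6 * 4**6), about 3.8e-6
```
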

answered Aug 31 at 7:10 by Jack D'Aurizio♦ (edited Aug 31 at 7:16)