Numerical computation and limits of a function
There's numerical differentiation and numerical integration, but are there numerical methods for calculating limits? Or is it only possible via symbolic computation?
I've seen limit calculators like this.
But, in general, is there a need to compute limits of functions in real-life computer programming?
Tags: limits, numerical-methods
asked Apr 9 '12 at 0:14 by DrStrangeLove
Hm. Did you think about using Newton's method? Not sure how to formulate it though. – user2468, Apr 9 '12 at 1:12
4 Answers
One can certainly do a bit better than Thomas's initial suggestion; there are a number of so-called sequence transformations that are used for estimating the limits of discrete sequences. (For continuous limits, one will of course have to discretize; e.g. for something like $\lim\limits_{h\to 0}f(h)$, one might work with the discrete sequences $f(1/k)$ or $f(2^{-k})$ for integer $k$.) Two of the more common methods are Richardson extrapolation, which is based on constructing an interpolating polynomial from the members of your sequence, and the Shanks transformation, which is essentially a transformation of a power series into an approximating rational function. I had already discussed these here and here, and used these methods in this example, so I won't be repeating myself. See those answers for references to books and other literature.
Here's a quick demonstration of Richardson extrapolation, using the example $\lim\limits_{x\to 0}\dfrac{\sin x}{x}$. Using the discretization $x=2^{-k}$, $k$ a positive integer (the trick is to pick a discrete sequence that approaches the limiting value, in this case $0$, at an "appropriate" rate), we have the short table
$$\begin{array}{ccc}k&2^{-k}&2^k\sin\frac{1}{2^k}\\2&0.25&0.989615837018\\3&0.125&0.997397867082\\4&0.0625&0.999349085478\\5&0.03125&0.999837247530\\6&0.015625&0.999959310393\\7&0.0078125&0.999989827505\end{array}$$
Note that the last approximation is only good to four or so digits.
To do Richardson extrapolation here, we take these six points and form the fifth-degree (why?) interpolating polynomial passing through these points through the usual methods (Lagrange, Newton). Having done so, we evaluate the interpolating polynomial thus constructed at $0$ (which is in fact the value that the sequence $2^{-k}$ approaches). The result of this extrapolation (since $0$ is not within the range of the points we are interpolating) gives a result good to eleven digits (error $1.5\times10^{-12}$), quite a bit better than the estimates we started from.
Comparable results are obtained if we use the discretization $x=k^{-2}$ instead (error $7.8\times10^{-12}$). In general, however, one will have to experiment with various "auxiliary sequences", since not all sequences can always give results as good as in the example above.
Here's a Mathematica one-liner using Richardson extrapolation for the limit considered above:
InterpolatingPolynomial[Table[N[{2^-k, Sinc[2^-k]}, 20], {k, 2, 7}], 0]
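For readers without Mathematica, here is a minimal Python sketch of the same idea (my own illustration; the function name is made up, and Neville's algorithm simply evaluates the interpolating polynomial at $0$ without building it explicitly):

from math import sin

def neville_at_zero(xs, ys):
    # Neville's scheme: evaluate, at x = 0, the unique polynomial
    # that interpolates the points (xs[i], ys[i]).
    p = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - j):
            p[i] = (xs[i] * p[i + 1] - xs[i + j] * p[i]) / (xs[i] - xs[i + j])
    return p[0]

xs = [2.0**-k for k in range(2, 8)]   # the auxiliary sequence 2^-k, k = 2..7
ys = [sin(x) / x for x in xs]         # samples of f(x) = sin(x)/x near 0
print(neville_at_zero(xs, ys))        # ~1.0, far more accurate than ys alone

The raw samples are good to only four or five digits, while the extrapolated value should agree with the true limit $1$ to many more (the exact count depends on floating-point roundoff).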
answered Apr 10 '12 at 8:15 by J. M. is not a mathematician (edited Apr 13 '17 at 12:20 by Community)
I have always just evaluated the function at values sufficiently close to the limit point, which gives an approximation of the limit. To evaluate limits the way the calculator you linked to does would require a lot of coding work, since the program would need to parse the input and decide what to do with it.
Limits are often necessary in numerical analysis, as they are used to compute many things: derivatives, integrals, constants, transcendental and other functions, etc. For example, the decimal value of $\sqrt2$ cannot be computed with 100% accuracy, as it has an infinite number of decimal places. However, it can be estimated to sufficient accuracy with numerical methods.
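As a concrete sketch of this (my own illustration, echoing the comment above about Newton's method): the Newton iterates for $x^2-2=0$ form a sequence whose limit is $\sqrt2$, and we stop once successive iterates agree to the tolerance we care about.

# Newton's method for f(x) = x^2 - 2; the iterates converge to sqrt(2).
x = 1.0
while True:
    x_next = 0.5 * (x + 2.0 / x)   # Newton step: x - f(x)/f'(x)
    if abs(x_next - x) < 1e-12:    # stop when the sequence has settled
        break
    x = x_next
print(x_next)  # 1.4142135623730951, i.e. sqrt(2) to double precision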
I hope this helps.
answered Apr 9 '12 at 0:45 by Argon (edited Apr 9 '12 at 0:58)
A very simple way of calculating $\lim_{x\to x_0} f(x)$ numerically would be to approach the limit point $x_0$ by smaller and smaller steps, and just stop whenever the accuracy of the obtained value is high enough.
For example, the following suffices to calculate $\lim_{x\to 0^+} \frac{\sin x}{x}$ numerically (in Python):
from math import sin

x = 1.0
dx = 0.1     # current step size
dlim = 1.0   # change in the estimate between steps
prev = 0.0   # previous estimate of the limit
while abs(dlim) > 1e-6:
    if x <= dx:
        dx *= 0.1        # shrink the step as we get close to x = 0
    x -= dx
    lim = sin(x) / x     # current estimate of the limit
    dlim = lim - prev
    prev = lim           # remember it for the next comparison
This algorithm will take smaller and smaller steps and approach $x=0$ from above, until the obtained limit value changes by less than the required accuracy. Of course, in this case you could really start with $x=0.1$ and $dx = 0.01$, since we know that nothing interesting happens until very close to $x=0$, where sooner or later the division by zero will bite (Python raises an error; languages with IEEE float semantics return NaN or Inf)...
Note: I thought this up off the top of my head, so it might not be an optimal approach - especially not for an arbitrary, unknown function (which you can evaluate, but whose behavior you don't know...). Also, I cannot really see a use case where you'd need to do this.
answered Apr 9 '12 at 1:06 by Tomas Lycken
Regarding Limits at Infinity
If the limit of a function $f(x)$ as $x$ tends to $+\infty$ exists, it equals the limit of the sequence $\left(f_n\right)_{n\in\mathbb{N}}$ with $f_n = f(n)$. Hence, you can use sequence transformation methods to improve the rate of convergence, as referenced in @j-m-is-not-a-mathematician's answer. Sequence acceleration methods include:
The Shanks transformation, $S(f_n)=\dfrac{f_{n+1}f_{n-1}-f_n^{2}}{f_{n+1}-2f_n+f_{n-1}}$ (a Python sketch follows this list).
Aitken's delta-squared process.
Richardson's transformation, as described in J. M.'s answer. This method is used if $f_n$ depends on a small parameter (such as in the case of an integral).
Euler's series transform. This method may be used if $f_n$ can be expressed as an alternating series.
Series acceleration methods are further discussed in this Stack Exchange question.
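Here is a minimal Python sketch of the Shanks transformation (my own illustration; the choice of the Leibniz series for $\pi$ as the test sequence is arbitrary, picked only because its partial sums converge slowly):

from math import pi

def shanks(f):
    # Shanks transformation of a sequence f:
    # S(f_n) = (f_{n+1} f_{n-1} - f_n^2) / (f_{n+1} - 2 f_n + f_{n-1})
    return [(f[n + 1] * f[n - 1] - f[n] ** 2)
            / (f[n + 1] - 2 * f[n] + f[n - 1])
            for n in range(1, len(f) - 1)]

# Partial sums of the Leibniz series 4(1 - 1/3 + 1/5 - ...) -> pi.
f, s = [], 0.0
for n in range(12):
    s += 4.0 * (-1) ** n / (2 * n + 1)
    f.append(s)

print(abs(f[-1] - pi))           # error of the raw partial sums (~0.08)
print(abs(shanks(f)[-1] - pi))   # noticeably smaller after one transformation

The transformation can be applied repeatedly, e.g. shanks(shanks(f)), for further acceleration, as long as enough terms of the sequence remain.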
answered Aug 26 at 19:29 by Jam (edited Aug 26 at 19:36)