Proving that $\lim_{t\to\infty} r(t) = 1$
In one of my recent answers, I claim the following:
Since $\dot r > 0$ when $0 < r < 1$, and $\dot r < 0$ when $r > 1$, we may conclude that any solution starting at $(x_0,y_0)$ with $x_0^2+y_0^2 > 0$ will be attracted to the unit circle, so that
$$\lim_{t \to \infty} r(t) = 1.$$
To briefly put this into context, we are working in polar coordinates, $r:\mathbb{R}\to [0,\infty)$ is the radius function defined by $r(t)^2 = x(t)^2+y(t)^2$, and it can be assumed to be smooth for the purposes of this question.
Of course, the result is intuitively clear, but I was unable to formulate a rigorous proof of this claim.
The thing that confuses me here is that we know the behaviour of the derivative depending on the value of $r$, rather than on the parameter $t$, so I am not sure how to use the information about the derivative in connection to the limit.
Could someone give me a hint on where/how to start?
EDIT: Just to be fully clear, I would like to prove the following:
Suppose that $r:\mathbb{R}\to(0,\infty)$ is smooth and satisfies
\begin{align}
(1) \quad \dot r > 0 \quad &\text{for} \quad r < 1,\\
(2) \quad \dot r < 0 \quad &\text{for} \quad r > 1.
\end{align}
Then
$$\lim_{t\to\infty} r(t) = 1.$$
EDIT 2:
As pointed out in one of the answers, the claim is not true as stated.
Tags: calculus, real-analysis, limits
The differential equation you are considering is $\dot r = r(1-r^2)$?
– Fakemistake
Aug 24 at 8:25
@Fakemistake Yes. Is that relevant though?
– Sobi
Aug 24 at 8:27
In general: if $\dot r = f(r)$ is given, then the roots of $f$ are (constant) solutions of the differential equation. Any other solution converges to one of the constant solutions.
– Fakemistake
Aug 24 at 8:31
@Fakemistake Yes, I am aware of that. As I said, this is intuitively clear as well. But my question concerns proving this claim, e.g. using the $\epsilon$-$\delta$ definition of the limit or via other properties of limits.
– Sobi
Aug 24 at 8:33
Well, one way is to consider the solution of the given differential equation (Wolfram Alpha provides one) and then take the limit, but I guess this is not what you want.
– Fakemistake
Aug 24 at 8:38
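For reference, the explicit-solution route suggested in the last comment can be carried out by hand (a sketch, assuming the equation $\dot r = r(1-r^2)$ confirmed above, with an initial value $r(t_0) = r_0 > 0$). The substitution $u = r^2$ turns the equation into the logistic equation $\dot u = 2r\dot r = 2u(1-u)$, whose solution with $u(t_0) = r_0^2$ is
$$
u(t) = \frac{r_0^2}{r_0^2 + (1-r_0^2)\,e^{-2(t-t_0)}},
$$
where the denominator stays positive for $t \ge t_0$. Hence
$$
r(t) = \left( \frac{r_0^2}{r_0^2 + (1-r_0^2)\,e^{-2(t-t_0)}} \right)^{1/2} \xrightarrow[t\to\infty]{} 1
$$
for every $r_0 > 0$, since the exponential term vanishes in the limit.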
2 Answers
Accepted answer (score 3)
This is wrong as it stands. Consider $r(t)=\frac{1}{10}\arctan t$. Then $r(t)<1$ for all $t$ and $\dot r(t)>0$ for all $t$, but $r(t)\not\to 1$ as $t\to\infty$.
You will need $r$ to satisfy some other conditions, for example a suitable ODE.
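Spelled out (a quick check of the stated hypotheses against this example):
$$
\dot r(t) = \frac{1}{10(1+t^2)} > 0 \quad\text{and}\quad r(t) < \frac{\pi}{20} < 1 \quad\text{for all } t, \qquad \lim_{t\to\infty} r(t) = \frac{\pi}{20} \neq 1,
$$
so condition $(1)$ holds everywhere, condition $(2)$ holds vacuously, and yet the limit is not $1$.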
Nice example! What if we added the condition $\dot r = 0$ at $r = 1$?
– Sobi
Aug 24 at 11:19
That is true in my example: whenever $r(t)=1$ (i.e. never), $\dot r=0$.
– Kusma
Aug 24 at 11:20
Oh, indeed! That came as a surprise to me, thanks!
– Sobi
Aug 24 at 11:22
Answer (score 1)
One way to approach this problem might be the following: if $r(t) = \alpha \in (0, 1)$, say, you may compute the derivative at this point. It will be some $\epsilon$ away from $0$. You want to prove that over a sufficiently large interval it remains away from $0$, so that the solution increases (by the differential equation, $r$ will stay below $1$ for all time: to climb above $1$ it would need a positive derivative at some point where $r > 1$, which the equation forbids).
So write down the inequality
$$
r(1-r^2) \le \epsilon/2.
$$
What condition must $r$ satisfy so that this holds?
EDIT: For the general case, see my comment below.
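To unwind the hinted inequality (a sketch, assuming as above that $r \ge \alpha > 0$ along the piece of the solution under consideration, and that $0 < \epsilon < 2\alpha$, which in particular holds for $\epsilon = \alpha(1-\alpha^2)$): if $r(1-r^2) \le \epsilon/2$ and $r \ge \alpha$, then either $r \ge 1$ already, or
$$
1 - r^2 \le \frac{\epsilon}{2r} \le \frac{\epsilon}{2\alpha},
\qquad\text{hence}\qquad
r \ge \sqrt{1 - \frac{\epsilon}{2\alpha}}.
$$
So the derivative can only have dropped to $\epsilon/2$ once $r$ is already that close to $1$.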
Should the result not hold regardless of the differential equation behind all of this? That is, if we only know that $\dot r > 0$ for $0 < r < 1$ and $\dot r < 0$ for $r > 1$, can we not still say that $\lim_{t\to\infty} r(t) = 1$?
– Sobi
Aug 24 at 8:41
Yes, you may use that $r(t)$ increases monotonically until it has reached $1$, so the limit exists by the convergence of monotonically increasing bounded functions. If it's not $1$, it's strictly smaller. But then the derivative will always be strictly larger than something, and you converge to infinity.
– AlgebraicsAnonymous
Aug 24 at 8:50
That seems like a good idea. Maybe you could slightly formalize your contradiction argument and edit your answer so that I can mark it as an answer?
– Sobi
Aug 24 at 8:58
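One way to formalize the contradiction argument sketched in the comment above (a sketch, assuming the specific equation $\dot r = r(1-r^2)$ and an initial value $r(t_0) = r_0 \in (0,1)$): since $r \equiv 1$ is itself a solution and the right-hand side is smooth, uniqueness gives $r(t) < 1$ for all $t \ge t_0$; there $\dot r > 0$, so $r$ is increasing and bounded above by $1$, and $L := \lim_{t\to\infty} r(t)$ exists with $r_0 \le L \le 1$. If $L < 1$, then for all $t \ge t_0$,
$$
\dot r(t) = r(t)\bigl(1 - r(t)^2\bigr) \ge r_0 \bigl(1 - L^2\bigr) =: c > 0,
$$
so $r(t) \ge r_0 + c\,(t - t_0) \to \infty$, contradicting $r < 1$. Hence $L = 1$.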