How Euler arrived at power series for $a^x$ and $\ln x$
I'm reading a book that reproduces Euler's arguments, and I have a few questions about some of the steps. Below are parts of the argument:
Let $a > 1$. Consider an "infinitely small quantity" $\omega$.
$a^\omega \approx 1$.
Let $a^\omega = 1 + \psi$, for $\psi$ an "infinitely small number".
Then, wishing to relate $\psi$ and $\omega$, he sets $\psi = k\omega$ for a real number $k$.
So we have $a^\omega = 1 + k\omega$.
At this point apparently Euler computed some examples:
for $a = 10$ and $\omega = 0.000001$, $k \approx 2.3026$;
and for $a = 5$ and $\omega = 0.000001$, $k \approx 1.60944$.
He then concluded that $k$ is a finite number that depends on the value of the base $a$.
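As a sanity check (mine, not the book's), the $k$ values are easy to reproduce in a couple of lines of Python by solving $a^\omega = 1 + k\omega$ for $k$ at Euler's $\omega = 10^{-6}$; they land right next to $\ln a$:

```python
import math

# Sketch (my addition, not from the book): solve a^omega = 1 + k*omega
# for k at Euler's omega = 10^-6 and compare with ln(a).
omega = 1e-6
for a in (10, 5):
    k = (a**omega - 1) / omega
    print(f"a = {a:2d}: k = {k:.5f},  ln(a) = {math.log(a):.5f}")
# a = 10: k = 2.30259,  ln(a) = 2.30259   (Euler rounded to 2.3026)
# a =  5: k = 1.60944,  ln(a) = 1.60944
```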
Now for a finite number $x$ he sought the expansion of $a^x$. To do this he said let $j = \frac{x}{\omega}$ and expressed $x$ as $x = \omega j$, and continued.
After he succeeded in finding an expansion for $a^x$ he sought the expansion for the natural logarithm (the inverse function of $a^x$ where the base $a$ is the one for which $k = 1$, in our (and Euler's) notation $a = e$).
1) How should one think of infinitely small and infinitely large numbers?
2) It's not clear to me that the value of $k$ in the derivation of a power series for $a^x$ doesn't also depend on $\omega$. For $a = 10$, if we take $\omega$ to be a different (small) value, it's not clear to me that $k$ wouldn't change. Unless the idea is that we let $\omega$ go to $0$, and in the limit $a^\omega = 1 + k\omega$?
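Regarding 2), a quick numerical experiment (again my addition, not the book's) suggests that $k$ does depend on $\omega$ for any finite $\omega$, but the dependence shrinks linearly as $\omega \to 0$, leaving $\ln a$ in the limit:

```python
import math

# Sketch: how much does k = (a^omega - 1)/omega depend on omega?
# math.expm1 computes e^t - 1 without catastrophic cancellation,
# which matters once omega is tiny.
a = 10
for omega in (1e-3, 1e-6, 1e-9, 1e-12):
    k = math.expm1(omega * math.log(a)) / omega
    print(f"omega = {omega:.0e}:  k - ln(a) = {k - math.log(a):.3e}")
# The gap shrinks roughly like (ln a)^2 * omega / 2, i.e. linearly in omega.
```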
3) It's not clear to me that a finite positive number $x$ can be expressed as $x = \omega j$ for some $j$, since $\omega$ is a mysterious "infinitely small" quantity.
4) It's not clear to me that there should exist, a priori, a unique base value $a$ for which $k = 1$, which Euler seems to assume. I suppose the expansion of $a^x$ is in terms of $k$, and setting $x = 1$ and $k = 1$ we can compute the base $a$ for which $k = 1$ and see that it is the value we take our constant $e$ to be. Is this how Euler could have known there exists such a base?
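Regarding 4), the condition $k = 1$ can at least be unwound numerically (a sketch, my addition): $a^\omega = 1 + \omega$ forces $a = (1+\omega)^{1/\omega}$, and the expansion of $a^x$ with $k = 1$, evaluated at $x = 1$, gives the series $\sum_n 1/n!$; both land on the same constant:

```python
import math

# Sketch: the base a with k = 1 satisfies a^omega = 1 + omega,
# i.e. a = (1 + omega)^(1/omega); for small omega this pins down e.
omega = 1e-8
a = (1 + omega) ** (1 / omega)
print(a)                      # 2.71828...

# The expansion of a^x with k = 1, evaluated at x = 1, is sum 1/n!.
e_series = sum(1 / math.factorial(n) for n in range(20))
print(e_series, math.e)       # both 2.718281828...
```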
In his derivation for the expansion of $\ln(1+x)$, he writes:
Thus for "infinitely small $\omega$", $e^\omega = 1 + \omega$.
Thus $\ln(1 + \omega) = \omega$.
So $j\omega = \ln\big((1 + \omega)^j\big)$. But $\omega$, although infinitely small, is positive, so the larger the number chosen for $j$, the more $(1+\omega)^j$ will exceed $1$.
So for any positive $x$, we can find $j$ so that $x = (1 + \omega)^j - 1$.
From this he concludes that $1 + x = (1 + \omega)^j$, and so $\ln(1 + x) = j\omega$. And since $\ln(1 + x)$ is finite and $\omega$ is infinitely small, $j$ must be infinitely large.
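Unwinding that last step numerically (my sketch, not the book's): from $1 + x = (1+\omega)^j$ one gets $\omega = (1+x)^{1/j} - 1$, so $j\omega = j\big((1+x)^{1/j} - 1\big)$, which visibly converges to $\ln(1+x)$ as $j$ grows:

```python
import math

# Sketch: from 1 + x = (1 + omega)^j, the omega forced by a choice of j
# is omega = (1 + x)^(1/j) - 1, and Euler's ln(1 + x) = j*omega becomes
# the limit of j*((1 + x)^(1/j) - 1) as j -> infinity.
x = 3.0
for j in (10, 1_000, 100_000, 10_000_000):
    omega = (1 + x) ** (1 / j) - 1
    print(f"j = {j:>10,}:  j*omega = {j * omega:.10f}")
print(f"ln(1 + x)        = {math.log(1 + x):.10f}")
```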
5) In deriving an expansion for $\ln(1+x)$, Euler argues that $\omega$, although infinitely small, is positive, so the larger the number chosen for $j$, the more $(1+\omega)^j$ will exceed $1$. So for any positive $x$, we can find $j$ so that $x = (1 + \omega)^j - 1$.
This makes the infinitely-small notion even more confusing: $(1 + \omega)^j$ can be made arbitrarily large by raising $1 + \omega$ to higher powers, so $\omega$ contributes a nonzero amount; how can it then be infinitely small? It turns out that $j$ must be infinitely large, yet we were told $(1 + \omega)^j$ is larger when a larger number is chosen for $j$. How can a number larger than an "infinitely large" number be chosen?
sequences-and-series math-history infinitesimals
asked Aug 17 at 2:02 by trynalearn, edited Aug 17 at 2:09 by gt6989b
What is the book?
– Jair Taylor
Aug 17 at 2:19
I think you could think of infinitesimals in terms of limits. If you have $\lim_{x\to a}f(x)=L$ then you could imagine that there is a putative $x$ arbitrarily close to $a$ such that $f(x)=L$. Or that $x$ is somehow the "smallest real number that is greater than $a$". We now know that such a number doesn't exist, but it could work conceptually, if not rigorously. And if it's an Euler argument, it's bound to work conceptually but not rigorously. Of course, other rigorous arguments are available.
– Jam
Aug 17 at 2:37
@JairTaylor "Euler: The Master of Us All" (this was in the section Euler and Logarithms)
– trynalearn
Aug 17 at 3:28
2 Answers
Standards of mathematical rigor were drastically different in Euler's time, especially when dealing with infinite and infinitesimal elements. For a really good illustration of how different the rules were understood to be, you might want to read the following article:
Grabiner, J. (1974). Is Mathematical Truth Time-Dependent? The
American Mathematical Monthly, 81(4), 354-365. doi:10.2307/2318997.
(online at
https://www.jstor.org/stable/2318997?seq=1#page_scan_tab_contents).
Grabiner's thesis is that there was a massive transformation in the norms of what constituted "proof" (and hence what constituted "truth") in the late 19th century. She illustrates her thesis with an examination of a proof of Euler's very similar to the one you ask about. Grabiner writes:
To establish what eighteenth-century mathematical practice was like,
let us first look at a brilliant derivation of a now well-known
result. Here is how Leonhard Euler derived the infinite series for the
cosine of an angle. He began with the identity $$(\cos z + i\sin z)^n = \cos nz + i \sin nz.$$
He then expanded the left-hand side of the equation according to the
binomial theorem. Taking the real part of that binomial expansion and
equating it to $\cos nz$, he obtained $$\cos nz = (\cos z)^n - \frac{n(n-1)}{2!}(\cos z)^{n-2}(\sin z)^2 + \frac{n(n-1)(n-2)(n-3)}{4!}(\cos z)^{n-4}(\sin z)^4 - \cdots$$ Let $z$
be an infinitely small arc, and let $n$ be infinitely large. Then:
$$\cos z = 1, \quad \sin z = z, \quad n(n-1) = n^2, \quad n(n-1)(n-2)(n-3) = n^4, \text{ etc.}$$ The equation now becomes recognizable: $$\cos nz = 1 - \frac{n^2 z^2}{2!} + \frac{n^4 z^4}{4!} - \cdots$$ But since $z$ is
infinitely small and $n$ infinitely large, Euler concludes that $nz$
is a finite quantity. So let $nz = v$. The modern reader may be left
slightly breathless; still, we have $$\cos v = 1 - \frac{v^2}{2!} + \frac{v^4}{4!} - \cdots$$ Now that we have worked through one example,
we shall be able to appreciate some generalizations about the way many
eighteenth-century mathematicians worked. First, the primary emphasis
was on getting results. All mathematicians know many of the results
from this period, results which bear the names of Leibniz, Bernoulli,
L'Hospital, Taylor, Euler, and Laplace. But the chances are good that
these results were originally obtained in ways utterly different from
the ways we prove them today. It is doubtful that Euler and his
contemporaries would have been able to derive their results if they
had been burdened with our standards of rigor. Here, then, is one
major difference between the eighteenth-century way of doing
mathematics and our way.
Grabiner identifies two key features of mathematical work of Euler's era:
- "The primary emphasis was on getting results... For eighteenth-century mathematicians, the end justified the means." (p. 356)
- "Mathematicians placed great reliance on the power of symbols. Sometimes it seems to have been assumed that if one could just write down something which was symbolically coherent, the truth of the statement was guaranteed." (p. 356)
The rest of the article describes when, why and how the standards of mathematical proof began to change, and you should definitely take a look at it. Grabiner (1974) is well-known in the Math Ed research community, but I think it is under-recognized among contemporary mathematicians.
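Euler's derivation as Grabiner presents it is also easy to test numerically. A sketch (mine, not Grabiner's): take a large $n$, set $z = v/n$ as the stand-in for the infinitely small arc, and compare the real part of $(\cos z + i\sin z)^n$ with both $\cos v$ and the truncated series:

```python
import math

# Sketch: check Euler's cosine derivation with a large n and z = v/n.
v = 1.2
n = 10**6                                  # stand-in for "infinitely large" n
z = v / n                                  # stand-in for the "infinitely small" arc
lhs = complex(math.cos(z), math.sin(z)) ** n        # (cos z + i sin z)^n
series = sum((-1)**m * v**(2*m) / math.factorial(2*m) for m in range(10))
print(lhs.real, series, math.cos(v))       # all three ~ 0.3623577...
```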
answered Aug 17 at 3:19 by mweiss
Short answer. You ask
How should one think of infinitely small and infinitely large numbers?
to which my answer is
Don't rely on naive formal manipulations, unless you happen to be Euler.
Instead
use modern definitions of limits
or, if you're brave,
develop nonstandard analysis.
Edit in response to a comment.
Essentially, I agree with @JairTaylor. We all think naively before rigorizing. I meant my answer as a tribute to Euler, who thought his way through to correct conclusions before rigor in analysis was invented.
answered Aug 17 at 2:16, edited Aug 17 at 11:57 by Ethan Bolker
I disagree. Sometimes naive formal manipulations can be very useful, especially when thinking creatively. And often, they can be made rigorous later.
– Jair Taylor
Aug 17 at 2:24
What is meant by "naive formal manipulations"? Also can Euler's argument be made rigorous by simply replacing these "infinitely large/small" quantities by limits or objects of nonstandard analysis? Or do his arguments fail to "generalize".
– trynalearn
Aug 17 at 7:04
@JairTaylor Thanks. See my edit.
– Ethan Bolker
Aug 17 at 11:58
@trynalearn The naive manipulations were the ones of Euler's that you describe. They are still useful today. They can be made rigorous either with $\epsilon$-$\delta$ limit arguments or nonstandard analysis.
– Ethan Bolker
Aug 17 at 12:00