General solution to a system of ODEs
I came across the following formula for the general solution of a system of first-order ODEs of the form $x' = Ax$:
$$\sum_{i=1}^{N} \sum_{j=1}^{m} b_{i,j}\, e^{\lambda_i t} \left( \sum_{k=0}^{m-1} \frac{t^k}{k!} (A - \lambda_i)^k \right) v_{i,j}$$
where $N$ is the number of distinct eigenvalues and $m$ their respective algebraic multiplicity.
It looks odd to me that every sum over $k$ is equally long; doesn't that imply that every chain, and thus every block, is of the same size?
linear-algebra differential-equations
edited Aug 15 at 16:53
asked Aug 12 at 7:42
user561840
1 Answer
They're not the same length, because $m$ is not truly a constant. You said it yourself: $m$ is the multiplicity of a given eigenvalue $\lambda_i$, so its value is a function of $i$ (or of $\lambda_i$ depending on your perspective). For this reason it should be written as $m_i$ or $m(\lambda_i)$.
Edit: As you note, it is possible to make the sums over $k$ shorter by choosing the $v_{i,j}$ carefully (i.e. choosing elements of $\ker(A-\lambda_i)$ whenever possible, then elements of $\ker(A-\lambda_i)^2$, and so forth). It's not strictly necessary, however, and this form for the solution avoids getting into that. It trades simplicity of solutions for simplicity of presentation.
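For a concrete illustration (an example added here, not part of the original answer): take $A = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}$, whose only eigenvalue $\lambda = 2$ has algebraic multiplicity $m = 2$, so the generalized eigenspace is all of $\mathbb{R}^2$. With the chain basis $v_1 = e_1$, $v_2 = e_2$ we have $(A - 2I)e_1 = 0$, so the sum over $k$ for $e_1$ could stop at $k = 0$, while the sum for $e_2$ genuinely needs $k = 0, 1$ because $(A - 2I)e_2 = e_1$. The formula simply runs both sums up to $k = m - 1 = 1$; the extra term for $e_1$ is zero.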
Basically you got confused here because you know too much linear algebra already :-)
Here's a full description of the solution for posterity:
- The first summation, $\sum_{i=1}^{N}$, is over the distinct eigenvalues $\lambda_1, \ldots, \lambda_N$. (This may include complex eigenvalues.) Each $\lambda_i$ has a generalized eigenspace $V_i$, consisting of those vectors $v \in \mathbb{R}^n$ (or $\mathbb{C}^n$ if complex eigenvalues are necessary) where $(A - \lambda_i)^k v = 0$ for some $k$.
- The second summation, $\sum_{j=1}^{m(i)}$, is over any basis $v_{i,1}, \ldots, v_{i,m(i)}$ for $V_i$ (so $m(i) = \dim V_i$). Together the set of vectors $\{ v_{i,j} : 1 \le i \le N \text{ and } 1 \le j \le m(i) \}$ is a basis for all of $\mathbb{R}^n$ [$\mathbb{C}^n$].
- The constants $b_{i,j} \in \mathbb{R}$ [$\mathbb{C}$] are chosen so that $x(0) = \sum_{i=1}^{N} \sum_{j=1}^{m(i)} b_{i,j} v_{i,j}$ (which can always be done uniquely; that's what it means for the $v_{i,j}$ to form a basis).
- The third sum, $\sum_{k=0}^{m(i)-1} \frac{t^k}{k!} (A - \lambda_i)^k v_{i,j}$, is simply equal to $\exp((A - \lambda_i)t) v_{i,j}$. This works because $\exp((A - \lambda_i)t) v_{i,j}$ can be written as the infinite series $\sum_{k=0}^{\infty} \frac{t^k}{k!} (A - \lambda_i)^k v_{i,j}$, but all the terms where $k \ge m(i)$ vanish. (Side note: We know from the definition of the generalized eigenspace that the terms of the series must vanish eventually, and in fact they must vanish by term $m(i)$. This can be seen by considering the sequence of kernels of $(A - \lambda_i)^k$ as $k$ ranges over the positive integers; each is a subspace of the next, and once two adjacent terms are equal, all future terms must be as well. You might want to think through why those statements are true and why they imply the statement.)
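Here is a minimal NumPy sketch of the recipe above (added for illustration; the test matrix, initial condition, and the hard-coded lists of distinct eigenvalues and multiplicities are assumptions made only for this example). It builds each generalized eigenspace as the null space of $(A - \lambda_i)^{m(i)}$, solves for the $b_{i,j}$ from $x(0)$, evaluates the finite sums, and checks the result against $\exp(At)\,x(0)$:

    import numpy as np
    from math import factorial
    from scipy.linalg import expm, null_space

    # Test data chosen for this sketch: eigenvalue 2 with algebraic
    # multiplicity 2 (non-diagonalizable block) and a simple eigenvalue 3.
    A = np.array([[2.0, 1.0, 0.0],
                  [0.0, 2.0, 0.0],
                  [0.0, 0.0, 3.0]])
    x0 = np.array([1.0, 2.0, 3.0])   # initial condition x(0)
    t = 0.7                          # time at which to evaluate x(t)

    distinct = [2.0, 3.0]            # distinct eigenvalues (hard-coded here)
    mult = {2.0: 2, 3.0: 1}          # algebraic multiplicities m(i)

    I = np.eye(3)
    # Basis of each generalized eigenspace V_i = ker (A - lam I)^m(i)
    bases = {lam: null_space(np.linalg.matrix_power(A - lam * I, m))
             for lam, m in mult.items()}

    # Coefficients b_{i,j}: write x(0) in the combined basis of all the V_i
    B = np.hstack([bases[lam] for lam in distinct])
    b = np.linalg.solve(B, x0)

    # Assemble x(t) = sum over i, j of b_{i,j} e^{lam_i t} (finite sum) v_{i,j}
    x_t = np.zeros(3)
    col = 0
    for lam in distinct:
        m = mult[lam]
        for j in range(bases[lam].shape[1]):
            v = bases[lam][:, j]
            finite_sum = sum(
                (t**k / factorial(k)) * (np.linalg.matrix_power(A - lam * I, k) @ v)
                for k in range(m))
            x_t += b[col] * np.exp(lam * t) * finite_sum
            col += 1

    print(np.allclose(x_t, expm(A * t) @ x0))   # expected: True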
edited Aug 16 at 16:18
answered Aug 15 at 19:18
Chad Groft
So is $m$ also a function of $j$? Or is every summand in $j$ really $m-1$ long? Each $j$ corresponds to a repetition of the eigenvalue, right?
– user561840, Aug 16 at 6:26
Now that you mention it, the first and second $m$ should be represented by different symbols. I'll edit the answer.
– Chad Groft, Aug 16 at 15:37
Never mind, I see what the solution is doing now.
– Chad Groft, Aug 16 at 15:38
Just to clarify, each "third sum" corresponds to a Jordan block, right?
– user561840, Aug 22 at 6:08
No. Jordan blocks aren't used at all here, just generalized eigenspaces.
– Chad Groft, 2 days ago