Calculating joint probability density
Say I have $n$ independent random variables $X_1,\dots,X_n\colon \Omega \rightarrow \mathbb{R}$ on a probability space $\Omega$ with the same density $p\colon \mathbb{R}\rightarrow [0,\infty)$. Could someone explain to me in the most measure-theoretic terms (i.e. assuming I know plenty of measure theory but not much probability-theory terminology) how you can explicitly write down the density $\mathbb{R}^2\rightarrow [0,\infty)$ of the joint probability distribution of $X_1$ and $t= \sum_{j=1}^n X_j$ using only $p$? For instance, I have an example where $p(x) = xe^{-x}$, and I'm supposed to get $$\frac{1}{\Gamma(2n-2)}\,x_1\,(t-x_1)^{2n-3}\,e^{-t}$$ for the density of the joint distribution.
probability analysis
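One quick way to gain confidence in the target formula before deriving it is to check that it integrates to $1$ over its support $0\le x_1\le t$. The sketch below does this numerically for $n=3$ in pure Python; the truncation point `T_MAX` and grid size `m` are arbitrary accuracy choices, not part of the problem.

```python
import math

def f(x1, t, n):
    # claimed joint density: x1 (t - x1)^(2n-3) e^{-t} / Gamma(2n - 2)
    return x1 * (t - x1) ** (2 * n - 3) * math.exp(-t) / math.gamma(2 * n - 2)

def trapz(g, a, b, m):
    # composite trapezoid rule for integral_a^b g(x) dx
    h = (b - a) / m
    s = 0.5 * (g(a) + g(b))
    for i in range(1, m):
        s += g(a + i * h)
    return s * h

# integrate over 0 <= x1 <= t <= T_MAX; the e^{-t} tail beyond T_MAX is negligible
n, T_MAX, m = 3, 40.0, 600
total = trapz(lambda t: trapz(lambda x1: f(x1, t, n), 0.0, t, m), 0.0, T_MAX, m)
print(total)  # prints a value close to 1.0
```

Integrating out $x_1$ first also shows why this works: $\int_0^t x_1(t-x_1)^3\,dx_1 = t^5/20$, so the marginal of $t$ is the Gamma$(6,1)$ density $t^5 e^{-t}/120$.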
The density can be viewed as mapping $[0,\infty)^2$ into $[0,\infty)$. Thus you could say ``$\ldots$ for $x_1\ge 0,\ t\ge 0$.''
– Michael Hardy
Feb 13 at 21:34
@MichaelHardy Sorry, I don't understand. The density isn't defined on all of $\mathbb{R}^2$? Or is it determined by its values on nonnegative $x_1, t$?
– user531239
Feb 13 at 21:38
It can be taken to be defined on all of $\mathbb{R}^2$ but equal to $0$ except in the first quadrant.
– Michael Hardy
Feb 13 at 22:00
You have $p(x) = xe^{-x}$ for $x\ge 0$. Note that $$\Big(\,\underbrace{p*\cdots*p}_{\substack{(n-1)\text{-fold}\\ \text{convolution}}}\,\Big)(x) = \frac{1}{\Gamma(2(n-1))}\, x^{2(n-1)-1}\, e^{-x} \quad \text{for } x\ge 0,$$ where the convolution $f*g$ of $f$ and $g$ is $$ (f*g)(w) = \int_0^w f(x)\,g(w-x) \, dx. $$ If we have two independent random variables $U,V$ for which $\Pr(U\ge 0) = \Pr(V\ge 0) = 1$, with respective probability density functions $f,g$, then their joint density is $(u,v)\mapsto f(u)g(v)$, and the probability density function of the sum $U+V$ is then$\,\ldots$
– Michael Hardy
Feb 13 at 22:01
$\ldots\,$the convolution $f*g$. You should be able to prove by induction that the $(n-1)$-fold convolution is as asserted above.
– Michael Hardy
Feb 13 at 22:02
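The comment's convolution claim can be checked numerically in the smallest nontrivial case, $n-1=2$, where it predicts $(p*p)(x) = x^3 e^{-x}/\Gamma(4)$. A minimal pure-Python sketch (the grid size `m` is an arbitrary accuracy knob):

```python
import math

def p(x):
    # common density of each X_i: p(x) = x e^{-x} for x >= 0 (a Gamma(2,1) density)
    return x * math.exp(-x)

def conv(f, g, w, m=4000):
    # (f * g)(w) = integral_0^w f(x) g(w - x) dx, via the trapezoid rule
    h = w / m
    s = 0.5 * (f(0.0) * g(w) + f(w) * g(0.0))
    for i in range(1, m):
        x = i * h
        s += f(x) * g(w - x)
    return s * h

# compare (p * p)(w) against the predicted Gamma(4,1) density w^3 e^{-w} / Gamma(4)
for w in (0.5, 1.0, 3.0):
    predicted = w ** 3 * math.exp(-w) / math.gamma(4)
    assert abs(conv(p, p, w) - predicted) < 1e-6
```

For $w=1$ both sides come out to $e^{-1}/6 \approx 0.0613$; the agreement is exact up to quadrature error because $(p*p)(w) = e^{-w}\int_0^w x(w-x)\,dx = w^3 e^{-w}/6$.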
asked Feb 13 at 21:19 · user531239
1 Answer
What if you just push the measure from $\mathbb{R}^n$ to $\mathbb{R}^n$ via $$L\colon (x_1,\dots,x_{n-1}, x_n)\mapsto \Big(x_1,\dots,x_{n-1},\sum_{i=1}^n x_i\Big)$$ and then project down $\mathbb{R}^n\rightarrow \mathbb{R}^2$ via
$$\pi\colon (x_1,\dots,x_{n-1}, t)\mapsto (x_1,t).$$
Then the density function is easy to track: since $L$ is invertible with Jacobian determinant $1$, the joint density $q(x_1,\dots,x_n)=\prod_{i=1}^n p(x_i)$ goes to $q\circ L^{-1}$, and then, since $\pi$ is just a projection, the density can be pushed down just by integrating over the fibers of points.
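This pushforward picture can be sanity-checked by simulation: sample each $X_i$ as Gamma$(2,1)$ (a sum of two Exp$(1)$ draws, since $p(x)=xe^{-x}$ is the Gamma$(2,1)$ density), apply $L$ and $\pi$, and compare an event probability against the claimed closed-form density. A rough Monte Carlo sketch for $n=3$; the sample size, seed, and thresholds $1$ and $3$ are arbitrary choices:

```python
import math
import random

random.seed(0)
n, N = 3, 200_000

def gamma21():
    # Gamma(2,1) draw: sum of two independent Exp(1) variables
    return -math.log(random.random()) - math.log(random.random())

# Monte Carlo estimate of P(X_1 <= 1, T <= 3), where T = X_1 + ... + X_n
hits = 0
for _ in range(N):
    xs = [gamma21() for _ in range(n)]
    if xs[0] <= 1.0 and sum(xs) <= 3.0:
        hits += 1
mc = hits / N

# same probability from the claimed density x1 (t - x1)^(2n-3) e^{-t} / Gamma(2n-2)
def trapz(g, a, b, m=400):
    # composite trapezoid rule for integral_a^b g(x) dx
    h = (b - a) / m
    s = 0.5 * (g(a) + g(b))
    for i in range(1, m):
        s += g(a + i * h)
    return s * h

exact = trapz(lambda x1: trapz(
    lambda t: x1 * (t - x1) ** (2 * n - 3) * math.exp(-t) / math.gamma(2 * n - 2),
    x1, 3.0), 0.0, 1.0)
assert abs(mc - exact) < 0.01
```

The tolerance is generous: with $N = 200{,}000$ samples the Monte Carlo standard error of this probability is well under $10^{-3}$.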
You will need the following identity,
$$\frac{C^{2k+1}}{(2k+1)!} = \int_{A_k(C)}\Big(C-\sum_{i=1}^k x_i\Big)\cdot x_1x_2\cdots x_k\; dx_1\,dx_2\cdots dx_k,$$
where $$A_k(C):= \Big\{(x_1,\dots,x_k)\in \mathbb{R}^k:\ \sum_{i=1}^k x_i\leq C\ \text{ and }\ x_i\geq 0\ \text{ for all } i\Big\}.$$
This can be proved by induction:
$$\int_{A_k(C)}\Big(C-\sum_{i=1}^k x_i\Big)\cdot x_1x_2\cdots x_k\; dx_1\,dx_2\cdots dx_k =$$ $$\int_0^C x_k \bigg( \int_{A_{k-1}(C-x_k)}\Big((C-x_k)- \sum_{i=1}^{k-1} x_i\Big)\cdot x_1x_2\cdots x_{k-1}\; dx_1\,dx_2\cdots dx_{k-1}\bigg)\, dx_k =$$ $$\int_0^C x_k\,\frac{(C-x_k)^{2k-1}}{(2k-1)!}\,dx_k.$$
Then just integrate by parts.
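The final reduction rests on $\int_0^C x\,\frac{(C-x)^{2k-1}}{(2k-1)!}\,dx = \frac{C^{2k+1}}{(2k+1)!}$, which is what the integration by parts delivers. A minimal pure-Python numerical check of that step (the value $C=2$ and the grid size `m` are arbitrary choices):

```python
import math

def last_step(C, k, m=4000):
    # trapezoid rule for integral_0^C x (C - x)^(2k-1) / (2k-1)! dx
    h = C / m
    g = lambda x: x * (C - x) ** (2 * k - 1) / math.factorial(2 * k - 1)
    s = 0.5 * (g(0.0) + g(C))
    for i in range(1, m):
        s += g(i * h)
    return s * h

# the claimed closed form is C^(2k+1) / (2k+1)!
for k in (1, 2, 3):
    C = 2.0
    assert abs(last_step(C, k) - C ** (2 * k + 1) / math.factorial(2 * k + 1)) < 1e-6
```

The same value also drops out of the Beta integral $\int_0^C x(C-x)^{2k-1}\,dx = C^{2k+1}\,\frac{(2k-1)!}{(2k+1)!}$, which is a one-line alternative to integrating by parts.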
answered Feb 16 at 1:23, edited Aug 26 at 23:26 · Tim kinsella