Calculating joint probability density

Say I have $n$ independent random variables $X_1,\dots,X_n\colon \Omega \to \mathbb{R}$ on a probability space $\Omega$, all with the same density $p\colon \mathbb{R} \to [0,\infty)$. Could someone explain to me in the most measure-theoretic terms (i.e. assuming I know plenty of measure theory but not much probability-theory terminology) how you can explicitly write down the density of the joint probability distribution $\mathbb{R}^2 \to [0,\infty)$ of $X_1$ and $t = \sum_{j=1}^n X_j$, using only $p$? For instance, I have an example where $p(x) = x e^{-x}$, and I'm supposed to get $$\frac{1}{\Gamma(2n-2)}\, x_1 (t - x_1)^{2n-3} e^{-t}$$ for the density of the joint distribution.
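
As a numerical sanity check on the target formula (my own addition, not part of the question): for $n = 3$ the density $p(x) = x e^{-x}$ is the Gamma(2,1) density, so one can compare a Monte Carlo estimate of $\Pr(X_1 \le a,\ t \le b)$ against the claimed joint density integrated over $[0,a]\times[0,b]$. A minimal Python sketch, with $a$, $b$, the grid size, and the trial count being arbitrary illustration choices:

```python
import math
import random

random.seed(0)
n = 3  # number of i.i.d. variables; p(x) = x e^{-x} is the Gamma(2,1) density

def gamma21():
    # A Gamma(2,1) variate is the sum of two independent Exp(1) variates.
    return -math.log(1.0 - random.random()) - math.log(1.0 - random.random())

# Monte Carlo estimate of P(X_1 <= a, t <= b).
a, b = 1.0, 4.0
trials = 200_000
hits = 0
for _ in range(trials):
    xs = [gamma21() for _ in range(n)]
    if xs[0] <= a and sum(xs) <= b:
        hits += 1
mc = hits / trials

# Claimed joint density, supported on 0 <= x1 <= t.
def f(x1, t):
    if not 0.0 <= x1 <= t:
        return 0.0
    return x1 * (t - x1) ** (2 * n - 3) * math.exp(-t) / math.gamma(2 * n - 2)

# Midpoint Riemann sum of f over [0, a] x [0, b].
m = 400
integral = sum(
    f((i + 0.5) * a / m, (j + 0.5) * b / m) for i in range(m) for j in range(m)
) * (a / m) * (b / m)

print(mc, integral)  # the two estimates agree to about 1e-2
```

The agreement of the two numbers is consistent with the formula; it is not a proof, just a check at one choice of $(a, b)$.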
  • The density can be viewed as mapping $[0,\infty)^2$ into $[0,\infty)$. Thus you could say "$\ldots$ for $x_1 \ge 0,\ t \ge 0$."
    – Michael Hardy
    Feb 13 at 21:34

  • @MichaelHardy Sorry, I don't understand. The density isn't defined on all of $\mathbb{R}^2$? Or is it determined by its values on nonnegative $x_1, t$?
    – user531239
    Feb 13 at 21:38

  • It can be taken to be defined on all of $\mathbb{R}^2$ but equal to $0$ except in the first quadrant.
    – Michael Hardy
    Feb 13 at 22:00

  • You have $p(x) = xe^{-x}$ for $x \ge 0$. Note that $$\bigl(\,\underbrace{p * \cdots * p}_{\substack{(n-1)\text{-fold}\\ \text{convolution}}}\,\bigr)(x) = \frac{1}{\Gamma(2(n-1))}\, x^{2(n-1)-1} e^{-x} \quad \text{for } x \ge 0,$$ where the convolution $f*g$ of $f$ and $g$ is $$(f*g)(w) = \int_0^w f(x)\, g(w-x)\, dx.$$ If we have two independent random variables $U, V$ for which $\Pr(U \ge 0) = \Pr(V \ge 0) = 1$, with respective probability density functions $f, g$, then their joint density is $(u,v) \mapsto f(u)\,g(v)$, and the probability density function of the sum $U+V$ is then $\ldots$
    – Michael Hardy
    Feb 13 at 22:01

  • $\ldots$ the convolution $f*g$. You should be able to prove by induction that the $(n-1)$-fold convolution is as asserted above.
    – Michael Hardy
    Feb 13 at 22:02

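
The convolution claim in the comments above is easy to spot-check numerically. The sketch below (my addition, taking $n = 3$, so the $(n-1)$-fold convolution is the single convolution $p * p$, which should equal $x^{3} e^{-x}/\Gamma(4)$) discretizes $(f*g)(w) = \int_0^w f(x)\,g(w-x)\,dx$ on a uniform grid; the spacing and evaluation point are arbitrary choices:

```python
import math

def p(x):
    # p(x) = x e^{-x}, the Gamma(2,1) density (for x >= 0)
    return x * math.exp(-x)

h = 0.01              # grid spacing
N = 1001              # grid covers [0, 10]
pv = [p(i * h) for i in range(N)]

# (p*p)(w_k) ~= h * sum_{i <= k} p(x_i) p(w_k - x_i), a Riemann sum for the
# convolution integral on [0, w_k]
conv = [h * sum(pv[i] * pv[k - i] for i in range(k + 1)) for k in range(N)]

w = 5.0
k = round(w / h)
exact = w**3 * math.exp(-w) / math.gamma(4)  # Gamma(4,1) density at w
print(conv[k], exact)  # the two values closely agree
```
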
asked Feb 13 at 21:19 by user531239

1 Answer

What if you just push the measure from $\mathbb{R}^n$ to $\mathbb{R}^n$ via $$L\colon (x_1,\dots,x_{n-1},x_n) \mapsto \Bigl(x_1,\dots,x_{n-1},\sum_{i=1}^n x_i\Bigr)$$ and then project down $\mathbb{R}^n \to \mathbb{R}^2$ via



$$\pi\colon (x_1,\dots,x_{n-1},t) \mapsto (x_1,t).$$



Then the density function is easy to track: since $L$ is invertible (and linear with $\det L = 1$, so no Jacobian factor appears), the joint density $(x_1,\dots,x_n) \mapsto p(x_1)\cdots p(x_n)$ is carried to its composition with $L^{-1}$, and since $\pi$ is just a projection, the density can be pushed down by integrating over the fibers of points.
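
To make "integrating over the fibers" concrete, here is a numerical sketch (my addition, for $n = 3$ and $p(x) = x e^{-x}$, with an arbitrary evaluation point): for a point $(x_1, t)$, integrate $p(x_1)\,p(x_2)\,p(t - x_1 - x_2)$ over the fiber variable $x_2 \in [0,\, t - x_1]$, and compare with the claimed joint density.

```python
import math

def p(x):
    # p(x) = x e^{-x}, the Gamma(2,1) density (for x >= 0)
    return x * math.exp(-x)

x1, t = 0.7, 3.0  # an arbitrary point with 0 < x1 < t

# Midpoint rule for the fiber integral over x2 in [0, t - x1]:
# integral of p(x1) p(x2) p(t - x1 - x2) dx2.
m = 10_000
h = (t - x1) / m
fiber_integral = p(x1) * h * sum(
    p((j + 0.5) * h) * p(t - x1 - (j + 0.5) * h) for j in range(m)
)

# Claimed joint density with n = 3: x1 (t - x1)^{2n-3} e^{-t} / Gamma(2n-2).
claimed = x1 * (t - x1) ** 3 * math.exp(-t) / math.gamma(4)
print(fiber_integral, claimed)  # the two values closely agree
```
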



You will need the following identity,



$$\frac{C^{2k+1}}{(2k+1)!} = \int_{A_k(C)} \Bigl(C - \sum_{i=1}^k x_i\Bigr)\, x_1 x_2 \cdots x_k \; dx_1\, dx_2 \cdots dx_k,$$



where $$A_k(C) := \Bigl\{(x_1,\dots,x_k) \in \mathbb{R}^k :\ \sum_{i=1}^k x_i \le C \ \text{ and }\ x_i \ge 0 \ \text{ for all } i\Bigr\}.$$



This can be proved by induction:
$$\int_{A_k(C)} \Bigl(C - \sum_{i=1}^k x_i\Bigr)\, x_1 x_2 \cdots x_k \; dx_1\, dx_2 \cdots dx_k =$$ $$\int_0^C x_k \biggl(\int_{A_{k-1}(C-x_k)} \Bigl((C-x_k) - \sum_{i=1}^{k-1} x_i\Bigr)\, x_1 x_2 \cdots x_{k-1} \; dx_1\, dx_2 \cdots dx_{k-1}\biggr)\, dx_k =$$ $$\int_0^C x_k\, \frac{(C-x_k)^{2k-1}}{(2k-1)!}\, dx_k.$$



Then just integrate by parts (or recognize a Beta integral): $$\int_0^C x_k\, \frac{(C-x_k)^{2k-1}}{(2k-1)!}\, dx_k = \frac{C^{2k+1}}{(2k+1)!}.$$
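
The identity can also be spot-checked numerically. The sketch below (my addition) evaluates the $k = 2$ case by a midpoint Riemann sum over the simplex $A_2(C)$ and compares against $C^5/5!$; the values of $C$ and the grid resolution are arbitrary choices:

```python
import math

C = 2.0        # arbitrary choice of the upper bound
k = 2          # check the identity for k = 2
m = 800        # grid resolution per axis
h = C / m

# Midpoint Riemann sum of (C - x1 - x2) * x1 * x2 over
# A_2(C) = {(x1, x2) : x1 + x2 <= C, x1 >= 0, x2 >= 0}.
total = 0.0
for i in range(m):
    x1 = (i + 0.5) * h
    for j in range(m):
        x2 = (j + 0.5) * h
        if x1 + x2 <= C:
            total += (C - x1 - x2) * x1 * x2
total *= h * h

exact = C ** (2 * k + 1) / math.factorial(2 * k + 1)  # C^5 / 5!
print(total, exact)  # the two values closely agree
```
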






answered Feb 16 at 1:23 by Tim kinsella (edited Aug 26 at 23:26)