Convolution of weighted uniformly distributed random variables

I am having trouble deriving the CDF of a weighted difference of i.i.d. uniformly distributed random variables.



Assume that $X_1$ and $X_2 \stackrel{\mathrm{iid}}{\sim} U[0,1]$. Define $Z = a\cdot X_1 - X_2$, where $a > 0$. How can I derive $P(Z<z)$? I was struggling to apply the general definition of a convolution to my particular case...



Thanks in advance!







asked Aug 20 at 12:46 by P3rs3rk3r
2 Answers
On the one hand we have $$P(A) = E[\mathbf 1_A],$$ and on the other hand we have $$E[g(X_1,X_2)] = \int g(x_1,x_2)\, f_{(X_1,X_2)}(x_1,x_2)\, dx_1\, dx_2$$ for every measurable function $g$, where $f_{(X_1,X_2)}$ denotes the joint density of $X_1$ and $X_2$.

Now take $$A = \{Z < z\}$$ and use that for independent random variables $X_1, X_2$ we have $$f_{(X_1,X_2)}(x_1,x_2) = f_{X_1}(x_1)\, f_{X_2}(x_2)$$ to get

$$P(Z < z) = \int_{\Bbb R^2} \mathbf 1_{\{a x_1 - x_2 < z\}}\, f_{X_1}(x_1)\, f_{X_2}(x_2)\, dx_1\, dx_2.$$

Evaluating this integral gives the desired result.
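
As a numerical sanity check of this representation, here is a minimal sketch in Python (assuming NumPy is available; the weight $a = 2$ and the grid and sample sizes are arbitrary choices). It compares a grid evaluation of the integral over the unit square with a Monte Carlo estimate of $P(Z < z)$:

```python
# Numerical check of P(Z < z) = ∫∫ 1{a*x1 - x2 < z} f(x1) f(x2) dx1 dx2
# for X1, X2 ~ U[0,1] iid (the densities are 1 on the unit square).
import numpy as np

def cdf_by_grid(z, a, n=2000):
    # Midpoint grid over [0,1]^2; the integral is the fraction of the
    # square where a*x1 - x2 < z.
    x1 = (np.arange(n) + 0.5) / n
    x2 = (np.arange(n) + 0.5) / n
    X1, X2 = np.meshgrid(x1, x2, indexing="ij")
    return np.mean(a * X1 - X2 < z)

def cdf_by_simulation(z, a, n=1_000_000, seed=0):
    # Monte Carlo estimate of P(a*X1 - X2 < z).
    rng = np.random.default_rng(seed)
    x1, x2 = rng.random(n), rng.random(n)
    return np.mean(a * x1 - x2 < z)

a = 2.0  # arbitrary weight a > 0
for z in (-0.5, 0.0, 0.5, 1.0, 1.5):
    print(f"z={z:+.1f}  grid={cdf_by_grid(z, a):.4f}  mc={cdf_by_simulation(z, a):.4f}")
```

The two estimates should agree to a few decimal places, which is a handy check once the piecewise closed form of the CDF has been worked out.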







answered Aug 20 at 13:06 by Gono
Hint:

For fixed $z$: $$\mathsf P(Z<z)=\int_0^1\int_0^1[ax-y<z]\,dy\,dx$$ where $[ax-y<z]$ is the function $\mathbb R^2\to\mathbb R$ taking value $1$ if $ax-y<z$ and value $0$ otherwise.
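
Carrying the hint one step further: for fixed $x$ the inner integral is $\int_0^1[ax-y<z]\,dy=\min(1,\max(0,\,z-ax+1))$, so the CDF reduces to a one-dimensional integral over $x$. A minimal sketch in Python (assuming NumPy is available; $a = 2$ is an arbitrary choice) evaluates it on a grid:

```python
# Reduce the double integral in the hint to a single integral over x:
# for fixed x, the inner integral over y equals clip(z - a*x + 1, 0, 1).
import numpy as np

def cdf_from_hint(z, a, n=100_000):
    x = (np.arange(n) + 0.5) / n                 # midpoints of [0, 1]
    inner = np.clip(z - a * x + 1.0, 0.0, 1.0)   # exact inner integral in y
    return inner.mean()                          # outer integral over x

a = 2.0  # arbitrary weight a > 0
print([round(cdf_from_hint(z, a), 4) for z in (-0.5, 0.0, 0.5, 1.0, 1.5)])
```

Splitting $\min(1,\max(0,\,z-ax+1))$ into its three regimes and integrating each piece in $x$ then yields the piecewise closed form of the CDF.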







answered Aug 20 at 13:36 by drhab