Prove that the measure-theoretic definition of probability aligns with the basic one?


I often see people write $$P(X \in A) = P(X \in A \mid Y \in B)\,P(Y \in B) + P(X \in A \mid Y \in B^c)\,P(Y \in B^c)$$



I want to formally justify this in a measure theory setting.



We can write $$P(X \in A) = \int 1_A(X)\, dP = \int 1_A(X)\left[ 1_B(Y) + 1_{B^c}(Y)\right] dP$$ $$= \int 1_A(X)\,1_B(Y)\, dP + \int 1_A(X)\, 1_{B^c}(Y)\, dP$$ $$= P(X \in A,\, Y \in B) + P(X \in A,\, Y \in B^c)$$



And so we would be done if only I could prove that $$P(X \in A,\, Y \in B) = P(X \in A \mid Y \in B)\, P(Y \in B).$$



This is true by definition in basic probability theory. Is it also true in measure theory?



So I am looking for two things:



  1. What is the definition of conditional probability in measure theory? Personally, I was only ever introduced to a conditional expectation, not a conditional probability.

  2. How does one prove that this abstract definition is identical to the equation above?
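As a numerical sanity check of the identity being asked about, one can verify the law of total probability on an empirical (finite, discrete) probability measure, where it holds exactly up to floating-point error because it is pure algebra. The distributions below are arbitrary toy choices, not from the original post:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete model: draw (X, Y) with Y mildly dependent on X.
n = 200_000
X = rng.integers(0, 10, size=n)            # X takes values in {0, ..., 9}
Y = rng.integers(0, 10, size=n) + (X % 3)  # Y takes values in {0, ..., 11}

A = X < 4   # the event {X in A}
B = Y >= 5  # the event {Y in B}

p_A = A.mean()                 # P(X in A) under the empirical measure
p_B = B.mean()                 # P(Y in B)
p_A_given_B = A[B].mean()      # P(X in A | Y in B)  = P(A, B) / P(B)
p_A_given_Bc = A[~B].mean()    # P(X in A | Y in B^c)

# Law of total probability:
#   P(X in A) = P(X in A | Y in B) P(Y in B) + P(X in A | Y in B^c) P(Y in B^c)
lhs = p_A
rhs = p_A_given_B * p_B + p_A_given_Bc * (1 - p_B)
assert abs(lhs - rhs) < 1e-12  # exact identity, up to floating-point rounding
```

Note that no distributional assumption is used: the assertion passes for any events `A` and `B` with `0 < p_B < 1`, which is exactly the point of the algebraic argument.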






























  • The notions of conditional probabilities, conditional expectations and independence are only available in probability theory. Trying to generalize them to general measure spaces leads to all kinds of complications, with hardly any use in analysis.
    – Kavi Rama Murthy
    Sep 8 at 11:54














probability measure-theory conditional-expectation






asked Sep 8 at 11:12









Dalu

92

























1 Answer




































You're overthinking things. Just as in naive probability theory, in measure theory $P(A\mid B)$ is defined as $\frac{P(A\cap B)}{P(B)}$. Of course, this only works when $P(B)>0$, but that's a separate matter. As long as $P(B)>0$, your equation (the law of total probability) is entirely justified in measure theory, since it falls out algebraically once you plug in the definition of conditional probability.
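Spelled out, the algebra behind the answer (assuming $0 < P(Y \in B) < 1$ so that both ratios are defined) is:

```latex
\begin{aligned}
P(X \in A)
  &= P(X \in A,\, Y \in B) + P(X \in A,\, Y \in B^c) \\
  &= \frac{P(X \in A,\, Y \in B)}{P(Y \in B)}\,P(Y \in B)
   + \frac{P(X \in A,\, Y \in B^c)}{P(Y \in B^c)}\,P(Y \in B^c) \\
  &= P(X \in A \mid Y \in B)\,P(Y \in B)
   + P(X \in A \mid Y \in B^c)\,P(Y \in B^c).
\end{aligned}
```

The first equality is the asker's indicator-function decomposition; the second just multiplies and divides by the (nonzero) probabilities; the third substitutes the ratio definition of conditional probability.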


























        answered Sep 8 at 12:09









        Jack M
