What's the difference between almost surely and surely?
























It might be a duplicate, but what's the difference between almost surely and surely?

I have heard that the concept of "almost" is relatively new in probability theory, but I have never understood the core concept that distinguishes it.

From what I have heard, it's linked to the measure on the σ-algebra (the "tribu", in French) that defines a probability function, but that doesn't help me much.

Is "surely" even defined in modern probability theory? Why did they add an extra word?

In this article, Wikipedia states: "In probability experiments on a finite sample space, there is no difference between almost surely and surely. However, the distinction becomes important when the sample space is an infinite set, because an infinite set can have non-empty subsets of probability zero."

I don't understand this sentence. Could someone help me understand this concept and perhaps provide an example?































  • You have to know basic measure theory to understand the distinction. The difference is technical and it cannot be explained in an intuitive way.
    – Kavi Rama Murthy
    Aug 31 at 7:41










  • Well I am ready to understand an explanation. If it's too hard for me, I'll still do my research and ask questions. I don't think "it's too hard for you kid" is a good answer. Also, if you meant that it's hard to explain the difference between those two, then ok, I got it, that's the whole point of the question.
    – PackSciences
    Aug 31 at 7:43






  • The only sure event in a probability space is the entire sample space $\Omega$, whereas any event with probability $1$ is an almost sure event. My apologies for hurting your feelings by my comment.
    – Kavi Rama Murthy
    Aug 31 at 7:45
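In symbols (my restatement of the comment above, not part of the thread): given a probability space $(\Omega, \mathcal{F}, P)$ and an event $E \in \mathcal{F}$,

```latex
E \text{ occurs surely} \iff E = \Omega,
\qquad
E \text{ occurs almost surely} \iff P(E) = 1.
```

Every sure event is almost sure, but the converse can fail when $\Omega$ is infinite.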











  • Say for instance that you have a disc and a fixed diameter. Next, draw a random point belonging to the disc. Almost surely you pick a point which doesn't belong to the diameter. The probability of picking a point on the diameter is 0, since the set of points of the diameter has 0 measure.
    – nicola
    Aug 31 at 7:54







  • "the concept of "almost" is relatively new in probability theory" Sure, in the sense that this "concept" does not exist, since "almost surely" only exists as a whole.
    – Did
    Aug 31 at 9:12
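nicola's disc example above can be checked numerically. The sketch below is my own illustration, not from the thread: it rejection-samples uniform points in the unit disc and counts exact hits on the horizontal diameter $y = 0$. The count comes out zero, because the diameter is a set of measure zero and is (almost) never hit.

```python
import random

random.seed(1)  # deterministic run

def random_point_in_disc():
    """Rejection-sample a uniform point in the unit disc."""
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return x, y

N = 100_000
# Count samples landing exactly on the diameter y = 0 (an event of measure zero).
on_diameter = sum(1 for _ in range(N) if random_point_in_disc()[1] == 0.0)
print(on_diameter)  # expected: 0 — the diameter is almost surely missed
```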














probability probability-theory measure-theory






asked Aug 31 at 7:35









PackSciences












1 Answer























4 votes · accepted










The easiest way to see the problem is to consider a uniform distribution on the interval $[0,1]$. Obviously the probability that drawing a random number $x$ gives a real number from $[0,1]$ is $1$. But given a real number $r\in[0,1]$, what is the probability that $x\in\{r\}$, that is, $x=r$?



Well, it has to be the same for all $r\in[0,1]$, or else we would not have a uniform distribution. But suppose it has some value $p>0$. Then there exists a natural number $n$ such that $np>1$. Now take a set of $n$ different numbers, $X=\{r_1,r_2,\ldots,r_n\}\subset[0,1]$. What is the probability of drawing a number from that set? According to the rules of probability it would have to be $np$, but we just established that this value is larger than $1$, which cannot be!
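The contradiction can be written in one line, using the additivity of a probability measure over the $n$ disjoint singletons:

```latex
P(X) \;=\; P\bigl(\{r_1, r_2, \ldots, r_n\}\bigr)
     \;=\; \sum_{i=1}^{n} P(\{r_i\})
     \;=\; np \;>\; 1,
```

which is impossible, since $P(E) \le 1$ for every event $E$.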



So the only probability we can assign to the set $\{r\}$ without any inconsistency is $0$. But if we assume that probability $0$ means an impossible result, then we have just claimed that for any number $r\in[0,1]$ it is impossible to draw that number! That would logically mean it is impossible that the number we draw is in $[0,1]$, in direct contradiction to the original assumption that we get a number from that interval with certainty.



The only way to evade that contradiction is to accept that probability zero does not mean the event is impossible. But what, then, does probability zero mean?



Well, it means that you will almost certainly lose if you bet on that event. That is, your expected win is negative no matter how little you bet on it, or how much you would win if the event actually occurred. An event of probability $0$ is always a bad idea to bet on, even though it may occur in principle.



Note that it is of course also a bad idea, and a sure loss, to bet on an event that is actually impossible, so this is consistent with the rule that impossible events get probability $0$.
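The answer's point can also be seen empirically. A minimal sketch (my own, not part of the answer): draw from a continuous uniform distribution on $[0,1]$ and compare the frequency of an exact hit on a fixed point $r$ (probability $0$) with hits on an interval of length $0.2$ (probability $0.2$).

```python
import random

random.seed(0)  # deterministic run

N = 100_000
r = 0.5  # a fixed target point in [0, 1]

exact_hits = 0     # draws with x == r exactly (an event of probability 0)
interval_hits = 0  # draws with x in [0.4, 0.6] (an event of probability 0.2)

for _ in range(N):
    x = random.random()  # uniform on [0, 1)
    if x == r:
        exact_hits += 1
    if 0.4 <= x <= 0.6:
        interval_hits += 1

print(exact_hits)                           # expected: 0 — {r} is almost surely missed
print(abs(interval_hits / N - 0.2) < 0.01)  # True: the interval is hit at roughly its probability
```

Note that `x == r` is not strictly impossible (the float `0.5` is representable and could be drawn), which mirrors the distinction in the answer: probability zero, yet not ruled out.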






        answered Aug 31 at 7:59









        celtschk




























             













































































