What's the difference between almost surely and surely?
It might be a duplicate, but what's the difference between almost surely and surely?
I have heard that the concept of "almost" is relatively new in probability theory, but I never understood the core concept that differentiates the two.
From what I have heard, it's linked to the measure on the σ-algebra (the "tribu", in French terminology) that defines a probability function, but that doesn't help me much.
Is "surely" even defined in modern probability theory? Why did they add an extra word?
In this article, Wikipedia states: "In probability experiments on a finite sample space, there is no difference between almost surely and surely. However, the distinction becomes important when the sample space is an infinite set, because an infinite set can have non-empty subsets of probability zero."
I don't understand this sentence. Can someone help me understand this concept and maybe provide an example?
probability probability-theory measure-theory
You have to know basic measure theory to understand the distinction. The difference is technical and it cannot be explained in an intuitive way.
– Kavi Rama Murthy
Aug 31 at 7:41
Well, I am ready to understand an explanation. If it's too hard for me, I'll still do my research and ask questions. I don't think "it's too hard for you, kid" is a good answer. Also, if you meant that it's hard to explain the difference between those two, then OK, I got it; that's the whole point of the question.
– PackSciences
Aug 31 at 7:43
The only sure event in a probability space is the entire sample space $\Omega$, whereas any event with probability $1$ is an almost sure event. My apologies for hurting your feelings by my comment.
– Kavi Rama Murthy
Aug 31 at 7:45
Say, for instance, that you have a disc and a fixed diameter. Next, draw a random point belonging to the disc. Almost surely you pick a point which doesn't belong to the diameter: the probability of picking a point on the diameter is 0, since the set of points of the diameter has measure 0.
– nicola
Aug 31 at 7:54
"the concept of "almost" is relatively new in probability theory" Sure, in the sense that this "concept" does not exist, since "almost surely" only exists as a whole.
– Did
Aug 31 at 9:12
asked Aug 31 at 7:35
PackSciences
1 Answer
accepted
The easiest way to see the problem is to consider a uniform distribution on the interval $[0,1]$. Obviously the probability that drawing a random number $x$ gives a real number from $[0,1]$ is $1$. But given a real number $r \in [0,1]$, what is the probability that $x \in \{r\}$, that is, $x = r$?
Well, it has to be the same for all $r \in [0,1]$, or else we would not have a uniform distribution. But suppose it has some value $p > 0$. Then there exists a natural number $n$ such that $np > 1$. Now take a set of $n$ different numbers, $X = \{r_1, r_2, \ldots, r_n\} \subset [0,1]$. What is the probability of drawing a number from that set? Well, according to the rules of probability, it would have to be $np$, but we just established that this value is larger than $1$, which cannot be!
So the only probability we can assign to the set $\{r\}$ without getting an inconsistency is $0$. But then, if we assume that probability $0$ means an impossible result, we have just claimed that for any number $r \in [0,1]$ it is impossible to draw that number! That would logically mean it is impossible that the number we draw is in $[0,1]$, which is in direct contradiction to the original assumption that we get a number from that interval with certainty.
The only way to evade that contradiction is to accept that probability zero does not mean the event is impossible. But what, then, does probability zero mean?
Well, it means that almost certainly you'll lose if you bet on that event. That is, your expected win will be negative no matter how little you bet on it, or how much you'd win if the event actually occurs. An event of probability $0$ is an event that is always a bad idea to bet on, even though it may occur in principle.
Note that it is of course also a bad idea and a sure loss to bet on an event that is actually impossible, so this is consistent with the rule that impossible events get probability $0$.
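The counting argument in the answer can be stated numerically. The sketch below (illustrative only; the function name and the sample value $p = 0.3$ are my own choices, not from the answer) computes, for a hypothetical positive singleton probability $p$, the total probability of $n$ disjoint singletons with $n > 1/p$, which necessarily exceeds $1$:

```python
import math

def contradiction_from_positive_atom(p: float) -> float:
    """Given a supposed singleton probability p > 0 under a uniform
    distribution on [0, 1], return n * p for the smallest natural
    number n with n * p > 1 (additivity over n disjoint singletons)."""
    n = math.floor(1 / p) + 1   # smallest natural number with n * p > 1
    return n * p                # total probability of n distinct singletons

# For p = 0.3 we need n = 4 points, giving total probability 4 * 0.3 > 1,
# which contradicts the axiom that probabilities never exceed 1.
print(contradiction_from_positive_atom(0.3))
```

Since this holds for every $p > 0$, the only consistent singleton probability is $0$, exactly as the answer argues.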
answered Aug 31 at 7:59
celtschk