Point in Probability that something is known to be Wrong
How many times would you need to lose a game with 50% odds before knowing that something was wrong with the game? I found the odds of losing 59 times in a row to be 1 in $2^{59} \approx 5.76 \times 10^{17}$, which is still technically possible; so at what point can you draw the line and know that there must be something awry?
probability coincidences
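The arithmetic in the question can be checked directly; a quick sketch in Python (the figure of 59 straight losses is taken from the question itself):

```python
from fractions import Fraction

# Probability of losing 59 independent games, each with 50% odds.
p_loss_streak = Fraction(1, 2) ** 59

# Expressed as "1 in N".
one_in = 1 / p_loss_streak
print(one_in)  # 576460752303423488, i.e. about 5.76 x 10^17
```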
You can never know. – Mike Earnest, Aug 8 at 17:21
asked Aug 8 at 17:15 by cjh
2 Answers
This is the question that statistical hypothesis testing tries to answer.
First, you formulate a hypothesis: "This is a fair game with a 50% chance of winning each round."
Next, you collect some data, perhaps by playing 59 rounds of the game and finding that you lose every time.
Finally, you perform a test to make a "best guess" as to whether or not the hypothesis is correct.
An appropriate test to use here would be a "one-proportion $z$-test". I don't have a good reference for this; if anyone could edit this answer to point to a good reference, that would be really helpful.
answered Aug 8 at 17:34 by Tanner Swett
It is an important tenet of experimental design that one chooses the statistical test in advance of running the experiment, including a declaration of the significance level at which the null hypothesis will be rejected. The $z$-test is commonly used for experiments modelled as Bernoulli trials. I can add some links to support Tanner's answer. – hardmath, Aug 8 at 18:25
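As a sketch of how the one-proportion $z$-test described above could be applied to the 59-loss scenario (standard library only; the normal approximation to the binomial is the usual formula, but treat the code as illustrative):

```python
import math

def normal_cdf(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def one_proportion_z_test(successes, n, p0=0.5):
    """Return (z, two-sided p-value) for an observed proportion vs. p0,
    using the normal approximation to the binomial."""
    p_hat = successes / n
    z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
    return z, 2 * normal_cdf(-abs(z))

# 0 wins in 59 games against a claimed 50% win rate.
z, p = one_proportion_z_test(0, 59)
# z = -sqrt(59), roughly -7.68; the p-value is astronomically small,
# so the "fair game" hypothesis would be rejected at any conventional level.
```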
The expected number of wins in $n$ plays of a game with a $50\%$ chance of winning is naturally $50\%$ of $n$. Normally a margin of error surrounds the expected value at a given significance level, and an outcome outside that margin is considered evidence that something is wrong (that the chance is not $50\%$).
Suppose we toss a supposedly fair coin 100 times and get 40 heads. Is this statistically significant at the 95% confidence level, i.e., is the probability $p$ of an outcome at least this extreme less than $.05$?
$$z = \frac{\widehat p - p_0}{\sqrt{\frac{p_0(1-p_0)}{n}}}$$
$$z = \frac{.40 - .50}{\sqrt{\frac{.5 \cdot .5}{100}}}$$
$z = -2.0$, and from a $Z$ table $p = .0455$.
We therefore have evidence that the coin is not fair, as $.0455 < .05$.
We can never be $100\%$ confident that this is the case, because the probability of getting this result by chance with a fair coin, though small, is not zero.
We can be more certain that the coin is not fair by increasing $n$ and the level of significance, but there is no definitive point at which you can draw a line and say anything beyond it is $100\%$ certain. All we have is a spectrum of ever-increasing certainties, all less than $100\%$.
answered Aug 8 at 19:01 by Phil H
So is there a point when you can conclude that the coin is not fair? Or is it when the probability is outside of the $Z$-table reading for 95% confidence that you can say that the coin is not fair? – cjh, Aug 8 at 19:07
Being outside the 95% confidence interval is evidence. You can repeat the test and improve the evidence, and hence the probability of it not being fair, and be almost 100% sure, but like I said, there is no definitive point where this occurs: 95%, 99%, 99.5%, etc. – Phil H, Aug 8 at 19:20
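The worked coin example above can be reproduced numerically; a minimal sketch using only the standard library:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# 40 heads in 100 tosses of a supposedly fair coin.
p_hat, p0, n = 0.40, 0.50, 100
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)   # -2.0
p_value = 2 * normal_cdf(-abs(z))                  # two-sided

print(round(z, 1), round(p_value, 4))  # -2.0 0.0455
```

Note that $.0455$ is the two-sided p-value for $z = -2.0$; since $.0455 < .05$, the result is significant at the 95% level, matching the calculation above.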