Point in Probability that something is known to be Wrong












How many times would you need to lose a game with 50% odds before knowing that something was wrong with the game? I found the odds of losing 59 times in a row to be 1 in 5.764 × 10^17, which is still technically feasible; so at what point can you draw the line and know that there must be something awry?
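(For reference, the quoted figure can be checked directly, since losing 59 independent rounds at even odds has probability $(1/2)^{59}$; a quick sketch, not part of the original question:)

```python
from fractions import Fraction

# Probability of losing 59 independent rounds of a fair (50/50) game
p_lose_all = Fraction(1, 2) ** 59

# Express it as "1 in N" -- about 5.76 x 10^17, matching the question
print(f"1 in {float(1 / p_lose_all):.3e}")
```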























  • 1




    You can never know.
    – Mike Earnest
    Aug 8 at 17:21














asked Aug 8 at 17:15 by cjh







2 Answers






























This is the question that statistical hypothesis testing tries to answer.



First, you formulate a hypothesis: "This is a fair game with a 50% chance of winning each round."



Next, you collect some data, perhaps by playing 59 rounds of the game and finding that you lose every time.



Finally, you perform a test in order to make a "best guess" as to whether or not the hypothesis is correct.



An appropriate testing method to use here would be a "one-proportion $z$-test". I don't have a good reference for this; if anyone could edit this answer to point at a good reference, that would be really helpful.
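As a rough sketch of what such a test looks like in code (assuming a two-sided test under the normal approximation; the function name here is illustrative, not from the answer):

```python
import math

def one_proportion_ztest(successes, n, p0=0.5):
    """Two-sided one-proportion z-test against a null proportion p0.

    Uses the normal approximation; the p-value comes from the
    standard normal via the complementary error function.
    """
    p_hat = successes / n
    z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
    p_value = math.erfc(abs(z) / math.sqrt(2))  # = 2 * (1 - Phi(|z|))
    return z, p_value

# 0 wins in 59 rounds of a supposedly fair game
z, p = one_proportion_ztest(0, 59)
print(round(z, 2), p)  # z is about -7.68; p is astronomically small
```

With 0 successes the normal approximation is crude and an exact binomial test would be preferable, but the conclusion is the same: the data are wildly inconsistent with a fair game at any conventional significance level.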






answered Aug 8 at 17:34 by Tanner Swett
  • 1




    It is an important tenet of experimental design that one chooses the statistical test in advance of doing the experiment, including a declaration of what significance level will be set to reject the null hypothesis. The z-test is commonly used in connection with experiments which are modelled as Bernoulli trials. I can add some links to support Tanner's Answer.
    – hardmath
    Aug 8 at 18:25

















The expected number of wins for a game with a $50\%$ chance of winning is naturally $50\%$ of $n$, the number of trials. Normally a margin of error surrounds the expected value at a given significance level; an outcome outside of it is considered evidence that something is wrong (that the chance is not $50\%$).

Let's assume we toss a supposedly fair coin 100 times and get 40 heads. Is this significant at the $95\%$ confidence level, i.e., is the probability $p$ of an outcome at least this extreme less than $.05$?

$$z = \frac{\widehat p - p_0}{\sqrt{\frac{p_0(1-p_0)}{n}}}$$

$$z = \frac{.40 - .50}{\sqrt{\frac{.5\cdot .5}{100}}}$$

$z = -2.0$, and from a $Z$ table the two-tailed $p = .0455$.

We therefore have evidence that the coin is not fair, as $.0455 < .05$.

We can never be $100\%$ confident that this is the case, as the possibility of getting this result by chance, given the coin is fair, is still $.0455$.

We can be more certain that the coin is not fair by increasing $n$ and tightening the significance level, but there is no definitive point at which you can draw a line and say anything beyond this is $100\%$ certain. All we have is a spectrum of ever-increasing certainties, all less than $100\%$.
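The arithmetic above can be checked numerically (a small sketch; `math.erfc` supplies the two-tailed normal p-value):

```python
import math

p0, n, heads = 0.5, 100, 40

# One-proportion z statistic: (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
z = (heads / n - p0) / math.sqrt(p0 * (1 - p0) / n)

# Two-tailed p-value under the standard normal
p_value = math.erfc(abs(z) / math.sqrt(2))

print(round(z, 2), round(p_value, 4))  # -2.0 0.0455
```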






answered Aug 8 at 19:01 by Phil H
  • So is there a point when you can conclude that the coin is not fair? Or is it only when the probability is outside of the $Z$-table reading for 95% confidence that you can say the coin is not fair?
    – cjh
    Aug 8 at 19:07










  • Being outside the 95% confidence interval is evidence. You can repeat the test and strengthen that evidence, and hence the probability that the coin is not fair, and be almost 100% sure; but as I said, there is no definitive point where this occurs: 95%, 99%, 99.5%, etc.
    – Phil H
    Aug 8 at 19:20









