What if both null hypothesis and alternative hypothesis are wrong? [duplicate]

This question already has an answer here:



  • Do null and alternative hypotheses have to be exhaustive or not? (2 answers)



In hypothesis testing, the alternative hypothesis doesn't have to be the negation of the null hypothesis. For example, for $H_0: \mu = 0$, $H_a$ is allowed to be $\mu > 1$, or $\mu = 1$. My question: why is this allowed? What if, in reality, $\mu = -1$ or $\mu = 2$, so that both hypotheses are false? If one then applies, say, a likelihood ratio test, one may wrongly conclude that $H_0$ is accepted, or that $H_0$ is rejected and hence $H_a$ is accepted.
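
To make the concern concrete, here is a minimal simulation (an illustrative setup, not part of the original question: normal data with known $\sigma = 1$, point hypotheses $H_0: \mu = 0$ and $H_a: \mu = 1$, true mean $2$). Both hypotheses are false, yet the likelihood ratio decisively "favours" $H_a$:

```python
# Sketch: both H0 (mu = 0) and Ha (mu = 1) are false, but the likelihood
# ratio strongly favours Ha because the true mean (2) is closer to it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=50)  # true mu = 2: neither hypothesis holds

ll_h0 = stats.norm.logpdf(x, loc=0.0, scale=1.0).sum()  # log-likelihood under H0
ll_ha = stats.norm.logpdf(x, loc=1.0, scale=1.0).sum()  # log-likelihood under Ha

# A large positive value rejects H0 and, naively read, "accepts" Ha.
print(f"log likelihood ratio (Ha vs H0): {ll_ha - ll_h0:.1f}")
```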



What about this proposal: $H_a$ should always be the negation of $H_0$? That is, $H_a$: "$H_0$ is not true". This way we are effectively testing only a single hypothesis, $H_0$, rejecting it if the p-value falls below a predefined significance level, and we do not have to test two hypotheses at the same time that can both be wrong.

hypothesis-testing

asked Sep 6 at 4:14 by Lei Huang

marked as duplicate by Firebug, mdewey, Michael Chernick, Alexis, whuber♦ Sep 6 at 21:01

This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.

  • Related (particularly the accepted answer): stats.stackexchange.com/questions/232665/…
    – JDL
    Sep 6 at 13:14

  • The statement "$H_0$ is not true" is undefined until you describe the space of all possible hypotheses.
    – whuber♦
    Sep 6 at 20:59

4 Answers






18 votes (accepted)

What you've identified is one of the fundamental flaws of this approach to hypothesis testing: namely, that the statistical tests you are doing do not assess the truth of the statement you are actually interested in.



In this form of hypothesis testing, $H_a$ is never accepted; you can only ever reject $H_0$. This is widely misunderstood and misrepresented by users of statistical testing.






answered Sep 6 at 9:30 (edited Sep 6 at 10:55)
– Jack Aidley

10 votes

$H_a$ is, properly, the complement of $H_0$ in the parameter space of the distribution under the null hypothesis. One-sided tests should therefore properly have $H_0: \mu \ge c$ (for some number $c$), with $H_a: \mu < c$ (or vice versa: $H_0: \mu \le c$, with $H_a: \mu > c$), for precisely the reason you allude to: if the null hypothesis in a one-sided test is specified as $H_0: \mu = 0$, then a one-sided alternative hypothesis cannot express the complement of $H_0$. I (and others) therefore disagree with those who use the confusing nomenclature you describe.



    See my answer here for a similar question and issue.
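
As an illustration of such complementary hypotheses (hypothetical data; the `alternative` keyword requires scipy ≥ 1.6), here $H_0: \mu \ge 0$ and $H_a: \mu < 0$ together cover every possible value of $\mu$, so no true state of the world falls outside both:

```python
# One-sided t-test whose hypotheses partition the parameter space:
# H0: mu >= 0 versus Ha: mu < 0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=-0.5, scale=1.0, size=30)  # hypothetical sample

t, p = stats.ttest_1samp(x, popmean=0.0, alternative='less')
print(f"t = {t:.2f}, one-sided p = {p:.4f}")  # small p: evidence against H0: mu >= 0
```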






answered Sep 6 at 4:51 (edited Sep 6 at 15:41)
– Alexis

  • Thanks. What about $H_0: \mu = \mu_1$ and $H_a: \mu = \mu_2$, as in the Neyman–Pearson lemma?
    – Lei Huang
    Sep 6 at 5:08






  • No, you can do a classic likelihood ratio test involving just two points. But as Jack so well stated before, hypothesis testing was never a good idea except in very special, narrow circumstances. Bayesians would say "get me evidence that the unknown parameter is in any interval you specify".
    – Frank Harrell
    Sep 6 at 11:50






  • @LeiHuang Darn it! I muff that spelling all the time! Thank you. I personally would not use the language "alternative hypothesis" to describe the two hypotheses in the Neyman–Pearson lemma, for more or less the reason I give in my answer. The NPL is also, to my mind, using a different logic than Wald-type test statistics, by asking which of these two values of a parameter the data provide more evidence of.
    – Alexis
    Sep 6 at 15:36






  • @LeiHuang Also: the NPL-type hypothesis in your first comment is a different form of hypothesis than the one-sided hypothesis in your original question (and, indeed, from $H_0: \mu = \mu_1$, $H_a: \mu \ne \mu_1$).
    – Alexis
    Sep 6 at 15:39

1 vote

This points to one of the serious problems with conventional statistics via null hypothesis significance testing (NHST). A much more meaningful approach in this case is to abandon NHST entirely and adopt the Bayesian framework. If you have prior information available, just incorporate it into your model through a prior distribution. Unfortunately, most statistics consumers are simply too indoctrinated with, and entrenched in, the old school of thinking. See more discussion here.
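
As a sketch of what this looks like in the simplest case (an assumed model, not from the answer: normal data with known $\sigma$ and a conjugate normal prior on $\mu$), the Bayesian output is a posterior distribution for $\mu$ rather than an accept/reject decision:

```python
# Conjugate normal-normal update: prior mu ~ N(m0, s0^2), data x_i ~ N(mu, sigma^2).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(loc=1.2, scale=1.0, size=40)  # hypothetical data

sigma = 1.0          # assumed known data standard deviation
m0, s0 = 0.0, 10.0   # weakly informative prior on mu

# Standard conjugate posterior for mu
post_prec = 1.0 / s0**2 + len(x) / sigma**2
post_mean = (m0 / s0**2 + x.sum() / sigma**2) / post_prec
post = stats.norm(post_mean, post_prec ** -0.5)

# Report the probability that mu lies in any interval of interest, e.g. mu > 1.
print(f"P(mu > 1 | data) = {post.sf(1.0):.3f}")
```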






answered Sep 6 at 16:25 (edited Sep 7 at 14:12)
– bluepole

0 votes

Put properly, we don't actually test whether an alternative hypothesis is true. It is often described that way, but as far as basic statistics goes, that is incorrect.



We actually test whether there is, or is not, enough evidence to accept some "new"/"novel"/"non-default" hypothesis H. We do this by:



      1. Taking into account what we know (Bayesian style if appropriate);

      2. Choosing a test we think is applicable to the data and hypothesis we are probing, and

      3. Stipulating a point which will be deemed "significant".

      The significance level



This last item, the "significance level", is often a source of confusion. What we are actually asking is: "If the hypothesis is wrong, how exceptional would our results be?" So, supposing we set a significance level of 0.1% ($P = 0.001$), what we are saying is:



      "If our hypothesis is wrong, we just got a 1 in 1000 result by pure chance. That's so unlikely that we conclude the hypothesis is probably correct."



So you can "draw the line" where you like: for some research, such as particle physics, you'd want two separate (independent) experiments, each with a significance level of 1 in some millions, before concluding the hypothesis is probably correct. For a rigged dice game, a 1 in 3 level might be enough to persuade you not to play that game :)



But either way, it is crucial to pick the level beforehand; otherwise you're probably just making a self-serving statement using whatever level you like.
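
A quick simulated sanity check of that point (a hypothetical setup: one-sample $t$-tests on normal data): when the null is actually true, a test run at a pre-chosen level of 0.001 flags roughly 1 in 1000 experiments by chance alone, which is exactly what the level promises.

```python
# When H0 is true, a test at a pre-set alpha rejects with probability ~alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
alpha, n_experiments = 0.001, 100_000

# Each row is one experiment of 20 observations drawn with H0 (mu = 0) true.
X = rng.normal(loc=0.0, scale=1.0, size=(n_experiments, 20))
_, p = stats.ttest_1samp(X, popmean=0.0, axis=1)

print(f"false-positive rate at alpha = {alpha}: {(p < alpha).mean():.4f}")  # ~0.001
```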






answered Sep 6 at 20:56
– Stilez