MP test construction for shifted exponential distribution

For the pdf $f_\theta(x)=e^{-(x-\theta)},\ x \ge \theta$, find a most powerful test of size $\alpha$, using the Neyman–Pearson Lemma, to test $\theta=\theta_0$ against $\theta=\theta_1(>\theta_0)$, based on a sample of size $n$.



I am facing difficulty as the parameter here is range dependent.
However, if $X_{(1)}>\theta_1$, then $f_1(x)>\lambda f_0(x)$ if $e^{n(\theta_1-\theta_0)}>\lambda$, which would mean rejection of the null hypothesis.
But how will I make this test a size $\alpha$ test? The ratio is coming out to be constant.
Please help!
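To see the issue numerically, here is a small sketch (with purely hypothetical values of $\theta_0$, $\theta_1$, and $n$) showing that whenever $x_{(1)} > \theta_1$ the likelihood ratio depends only on $n$ and the two $\theta$'s, not on the data:

```python
import math, random

random.seed(0)

# Hypothetical values chosen purely for illustration.
theta0, theta1, n = 1.0, 1.5, 4

def log_lik(xs, theta):
    # log of the joint density prod_i e^{-(x_i - theta)}, valid on min(xs) >= theta
    if min(xs) < theta:
        return float("-inf")
    return -sum(x - theta for x in xs)

# Two different samples whose minimum exceeds theta1 give the SAME ratio:
for _ in range(2):
    xs = [theta1 + 0.1 + random.expovariate(1.0) for _ in range(n)]
    lr = math.exp(log_lik(xs, theta1) - log_lik(xs, theta0))
    print(round(lr, 6))  # e^{n(theta1 - theta0)} = e^2, regardless of the data
```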










Tags: statistics, statistical-inference, hypothesis-testing






asked Aug 30 at 4:13 by Legend Killer
          3 Answers

















Accepted answer (StubbornAtom, answered Sep 3 at 17:36):
The joint density of the sample $(X_1,X_2,\ldots,X_n)$ is

$$f_\theta(x_1,\ldots,x_n)=\exp\left(-\sum_{i=1}^n(x_i-\theta)\right)\mathbf 1_{x_{(1)}>\theta}\quad,\,\theta>0$$

By the N-P lemma, a most powerful test of size $\alpha$ for testing $H_0:\theta=\theta_0$ against $H_1:\theta=\theta_1(>\theta_0)$ is given by
$$\varphi(x_1,\ldots,x_n)=\begin{cases}1&\text{ if }\lambda(x_1,\ldots,x_n)>k\\0&\text{ if }\lambda(x_1,\ldots,x_n)<k\end{cases}$$

where $$\lambda(x_1,\ldots,x_n)=\frac{f_{\theta_1}(x_1,\ldots,x_n)}{f_{\theta_0}(x_1,\ldots,x_n)}$$

and $k(>0)$ is such that $$E_{\theta_0}\varphi(X_1,\ldots,X_n)=\alpha$$

Now,

\begin{align}
\lambda(x_1,\ldots,x_n)&=\frac{\exp\left(-\sum_{i=1}^n(x_i-\theta_1)\right)\mathbf 1_{x_{(1)}>\theta_1}}{\exp\left(-\sum_{i=1}^n(x_i-\theta_0)\right)\mathbf 1_{x_{(1)}>\theta_0}}
\\&=e^{n(\theta_1-\theta_0)}\,\frac{\mathbf 1_{x_{(1)}>\theta_1}}{\mathbf 1_{x_{(1)}>\theta_0}}
\\&=\begin{cases}e^{n(\theta_1-\theta_0)}&\text{ if }x_{(1)}>\theta_1\\0&\text{ if }\theta_0<x_{(1)}\le\theta_1\end{cases}
\end{align}

So $\lambda(x_1,\ldots,x_n)$ is a monotone non-decreasing function of $x_{(1)}$, which means

$$\lambda(x_1,\ldots,x_n)\gtrless k \iff x_{(1)}\gtrless c,$$ for some $c$ such that $$E_{\theta_0}\varphi(X_1,\ldots,X_n)=\alpha$$

We thus have

$$\varphi(x_1,\ldots,x_n)=\begin{cases}1&\text{ if }x_{(1)}>c\\0&\text{ if }x_{(1)}<c\end{cases}$$

Again,

\begin{align}
E_{\theta_0}\varphi(X_1,\ldots,X_n)&=P_{\theta_0}(X_{(1)}>c)
\\&=\left(P_{\theta_0}(X_1>c)\right)^n
\\&=e^{n(\theta_0-c)}\quad,\,c>\theta_0
\end{align}

So from the size condition we get $$c=\theta_0-\frac{\ln\alpha}{n}$$

Finally, the test function is

$$\varphi(x_1,\ldots,x_n)=\begin{cases}1&\text{ if }x_{(1)}>\theta_0-\frac{\ln\alpha}{n}\\0&\text{ if }x_{(1)}<\theta_0-\frac{\ln\alpha}{n}\end{cases}$$
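As a sanity check on the size condition, one can simulate the test under $H_0$ (a small sketch with hypothetical values of $\theta_0$, $n$, and $\alpha$); the rejection rate of the rule $x_{(1)} > \theta_0 - \ln\alpha/n$ should come out close to $\alpha$:

```python
import math, random

random.seed(0)

theta0, n, alpha = 1.0, 10, 0.05     # hypothetical values for illustration
c = theta0 - math.log(alpha) / n     # critical value from the size condition

# Under H_0 each X_i = theta0 + Exp(1); the test rejects when min(X) > c.
trials = 100_000
rejections = sum(
    min(theta0 + random.expovariate(1.0) for _ in range(n)) > c
    for _ in range(trials)
)
print(rejections / trials)  # close to alpha = 0.05
```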






• Note that I have written $\lambda(x_1,\ldots,x_n)$ for $\theta_1>\theta_0$.
  – StubbornAtom
  Sep 3 at 20:55

















Answer (V. Vancak, answered Aug 30 at 21:43, edited Aug 31 at 11:03):
If $X_{(1)} \in (\theta_0, \theta_1)$, then there is no uncertainty and you are sure that $H_0$ is right. If $X_{(1)} \ge \theta_1$, then the MP test of size $\alpha$ is: reject $H_0$ if
$$
c \le \frac{\exp\{n\theta_1 - \sum x_i\}}{\exp\{n\theta_0 - \sum x_i\}}
= \exp\{n(\theta_1 - \theta_0)\},
$$
which is clearly not helpful, as it is constant for every $n$. However, note that the LR is a monotone increasing function of $\theta_1$; hence, using the fact that $X_{(1)} \sim \mathcal{Exp}_{\theta_1}(n)$, the general form of the MP test follows from
$$
\alpha = \mathbb{E}_{\theta_1} I\{X_{(1)} > c\} = \mathbb{P}_{\theta_1}(X_{(1)} > c) = \exp\{n(\theta_1 - c)\},
$$
i.e., the MP test is
$$
I\left\{X_{(1)} > \theta_1 - \frac{\ln \alpha}{n}\right\}
$$
for $X_{(1)} \ge \theta_1$, and $0$ otherwise.
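The distributional fact used above, that the minimum of $n$ shifted-exponential draws is itself shifted exponential with rate $n$, can be checked by simulation (hypothetical $\theta_1$, $n$, and threshold $c$ chosen for illustration):

```python
import math, random

random.seed(1)

theta1, n, c = 2.0, 5, 2.5   # hypothetical values; any c > theta1 works

# Empirical check that P(X_(1) > c) = exp(n * (theta1 - c)) when
# each X_i = theta1 + Exp(1).
trials = 200_000
hits = sum(
    min(theta1 + random.expovariate(1.0) for _ in range(n)) > c
    for _ in range(trials)
)
print(round(hits / trials, 3), round(math.exp(n * (theta1 - c)), 3))
```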






• I don't understand what you wrote here: $$\alpha = \mathbb{E}_{\theta_1} I\{X_{(1)} > \} = \exp\{n(\theta_1 - c)\},$$ i.e., the MP is $$I\left\{X_{(1)} > \theta_1 - \frac{\ln \alpha}{n}\right\}.$$
  – Legend Killer
  Aug 31 at 3:21











          • Missed the $c$. Corrected.
            – V. Vancak
            Aug 31 at 10:55










• Basically, as the LR is monotone and you have a one-sided hypothesis, the MP test is just of the form "reject $H_0$ if the minimal sufficient statistic for $\theta$ is large enough", and then you just find how much is "enough". The same idea appears with the uniform distribution $U[0,\theta]$. In both cases the support depends on the unknown parameter, hence the LR "as is" is not helpful.
  – V. Vancak
  Aug 31 at 11:07

















Answer (BruceET, answered Aug 30 at 21:31):
Comment: This is a tricky problem, pretty much for the reason you mention.

It may help to consider the case $n = 1$ for $\theta_0 = 1,\,\theta_1 = 5.$
Then plots of the PDF are shown below. Suppose we agree to reject $H_0: \theta = 1$
against $H_a: \theta = 5$ when the single observation (also the smallest) satisfies $X > 5,$ and otherwise fail to reject.
Then it is easy to see that the significance level of the test is $\alpha = e^{-4} \approx 0.018.$

Can you write the LR in this case? When you understand the problem for $n = 1,$
then go on to the general case.

[Plot of the two PDFs, for $\theta = 1$ and $\theta = 5$]
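For the $n = 1$ case above, the significance level is just the upper tail of the $\theta_0 = 1$ density, which can be computed directly:

```python
import math

theta0, cutoff = 1.0, 5.0      # values from the example above
# P_{theta0}(X > cutoff) for the pdf e^{-(x - theta0)}, x >= theta0
alpha = math.exp(-(cutoff - theta0))
print(round(alpha, 4))  # ~0.0183
```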





