MP test construction for shifted exponential distribution
For the pdf $f_\theta(x)=e^{-(x-\theta)},\ x \ge \theta$, find a most powerful test of size $\alpha$, using the Neyman–Pearson lemma, to test $\theta=\theta_0$ against $\theta=\theta_1\,(>\theta_0)$, based on a sample of size $n$.
I am facing difficulty because the parameter here determines the support of the distribution.
However, if $X_{(1)}>\theta_1$, then $f_1(x)>\lambda f_0(x)$ whenever $e^{n(\theta_1-\theta_0)}>\lambda$, which would mean rejecting the null hypothesis.
But how will I make this a size $\alpha$ test? The likelihood ratio comes out to be a constant.
Please help!
statistics statistical-inference hypothesis-testing
asked Aug 30 at 4:13 by Legend Killer
3 Answers
The joint density of the sample $(X_1,X_2,\ldots,X_n)$ is
$$f_\theta(x_1,\ldots,x_n)=\exp\left(-\sum_{i=1}^n(x_i-\theta)\right)\mathbf 1_{x_{(1)}>\theta}\quad,\,\theta>0$$
By the N–P lemma, a most powerful test of size $\alpha$ for testing $H_0:\theta=\theta_0$ against $H_1:\theta=\theta_1\,(>\theta_0)$ is given by
$$\varphi(x_1,\ldots,x_n)=\begin{cases}1&,\text{ if }\lambda(x_1,\ldots,x_n)>k\\0&,\text{ if }\lambda(x_1,\ldots,x_n)<k\end{cases}$$
where
$$\lambda(x_1,\ldots,x_n)=\frac{f_{\theta_1}(x_1,\ldots,x_n)}{f_{\theta_0}(x_1,\ldots,x_n)}$$
and $k\,(>0)$ is such that
$$E_{\theta_0}\varphi(X_1,\ldots,X_n)=\alpha$$
Now,
\begin{align}
\lambda(x_1,\ldots,x_n)&=\frac{\exp\left(-\sum_{i=1}^n(x_i-\theta_1)\right)\mathbf 1_{x_{(1)}>\theta_1}}{\exp\left(-\sum_{i=1}^n(x_i-\theta_0)\right)\mathbf 1_{x_{(1)}>\theta_0}}
\\&=e^{n(\theta_1-\theta_0)}\,\frac{\mathbf 1_{x_{(1)}>\theta_1}}{\mathbf 1_{x_{(1)}>\theta_0}}
\\&=\begin{cases}e^{n(\theta_1-\theta_0)}&,\text{ if }x_{(1)}>\theta_1\\0&,\text{ if }\theta_0<x_{(1)}\le\theta_1\end{cases}
\end{align}
So $\lambda(x_1,\ldots,x_n)$ is a monotone non-decreasing function of $x_{(1)}$, which means
$$\lambda(x_1,\ldots,x_n)\gtrless k \iff x_{(1)}\gtrless c$$
for some $c$ such that
$$E_{\theta_0}\varphi(X_1,\ldots,X_n)=\alpha$$
We thus have
$$\varphi(x_1,\ldots,x_n)=\begin{cases}1&,\text{ if }x_{(1)}>c\\0&,\text{ if }x_{(1)}<c\end{cases}$$
Again,
\begin{align}
E_{\theta_0}\varphi(X_1,\ldots,X_n)&=P_{\theta_0}(X_{(1)}>c)
\\&=\left(P_{\theta_0}(X_1>c)\right)^n
\\&=e^{n(\theta_0-c)}\quad,\,c>\theta_0
\end{align}
So from the size condition we get
$$c=\theta_0-\frac{\ln\alpha}{n}$$
Finally, the test function is
$$\varphi(x_1,\ldots,x_n)=\begin{cases}1&,\text{ if }x_{(1)}>\theta_0-\frac{\ln\alpha}{n}\\0&,\text{ if }x_{(1)}<\theta_0-\frac{\ln\alpha}{n}\end{cases}$$
answered Sep 3 at 17:36 by StubbornAtom (accepted)
Note that I have written $\lambda(x_1,\ldots,x_n)$ for $\theta_1>\theta_0$. – StubbornAtom, Sep 3 at 20:55
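To see the size condition from the accepted answer in action, here is a minimal Monte Carlo sketch; the parameter values $\theta_0=1$, $\theta_1=1.2$, $n=10$, $\alpha=0.05$ are illustrative choices, not from the thread. It draws samples from the shifted exponential and checks that the rule $x_{(1)}>\theta_0-\ln\alpha/n$ rejects with probability close to $\alpha$ under $H_0$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) values, not taken from the original question
theta0, theta1, n, alpha = 1.0, 1.2, 10, 0.05
reps = 200_000

# Critical value from the size condition e^{n(theta0 - c)} = alpha
c = theta0 - np.log(alpha) / n

def sample_min(theta, size):
    # X_i = theta + standard exponential, so the row-wise minimum is X_(1)
    x = theta + rng.exponential(scale=1.0, size=(size, n))
    return x.min(axis=1)

size_hat = np.mean(sample_min(theta0, reps) > c)   # should be close to alpha
power_hat = np.mean(sample_min(theta1, reps) > c)  # power at theta1

print(f"c = {c:.4f}, estimated size = {size_hat:.4f}, estimated power = {power_hat:.4f}")
```

The exact values are size $=e^{n(\theta_0-c)}=\alpha$ and, since $\theta_1<c$ with these numbers, power $=e^{n(\theta_1-c)}\approx 0.37$; if $\theta_1$ exceeded $c$, the power would be exactly $1$.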
If $X_{(1)} \in (\theta_0, \theta_1)$, then there is no uncertainty and you are sure that $H_0$ is right. If $X_{(1)} \ge \theta_1$, then the MP test of size $\alpha$ is: reject $H_0$ if
$$
c\le\frac{\exp\{n\theta_1 - \sum x_i\}}{\exp\{n\theta_0 - \sum x_i\}}
=
\exp\{n(\theta_1 - \theta_0)\},
$$
which is clearly not helpful, as it is constant for every $n$. However, note that the LR is a monotone increasing function of $\theta_1$; hence, using the fact that $X_{(1)} \sim \mathcal{Exp}_{\theta_1}(n)$, the general form of the MP test follows from
$$
\alpha = \mathbb{E}_{\theta_1}I\{X_{(1)} > c\} = \mathbb{P}_{\theta_1}(X_{(1)} > c) = \exp\{n(\theta_1 - c)\},
$$
i.e., the MP test is
$$
I\left\{X_{(1)} > \theta_1-\frac{\ln \alpha}{n}\right\}
$$
for $X_{(1)} \ge \theta_1$, and $0$ otherwise.
edited Aug 31 at 11:03, answered Aug 30 at 21:43 by V. Vancak
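For reference, the distributional fact invoked here, namely that the minimum of $n$ i.i.d. shifted exponentials is itself exponential with rate $n$ shifted by the same $\theta$, is a one-line survival-function computation:
$$P_\theta\left(X_{(1)} > t\right)=\prod_{i=1}^n P_\theta(X_i > t)=\left(e^{-(t-\theta)}\right)^n=e^{-n(t-\theta)},\qquad t\ge\theta.$$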
I don't understand what you wrote here: $$\alpha = \mathbb{E}_{\theta_1}I\{X_{(1)} > \ \} = \exp\{n(\theta_1 - c)\},$$ i.e., the MP is $$I\left\{X_{(1)} > \theta_1-\frac{\ln \alpha}{n}\right\}.$$ – Legend Killer, Aug 31 at 3:21
Missed the $c$. Corrected. – V. Vancak, Aug 31 at 10:55
Basically, as the LR is monotone and you have a one-sided hypothesis, the MP test is just of the form "reject $H_0$ if the minimal sufficient statistic for $\theta$ is large enough", and then you just find how much is "enough". It is the same idea you have with the uniform distribution $U[0,\theta]$. In both cases the support depends on the unknown parameter, hence the LR "as is" is not helpful. – V. Vancak, Aug 31 at 11:07
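To make the $U[0,\theta]$ analogy concrete, a sketch of the corresponding standard calculation (same one-sided setup, $\theta_1>\theta_0$): for $X_1,\ldots,X_n$ i.i.d. $U[0,\theta]$,
$$\frac{f_{\theta_1}(x_1,\ldots,x_n)}{f_{\theta_0}(x_1,\ldots,x_n)}=\left(\frac{\theta_0}{\theta_1}\right)^n\frac{\mathbf 1\{x_{(n)}\le\theta_1\}}{\mathbf 1\{x_{(n)}\le\theta_0\}},$$
so the ratio is again constant on the common support and only jumps where the supports differ; the MP test reduces to rejecting for large $x_{(n)}$, with the cutoff $c=\theta_0(1-\alpha)^{1/n}$ fixed by $P_{\theta_0}(X_{(n)}>c)=1-(c/\theta_0)^n=\alpha$.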
Comment: This is a tricky problem, pretty much for the reason you mention.
It may help to consider the case $n = 1$ with $\theta_0 = 1,\,\theta_1 = 5.$
Plots of the two PDFs are shown below. Suppose we agree to reject $H_0: \theta = 1$
against $H_a: \theta = 5$ when the single observation (also the smallest) satisfies $X > 5,$ and otherwise fail to reject.
Then it is easy to see that the significance level of the test is $\alpha = e^{-(5-1)} \approx 0.018.$
Can you write the LR in this case? When you understand the problem for $n = 1,$
then go on to the general case.
answered Aug 30 at 21:31 by BruceET
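The figure referred to above is not reproduced here; the following is a minimal matplotlib sketch (a reconstruction, not BruceET's original figure, using the same illustrative values $\theta_0=1$, $\theta_1=5$) that draws the two densities and computes the size of the rule "reject when $X>5$".

```python
import numpy as np
import matplotlib.pyplot as plt

theta0, theta1, cutoff = 1.0, 5.0, 5.0   # values from the example above

x = np.linspace(0, 12, 600)
pdf = lambda x, th: np.where(x >= th, np.exp(-(x - th)), 0.0)

plt.plot(x, pdf(x, theta0), label=r"$\theta_0 = 1$")
plt.plot(x, pdf(x, theta1), label=r"$\theta_1 = 5$")
plt.axvline(cutoff, linestyle="--", color="gray", label="reject if $X > 5$")
plt.xlabel("x")
plt.ylabel("density")
plt.legend()
plt.show()

# Significance level: P_{theta_0}(X > 5) = exp(-(5 - 1)) = exp(-4)
print(np.exp(-(cutoff - theta0)))   # about 0.0183
```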