MLE of continuous uniform distribution
A series of $n$ geomagnetic readings are taken from a meter, but the readings are judged to be approximate and unreliable. The chief scientist does know, however, that the true values are all positive, and she suggests that an approximate model for the readings is that they are independent observations of a random variable which is uniformly distributed on $(0,\theta)$, where $\theta > 1$.
Suppose that the chief scientist knows only that the number, $M$, of the readings which are less than $1$ is $m$, with the remaining $n-m$ being greater than $1$, and that she adopts the model as suggested above.
Demonstrate that the maximum likelihood estimate of $\theta$ is $n/m$.
I don't understand why the binomial distribution is used to find the MLE; that is the method given in the answers. The problem even says "she adopts the model as suggested above", so shouldn't the uniform distribution be used?
statistics uniform-distribution maximum-likelihood
asked Aug 10 at 19:22 by cooldudeman
edited Aug 10 at 20:07 by Raskolnikov
2 Answers
Given $\theta > 1$, the probability for a measurement to be less than $1$ is $\frac1\theta$. Thus the likelihood for $m$ out of $n$ measurements to be less than $1$ is
$$
\mathsf P(M=m\mid\theta)=\binom nm\left(\frac1\theta\right)^m\left(1-\frac1\theta\right)^{n-m}\;.
$$
Setting the derivative of the log-likelihood with respect to $p=\frac1\theta$ to zero yields
$$
\frac mp-\frac{n-m}{1-p}=0\;,
$$
with solution $p=\frac mn$ and thus $\theta=\frac nm$.

answered Aug 10 at 20:03 by joriki
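As a quick numerical sanity check, here is a minimal NumPy sketch (the values of $n$ and $m$ are hypothetical) that evaluates this log-likelihood on a grid of $p$ and confirms that the maximum sits at $p = m/n$, i.e. $\theta = n/m$:

```python
import numpy as np

# Hypothetical counts: n readings, m of them below 1.
n, m = 50, 10

# Binomial log-likelihood in p = 1/theta (the constant binom(n, m) term is dropped).
p = np.linspace(1e-6, 1 - 1e-6, 200_001)
log_lik = m * np.log(p) + (n - m) * np.log(1 - p)

p_hat = p[np.argmax(log_lik)]
print(p_hat, m / n)        # p_hat ~ 0.2 = m/n
print(1 / p_hat, n / m)    # theta_hat ~ 5.0 = n/m
```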
I don't understand how you knew that they were asking for what you wrote in your second sentence. Please explain? – cooldudeman, Aug 10 at 20:06

@cooldudeman: You're looking for a maximum likelihood estimate, so we need the likelihood of the observed data as a function of the unknown parameter $\theta$. The observed data is the number $M$ of readings which are less than $1$. Since the readings are assumed to be independently uniformly distributed on $(0,\theta)$ with $\theta > 1$, there are two possibilities for each reading: with probability $\frac1\theta$ it is less than $1$, and with probability $1-\frac1\theta$ it is greater than $1$. Thus $M$ is binomially distributed with parameters $n$ and $\frac1\theta$. – joriki, Aug 10 at 21:25
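A short simulation supports the claim that $M\sim\text{Binomial}(n,\frac1\theta)$; the values of $\theta$, $n$, and the number of replications below are hypothetical, and NumPy is assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.5, 40, 200_000  # hypothetical values for illustration

# Simulate many replications of the experiment and count M = #{X_i < 1} in each.
x = rng.uniform(0.0, theta, size=(reps, n))
M = (x < 1).sum(axis=1)

# If M ~ Binomial(n, 1/theta), its empirical mean and variance
# should match n*p and n*p*(1-p).
p = 1 / theta
print(M.mean(), n * p)            # both ~ 16.0
print(M.var(), n * p * (1 - p))   # both ~ 9.6
```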
My guess is that this is being done along the following lines:
Mainly, we are using the invariance property of maximum likelihood estimators.
Let $X_i$ be i.i.d. $U(0,\theta)$ variables for $i=1,2,\cdots,n$, i.e. our original sample of size $n$.
Define $Y_i=\begin{cases}1 &\text{ if } X_i<1\\ 0 &\text{ otherwise}\end{cases}$ for each $i=1,2,\cdots,n$.
Then the $Y_i$'s are i.i.d. $\text{Ber}(p)$ with $p=P(X_1<1)=1/\theta\,,\quad\theta>1$.
If $\bar y$ is the sample mean of the $Y_i$'s, then we can show that
\begin{align}\hat p_{\text{MLE}}&=\overline y\\\implies \frac{1}{\hat\theta_{\text{MLE}}}&=\overline y\\\implies\hat\theta_{\text{MLE}}&=\frac{1}{\overline y}
\end{align}
As the observed value of $\overline y$ is $m/n$, we get $$\hat\theta_{\text{MLE}}=\frac nm$$

answered Aug 10 at 20:01 by StubbornAtom
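For a concrete illustration of this argument, one can simulate the $X_i$, form the indicators $Y_i$, and check that $1/\overline y$ recovers $\theta$; this sketch assumes NumPy, and the true $\theta$ and sample size are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n = 3.0, 100_000   # hypothetical true parameter and sample size

x = rng.uniform(0.0, theta, size=n)   # X_i ~ U(0, theta)
y = (x < 1).astype(float)             # Y_i = 1 if X_i < 1, else 0

y_bar = y.mean()          # hat{p}_MLE = sample mean of the Y_i, i.e. m/n
theta_hat = 1.0 / y_bar   # invariance: hat{theta}_MLE = 1 / hat{p}_MLE

print(theta_hat)          # close to the true theta = 3.0 for large n
```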