MLE of continuous uniform distribution













A series of $n$ geomagnetic readings are taken from a meter, but the readings are judged to be approximate and unreliable. The chief scientist does know, however, that the true values are all positive, and she suggests that an approximate model for the readings is that they are independent observations of a random variable which is uniformly distributed on $(0,\theta)$, where $\theta > 1$.



Suppose that the chief scientist knows only that the number, $M$, of the readings which are less than $1$ is $m$, with the remaining $n-m$ being greater than $1$, and that she adopts the model as suggested above.



Demonstrate that the maximum likelihood estimate of $\theta$ is $n/m$.




I don't understand why the binomial distribution is used to find the MLE; that is the method given in the answers. It even says "she adopts the model as suggested above", so shouldn't the uniform distribution be used?







asked Aug 10 at 19:22 by cooldudeman, edited Aug 10 at 20:07 by Raskolnikov




















2 Answers



















Given $\theta\gt1$, the probability for a measurement to be less than $1$ is $\frac{1}{\theta}$. Thus the likelihood for $m$ out of $n$ measurements to be less than $1$ is



$$
\mathsf P(M=m\mid\theta)=\binom{n}{m}\left(\frac{1}{\theta}\right)^m\left(1-\frac{1}{\theta}\right)^{n-m}\;.
$$



Setting the derivative of the log-likelihood with respect to $p=\frac{1}{\theta}$ to zero yields



$$
\frac{m}{p}-\frac{n-m}{1-p}=0\;,
$$



with solution $p=\frac{m}{n}$ and thus $\theta=\frac{n}{m}$.
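As a quick numerical sanity check, the binomial log-likelihood can be maximized on a grid; a minimal Python sketch follows, where the counts $n=50$ and $m=20$ are hypothetical example values.

```python
import numpy as np

# Hypothetical example counts: n readings in total, m of them below 1.
n, m = 50, 20

# Log-likelihood of the binomial model as a function of p = 1/theta;
# the binomial coefficient does not depend on p, so it is dropped.
def log_likelihood(p):
    return m * np.log(p) + (n - m) * np.log(1 - p)

# Grid search over p in (0, 1); theta > 1 corresponds to 0 < p < 1.
p_grid = np.linspace(1e-6, 1 - 1e-6, 1_000_000)
p_hat = p_grid[np.argmax(log_likelihood(p_grid))]

print(p_hat)      # ~ 0.4 = m/n
print(1 / p_hat)  # ~ 2.5 = n/m
```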






answered Aug 10 at 20:03 by joriki




















• I don't understand how you knew that they were asking for what you wrote in your second sentence. Please explain?
  – cooldudeman
  Aug 10 at 20:06











• @cooldudeman: You're looking for a maximum likelihood estimate. Thus we need the likelihood of the observed data as a function of the unknown parameter $\theta$. The observed data is the number $M$ of readings which are less than $1$. Since the readings are assumed to be independently uniformly distributed on $(0,\theta)$, with $\theta\gt1$, there are two possibilities for each reading. With probability $\frac{1}{\theta}$ it's less than $1$, and with probability $1-\frac{1}{\theta}$ it's greater than $1$. Thus $M$ is binomially distributed with parameters $n$ and $\frac{1}{\theta}$.
  – joriki
  Aug 10 at 21:25






























          My guess is that this is being done along the following lines:



          Mainly, we are using the invariance property of maximum likelihood estimators.



Let $X_i$ be i.i.d. $U(0,\theta)$ variables for $i=1,2,\cdots,n$, i.e. our original sample of size $n$.



Define $Y_i=\begin{cases}1 &\text{ if } X_i<1\\0 &\text{ otherwise}\end{cases}$ for each $i=1,2,\cdots,n$.



Then the $Y_i$'s are i.i.d. $\text{Ber}(p)$ for each $i$ with $p=P(X_1<1)=1/\theta\,,\quad\theta>1$.



If $\bar y$ is the sample mean of the $Y_i$'s, then we can show that



\begin{align}
\hat p_{\text{MLE}} &= \overline{y}\\
\implies \frac{1}{\hat\theta_{\text{MLE}}} &= \overline{y}\\
\implies \hat\theta_{\text{MLE}} &= \frac{1}{\overline{y}}
\end{align}



As the observed value of $\overline{y}$ is $m/n$, we get $$\hat\theta_{\text{MLE}}=\frac{n}{m}\,.$$
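To see the indicator construction in action, here is a minimal simulation sketch, assuming a true value of $\theta=2.5$; it draws uniform readings on $(0,\theta)$, forms the $Y_i$'s, and recovers $\theta$ as $1/\bar y$.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 2.5  # assumed true parameter, for illustration only
n = 100_000

x = rng.uniform(0.0, theta_true, size=n)  # X_i ~ U(0, theta)
y = (x < 1).astype(float)                 # Y_i = 1 if X_i < 1, else 0

# The Bernoulli MLE is the sample mean y_bar = m/n; by the invariance
# property, the MLE of theta = 1/p is theta_hat = 1/y_bar = n/m.
theta_hat = 1.0 / y.mean()
print(theta_hat)  # close to 2.5 for large n
```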






answered Aug 10 at 20:01 by StubbornAtom



















