Maximizing expected value with constrained 2nd moment

$$\begin{array}{ll} \text{maximize} & \displaystyle\int_0^1 x \, f(x) \, \mathrm dx\\ \text{subject to} & \displaystyle\int_0^1 f(x) \, \mathrm dx = 1\\ & \displaystyle\int_0^1 x^2 f(x) \, \mathrm dx = 1\\ & f(x) \geq 0 \quad \forall x \in [0,1] \end{array}$$



A few years ago I studied calculus of variations, but for some reason I keep chasing my tail on this problem. If my recollection is even close to the mark, we start with
$$L=\int_0^1 f(x)\cdot x \, dx + \lambda_1 \cdot \left( \int_0^1 f(x) \, dx - 1\right) + \lambda_2 \cdot \left( \int_0^1 f(x) \cdot x^2 \, dx - 1\right)$$



And then Euler–Lagrange drops $f$ completely and gives



$$x+\lambda_1 +\lambda_2 x^2=0$$



If instead the constraints are treated as functional, with multipliers $\lambda_i(x)$,



$$L=\int_0^1 \left[ f(x)\cdot x + \lambda_1(x) \cdot \left( f(x) - 1\right) + \lambda_2(x) \cdot \left( f(x) \cdot x^2 - 1\right) \right] dx$$



gives



$$x+\lambda_1(x) +\lambda_2(x) x^2=0$$



But from here I'm not sure how to enforce the partials with respect to the $\lambda$'s, which, if taken directly, seem to contradict each other...



Conceptually there should be a solution. I'd appreciate some tips on this refresher.



Edit:



The only solution to the constraints is discontinuous, so the problem is poorly posed. What about the version where the expected value of $X$ is to be minimized, subject to $X>0$ and the second central moment (variance) being $1$? I think that version hits the same roadblocks as above, but it is a nontrivial computation. The goal is to find the PDF $f(x)$ on $x>0$.
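
As a numerical illustration of where the constraints push the mass (a minimal sketch of my own, assuming a uniform grid and using `scipy.optimize.linprog` rather than the variational setup), discretizing the original problem as a linear program shows the optimizer concentrating everything at $x=1$:

```python
# Minimal illustrative sketch (not part of the original question): discretize
# [0, 1], treat the density values f_i >= 0 at the grid points as LP variables,
# and maximize the discretized first moment subject to the two moment
# constraints.  The solver puts all the mass on the last grid point, i.e. the
# discrete analogue of f(x) = delta(x - 1).
import numpy as np
from scipy.optimize import linprog

n = 200
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / n)               # crude uniform quadrature weights

c = -(w * x)                          # linprog minimizes, so negate E[X]
A_eq = np.vstack([w, w * x**2])       # total mass = 1, second moment = 1
b_eq = np.array([1.0, 1.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
assert res.success
print("max first moment ≈", -res.fun)                  # ≈ 1
print("grid points carrying mass:", x[res.x > 1e-6])   # essentially only x = 1
```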





























  • Yes. Thank you. Posted extra constraint. I planned to impose that afterwards, but it would probably be best to include that term in the functional as well.
    – szatkosa
    Aug 8 at 17:41











  • One simple observation is that $0 \leq \int_0^1 (x-1)^2 f(x)\, dx = 2 - 2 \int_0^1 x f(x)\, dx$, so the value of $\int_0^1 x f(x)\, dx$ cannot exceed $1$.
    – Connor Harris
    Aug 8 at 17:45










  • I believe that the maximum is given by $f(x) = \delta(x-1)$ where $\delta$ is the Dirac delta function.
    – Connor Harris
    Aug 8 at 17:48










  • Which, by forcing a discontinuous $f$, would make this formulation unsolvable...
    – szatkosa
    Aug 8 at 17:51











  • When the probability density function is the Dirac delta, the associated random variable becomes a constant (not so random anymore), right? Is the problem you are tackling precisely this, or are you just showing the part where you got stuck?
    – Vectorizer
    Aug 8 at 17:53














asked Aug 8 at 17:25, edited Aug 8 at 17:56 – szatkosa

1 Answer






























The problem is impossible as stated with $f \geq 0$. The constraints $\int_0^1 f(x)\, dx = \int_0^1 x^2 f(x)\, dx = 1$ may be combined as $\int_0^1 (1-x^2) f(x)\, dx = 0$. As $1-x^2$ is nonnegative on $[0, 1]$ and positive except at $1$, this constraint is impossible unless $f$ is the Dirac distribution $\delta(x-1)$.



If the restriction $f \geq 0$ is relaxed, there is no maximum. Let $F$ be an antiderivative of $f$ such that $F(0) = 0$, $F(1) = 1$. In this case, integrating by parts gives the maximized quantity as
$$\begin{align*} \int_0^1 x f(x)\, dx &= \left. x F(x)\right|_0^1 - \int_0^1 F(x)\, dx \\ &= 1 - \int_0^1 F(x)\, dx \end{align*}$$
so we have to minimize $\int_0^1 F(x)\, dx$ with the constraints $F(0) = 0$, $F(1) = 1$, and
$$\begin{align*} 1 &= \int_0^1 x^2 f(x)\, dx \\ &= \left.x^2 F(x)\right|_0^1 - 2 \int_0^1 x F(x)\, dx \\ &= 1 - 2 \int_0^1 x F(x)\, dx \\ 0 &= \int_0^1 x F(x)\, dx \end{align*}$$
but we can always decrease $\int_0^1 F(x)\, dx$ and preserve $\int_0^1 x F(x)\, dx$ by decreasing the values of $F$ near $x=0$ and increasing them a lesser amount near $x = 1$ (construction of an explicit example is left as an exercise to the reader).
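
For a concrete instance (one possible construction of my own, not claimed to be the one the answer has in mind): the one-parameter family of cubics
$$F_a(x) = a\,x - \left(4 + \tfrac{8a}{3}\right)x^2 + \left(5 + \tfrac{5a}{3}\right)x^3$$
satisfies $F_a(0)=0$, $F_a(1)=1$ and $\int_0^1 x F_a(x)\, dx = 0$ for every $a$, while $\int_0^1 F_a(x)\, dx = \tfrac{a}{36} - \tfrac{1}{12} \to -\infty$ as $a \to -\infty$, so the objective $\int_0^1 x f_a(x)\, dx = \tfrac{13}{12} - \tfrac{a}{36}$ (with $f_a = F_a'$) is unbounded above. A quick symbolic sanity check of these moments, as a sketch:

```python
# Sketch verification (my own, not from the answer) of the cubic family above.
import sympy as sp

x, a = sp.symbols('x a')
F = a*x - (4 + sp.Rational(8, 3)*a)*x**2 + (5 + sp.Rational(5, 3)*a)*x**3
f = sp.diff(F, x)                          # the (possibly negative) density f_a

print(sp.integrate(f, (x, 0, 1)))          # 1            (total mass)
print(sp.integrate(x**2 * f, (x, 0, 1)))   # 1            (second moment)
print(sp.integrate(x * f, (x, 0, 1)))      # 13/12 - a/36 (unbounded as a -> -oo)
```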






answered Aug 8 at 18:04 – Connor Harris