Maximizing expected value with constrained 2nd moment
$$\begin{array}{ll} \text{maximize} & \displaystyle\int_0^1 x \, f(x) \, \mathrm dx\\ \text{subject to} & \displaystyle\int_0^1 f(x) \, \mathrm dx = 1\\ & \displaystyle\int_0^1 x^2 f(x) \, \mathrm dx = 1\\ & f(x) \geq 0 \quad \forall x \in [0,1] \end{array}$$
A few years ago I studied calculus of variations, but for some reason I keep chasing my tail on this problem. If my recollection is even close to the mark, we start with
$$L=\int_0^1 x\, f(x)\, dx + \lambda_1 \left( \int_0^1 f(x)\, dx - 1\right) + \lambda_2 \left( \int_0^1 x^2 f(x)\, dx - 1\right)$$
Then Euler–Lagrange drops $f$ completely and gives
$$x+\lambda_1 +\lambda_2 x^2=0.$$
If instead the multipliers are taken to be functions of $x$ (the constraints enforced inside the integral), then
$$L=\int_0^1 \Big[ x\, f(x) + \lambda_1(x) \left( f(x) - 1\right) + \lambda_2(x) \left( x^2 f(x) - 1\right) \Big]\, dx$$
gives
$$x+\lambda_1(x) +\lambda_2(x)\, x^2=0.$$
But I'm not sure how to enforce the $\lambda$ partials from here; taken directly, they seem to contradict each other...
Conceptually there should be a solution. I'd appreciate some tips on this refresher.
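For reference, one standard way to handle the pointwise constraint $f(x) \geq 0$ (this is the textbook KKT device, not something derived above) is to give it its own multiplier function $\mu(x) \geq 0$ with complementary slackness $\mu(x) f(x) = 0$. Stationarity in $f$ then reads
$$x + \lambda_1 + \lambda_2 x^2 + \mu(x) = 0 \quad \text{for a.e. } x \in [0,1],$$
so wherever $f(x) > 0$ we need $x + \lambda_1 + \lambda_2 x^2 = 0$, which a nonzero quadratic can satisfy at no more than two points. That already suggests a maximizer would have to concentrate its mass on at most two points rather than be a genuine density.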
Edit:
The only solution to the constraints is discontinuous, so this formulation is poorly posed. What about the following version instead: minimize the expected value of $X$ subject to $X>0$ and the second central moment (variance) equal to $1$? I think that version hits the same roadblocks as above, but it is a nontrivial computation. The goal is to find the PDF $f(x)$ on $x>0$.
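A quick numerical sanity check of the first point (a minimal sketch, not part of the derivation above; the uniform spike $f_\varepsilon = \varepsilon^{-1}$ on $[1-\varepsilon,1]$ is just a convenient approximation to a point mass at $1$):

```python
import numpy as np

# Approximate a point mass at x = 1 by the density f_eps = 1/eps on [1 - eps, 1].
for eps in [0.5, 0.1, 0.01, 0.001]:
    n = 200_000
    x = 1 - eps + (np.arange(n) + 0.5) * (eps / n)   # midpoints of a grid on [1-eps, 1]
    w = eps / n                                      # width of each sub-interval
    fx = np.full(n, 1.0 / eps)                       # constant density value on the grid
    mass   = np.sum(fx * w)                          # integral of f       -> 1 exactly
    mean   = np.sum(x * fx * w)                      # integral of x f     -> 1 - eps/2
    second = np.sum(x**2 * fx * w)                   # integral of x^2 f   -> 1 - eps + eps^2/3
    print(f"eps={eps:<6} mass={mass:.4f}  E[X]={mean:.6f}  E[X^2]={second:.6f}")
```

The normalization is exact for every $\varepsilon$, but $E[X^2]$ only approaches $1$ as $\varepsilon \to 0$, where $f_\varepsilon$ stops being a function.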
statistics optimization calculus-of-variations moment-problem
asked Aug 8 at 17:25 by szatkosa, edited Aug 8 at 17:56
Yes. Thank you. Posted the extra constraint. I planned to impose that afterwards, but it would probably be best to include that term in the functional as well. – szatkosa, Aug 8 at 17:41
One simple observation is that $0 \leq \int_0^1 (x-1)^2 f(x)\, dx = 2 - 2 \int_0^1 x f(x)\, dx$, so the value of $\int_0^1 x f(x)\, dx$ cannot exceed $1$. – Connor Harris, Aug 8 at 17:45
I believe that the maximum is given by $f(x) = \delta(x-1)$, where $\delta$ is the Dirac delta function. – Connor Harris, Aug 8 at 17:48
Which, by forcing a discontinuous $f$, would make this formulation unsolvable... – szatkosa, Aug 8 at 17:51
When the probability density function is the Dirac delta, the associated random variable(!) becomes a constant (not so random anymore), right? Is the problem you are tackling precisely this, or are you just showing the part where you got stuck? – Vectorizer, Aug 8 at 17:53
1 Answer
The problem is impossible as stated with $f \geq 0$. The constraints $\int_0^1 f(x)\, dx = \int_0^1 x^2 f(x)\, dx = 1$ may be combined as $\int_0^1 (1-x^2) f(x)\, dx = 0$. As $1-x^2$ is nonnegative on $[0, 1]$ and positive except at $x=1$, this constraint is impossible except if $f$ is the Dirac distribution $\delta(x-1)$.
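The same thing can be seen numerically with a minimal discretized sketch (using `scipy.optimize.linprog`; the grid size and solver are arbitrary choices): replace the density by point masses $w_i \ge 0$ at grid points $x_i$, impose $\sum_i w_i = 1$ and $\sum_i x_i^2 w_i = 1$, and maximize $\sum_i x_i w_i$. The only feasible, hence optimal, choice puts all mass on the grid point at $x = 1$.

```python
import numpy as np
from scipy.optimize import linprog

# Discretize: point masses w_i >= 0 at grid points x_i in [0, 1].
n = 101
x = np.linspace(0.0, 1.0, n)

c = -x                              # linprog minimizes, so minimize -E[X] to maximize E[X]
A_eq = np.vstack([np.ones(n),       # sum_i w_i       = 1  (total probability)
                  x**2])            # sum_i x_i^2 w_i = 1  (second moment)
b_eq = np.array([1.0, 1.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("max E[X] =", -res.fun)       # -> 1.0
print("mass at x = 1:", res.x[-1])  # -> 1.0; every other weight is 0
```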
If the restriction $f \geq 0$ is relaxed, there is no maximum. Let $F$ be an antiderivative of $f$ such that $F(0) = 0$, $F(1) = 1$. In this case, integrating by parts gives the maximized quantity as
\begin{align*} \int_0^1 x f(x)\, dx &= \left. x F(x)\right|_0^1 - \int_0^1 F(x)\, dx \\ &= 1 - \int_0^1 F(x)\, dx \end{align*}
so we have to minimize $\int_0^1 F(x)\, dx$ with the constraints $F(0) = 0$, $F(1) = 1$, and
\begin{align*} 1 &= \int_0^1 x^2 f(x)\, dx \\ &= \left. x^2 F(x)\right|_0^1 - 2 \int_0^1 x F(x)\, dx \\ &= 1 - 2 \int_0^1 x F(x)\, dx \\ 0 &= \int_0^1 x F(x)\, dx; \end{align*}
but we can always decrease $\int_0^1 F(x)\, dx$ and preserve $\int_0^1 x F(x)\, dx$ by decreasing the values of $F$ near $x=0$ and increasing them by a lesser amount near $x = 1$ (construction of an explicit example is left as an exercise to the reader).
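For what it's worth, here is one explicit family completing that exercise, verified with sympy (the polynomial choice is just one convenient option, not necessarily the construction intended in the answer): take $F_t(x) = 4x^2 - 3x + t\,x(1-x)\bigl(\tfrac{3}{5} - x\bigr)$ and $f_t = F_t'$. Then $F_t(0)=0$, $F_t(1)=1$, and $\int_0^1 x F_t(x)\,dx = 0$ for every $t$, while $\int_0^1 x f_t(x)\,dx = \tfrac{7}{6} - \tfrac{t}{60} \to \infty$ as $t \to -\infty$.

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)

F0 = 4*x**2 - 3*x                        # F0(0)=0, F0(1)=1, and int_0^1 x*F0 dx = 0
h  = x*(1 - x)*(sp.Rational(3, 5) - x)   # h(0)=h(1)=0, int_0^1 x*h dx = 0, int_0^1 h dx = 1/60
F  = F0 + t*h
f  = sp.diff(F, x)                       # the sign-changing "density" f_t = F_t'

print(sp.integrate(f, (x, 0, 1)))                    # 1 (normalization holds for every t)
print(sp.integrate(x**2 * f, (x, 0, 1)))             # 1 (second-moment constraint holds for every t)
print(sp.simplify(sp.integrate(x * f, (x, 0, 1))))   # 7/6 - t/60, unbounded as t -> -oo
```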
answered Aug 8 at 18:04 by Connor Harris