How to minimize $\frac{1}{y_1} + \frac{1}{y_2} + \cdots + \frac{1}{y_k}$ subject to $y_1+y_2+\cdots+y_k = a$?
We seek how to minimize
$$
\frac{1}{1-x_1} + \frac{1}{1-x_2} + \cdots + \frac{1}{1-x_k}
$$
for $x_1,\ldots,x_k \in (0,1)$ subject to
$$
x_1+x_2+\cdots+x_k = c
$$
for some constant $c \in (0,k)$. We expect this minimum occurs when $x_1=x_2=\cdots=x_k=\frac{c}{k}$.
A first step seems to be to transform it to the problem of minimizing
$$
\frac{1}{y_1} + \frac{1}{y_2} + \cdots + \frac{1}{y_k}
$$
where $y_1,\ldots,y_k \in (0,1)$ subject to
$$
y_1+y_2+\cdots+y_k = a,
$$
where $a=k-c$. Here, we expect this minimum occurs when $y_1=y_2=\cdots=y_k=\frac{a}{k}$.
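Explicitly, this is the change of variables $y_i = 1-x_i$: each $\frac{1}{1-x_i}$ becomes $\frac{1}{y_i}$, each $y_i$ again lies in $(0,1)$, and
$$
y_1+y_2+\cdots+y_k = k-(x_1+x_2+\cdots+x_k) = k-c = a,
$$
so the two problems are equivalent.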
We can prove this when $k=2$, since
$$
\frac{1}{y_1}+\frac{1}{y_2}=\frac{y_1+y_2}{y_1y_2}=\frac{y_1+(a-y_1)}{y_1(a-y_1)}=\frac{a}{y_1(a-y_1)},
$$
and maximizing the denominator (its derivative $a-2y_1$ vanishes at $y_1=a/2$) shows the minimum occurs at $y_1=y_2=a/2$. Generalizing this to $3$ or more terms does not seem straightforward. It seems possible there's a nice method for proving this that I'm unaware of.
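As a quick numerical sanity check (an editorial addition, not a proof, with $k$ and $a$ chosen arbitrarily), the short script below compares the conjectured equal split $y_i=a/k$ against many random feasible points; no sampled point should do better.

```python
# Hypothetical sanity check: for k = 3 and a = 1.2, compare the sum of
# reciprocals at the equal split y_i = a/k against random feasible points
# with y_1 + ... + y_k = a and each y_i in (0, 1).
import random

k, a = 3, 1.2

def f(y):
    """Sum of reciprocals, the quantity we want to minimize."""
    return sum(1.0 / yi for yi in y)

equal_value = f([a / k] * k)  # value at y_1 = ... = y_k = a/k

better = 0
for _ in range(100_000):
    # Sample a random point with the right sum by rescaling positive draws;
    # skip samples that land outside (0, 1) in some coordinate.
    raw = [random.random() for _ in range(k)]
    scale = a / sum(raw)
    y = [r * scale for r in raw]
    if any(not (0.0 < yi < 1.0) for yi in y):
        continue
    if f(y) < equal_value:
        better += 1

print(f"value at equal split: {equal_value:.6f}")
print(f"random feasible points that beat it: {better}")  # expected: 0
```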
Other questions on similar topics do not seem to solve this problem: e.g., the question "minimizing the sum of reciprocals is equivalent to maximizing the sum" doesn't have the constraint that $y_1+\cdots+y_k$ is fixed, and the question "Minimizing Sum of Reciprocals" has an additional constraint.
Tags: optimization
asked Aug 31 at 3:06 by Rebecca J. Stones
Comment from Batominovski (Aug 31 at 3:10): The AM-HM Inequality should solve this problem very quickly. You can also use AM-GM, Cauchy-Schwarz, etc. to solve this problem.
4 Answers
Note that by the AM-HM inequality,
$$\frac{\frac{1}{1-x_1}+\frac{1}{1-x_2}+\cdots+\frac{1}{1-x_k}}{k} \ge \frac{k}{(1-x_1)+(1-x_2)+\cdots+(1-x_k)} = \frac{k}{k-c},$$
so the sum is at least $\frac{k^2}{k-c}$. The minimum is achieved when $x_1=x_2=\cdots=x_k=\frac{c}{k}$.
(answered Aug 31 at 3:13 by Math Lover)
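For reference, the AM-HM inequality used here says that for positive reals $a_1,\ldots,a_k$,
$$\frac{a_1+a_2+\cdots+a_k}{k} \ge \frac{k}{\frac{1}{a_1}+\frac{1}{a_2}+\cdots+\frac{1}{a_k}};$$
it is applied above with $a_i=\frac{1}{1-x_i}$, so the harmonic mean on the right becomes $\frac{k}{(1-x_1)+\cdots+(1-x_k)}$.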
It's just the Cauchy-Schwarz inequality. Writing
$$\sum_{i=1}^k x_i = c \Rightarrow \sum_{i=1}^k (1-x_i) = k-c,$$
the Cauchy-Schwarz inequality gives
$$\left(\sum_{i=1}^k (1-x_i)\right)\left(\sum_{i=1}^k \frac{1}{1-x_i}\right) \ge k^2 \Rightarrow \sum_{i=1}^k \frac{1}{1-x_i} \ge \frac{k^2}{k-c}.$$
(answered Aug 31 at 3:15 by Manthanein)
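One way to see the middle step: apply Cauchy-Schwarz to the vectors with components $u_i=\sqrt{1-x_i}$ and $v_i=\frac{1}{\sqrt{1-x_i}}$, for which $u_iv_i=1$, so
$$k^2=\left(\sum_{i=1}^k u_iv_i\right)^2 \le \left(\sum_{i=1}^k u_i^2\right)\left(\sum_{i=1}^k v_i^2\right)=\left(\sum_{i=1}^k (1-x_i)\right)\left(\sum_{i=1}^k \frac{1}{1-x_i}\right).$$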
Two answers with specific inequalities have already been given. If you want to prove it from first principles, you can use a Lagrange multiplier:
$$
\frac{\partial}{\partial y_k}\left(\sum_i y_i^{-1}+\lambda\sum_i y_i\right)=-y_k^{-2}+\lambda=0
$$
and thus
$$
y_k=\lambda^{-\frac{1}{2}}\,,
$$
a constant. Then you just have to compare with values on the boundary, where any number of the $y_i$ could be $1$ and the rest again equal inside $(0,1)$; you can prove as in the two-variable case that this leads to a higher value of the objective function than if all of them are equal.
(answered Aug 31 at 3:19, edited Aug 31 at 3:24, by joriki)
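To pin down the multiplier from the constraint: since every $y_i=\lambda^{-\frac{1}{2}}$, summing gives $k\lambda^{-\frac{1}{2}}=a$, so $\lambda=\left(\frac{k}{a}\right)^2$ and the interior critical point is $y_1=\cdots=y_k=\frac{a}{k}$, as conjectured.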
Let $f(y_1,\ldots,y_k)=\frac{1}{y_1} + \cdots + \frac{1}{y_k}$ be the function to minimize, and let $g(y_1,\ldots,y_k)=y_1+\cdots+y_k - a$. Clearly, a minimum exists (hint: look at the behavior of $f$ on its domain).
You're aiming to minimize $f$ on $(0,1)^k$ under the linear constraint that $g(y_1,\ldots,y_k)=0$.
This is what the Lagrange multiplier technique is for. I recommend you read about it; it's pretty cool (there is a geometric interpretation of the technique, which is great for building intuition)!
In this case, it states that the minimum, if it exists, can only be at points where the gradients of $f$ and $g$ are collinear. In other words, a necessary condition for the minimum to be achieved at $y=(y_1,\ldots,y_k)$ is that there exists $\lambda \in \mathbb{R}$ such that, for all $i$,
$$\frac{\partial f}{\partial y_i}(y) = \lambda \frac{\partial g}{\partial y_i}(y).$$
This means that for all $i$, $$-\frac{1}{y_i^2} = \lambda,$$ so such a $\lambda$ must be negative, and
$$y_1=y_2=\cdots=y_k=\frac{1}{\sqrt{-\lambda}}$$
(the positive root, since each $y_i>0$); the constraint $y_1+\cdots+y_k=a$ then forces $y_i=\frac{a}{k}$ for every $i$.
(answered Aug 31 at 3:29 by Stefan Lafon)