How to minimize $\frac{1}{y_1} + \frac{1}{y_2} + \cdots + \frac{1}{y_k}$ subject to $y_1+y_2+\cdots+y_k = a$?

We seek how to minimize
$$
\frac{1}{1-x_1} + \frac{1}{1-x_2} + \cdots + \frac{1}{1-x_k}
$$
for $x_1,\ldots,x_k \in (0,1)$ subject to
$$
x_1+x_2+\cdots+x_k = c
$$
for some constant $c \in (0,k)$. We expect this minimum occurs when $x_1=x_2=\cdots=x_k=\frac{c}{k}$.



A first step seems to be to transform it, via $y_i = 1-x_i$, into the problem of minimizing
$$
\frac{1}{y_1} + \frac{1}{y_2} + \cdots + \frac{1}{y_k}
$$
where $y_1,\ldots,y_k \in (0,1)$ subject to
$$
y_1+y_2+\cdots+y_k = a,
$$
where $a=k-c$. Here, we expect this minimum occurs when $y_1=y_2=\cdots=y_k=\frac{a}{k}$.



We can prove this when $k=2$, since
$$
\frac{1}{y_1}+\frac{1}{y_2}=\frac{y_1+y_2}{y_1y_2}=\frac{y_1+(a-y_1)}{y_1(a-y_1)}=\frac{a}{y_1(a-y_1)},
$$
and setting the derivative of the denominator to zero maximizes $y_1(a-y_1)$, and hence minimizes the fraction, at $y_1=y_2=a/2$. Generalizing this to $3$ or more terms does not seem straightforward. It seems possible there's a nice method for proving this that I'm unaware of.
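(As a sanity check, not a proof, the conjecture is easy to test numerically. Below is a minimal sketch assuming SciPy is available; the values $k=4$, $a=1.5$, and the starting point are arbitrary illustrative choices.)

```python
# Numerically minimize sum(1/y_i) subject to sum(y_i) = a, y_i in (0, 1).
# A sketch assuming SciPy; k, a, and the starting point are arbitrary.
import numpy as np
from scipy.optimize import minimize

k, a = 4, 1.5  # a = k - c with c in (0, k), so a in (0, k)

def objective(y):
    return np.sum(1.0 / y)

constraint = {"type": "eq", "fun": lambda y: np.sum(y) - a}
bounds = [(1e-9, 1.0)] * k

# Start from an unequal feasible point on the constraint hyperplane.
y0 = np.linspace(0.2, 0.55, k)
y0 *= a / y0.sum()

result = minimize(objective, y0, bounds=bounds, constraints=[constraint])
print(result.x)              # all entries close to a/k = 0.375
print(result.fun, k**2 / a)  # both close to k^2/a
```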



Other questions on similar topics do not seem to solve this problem: e.g., "Minimizing the sum of reciprocals is equivalent to maximizing the sum" doesn't have the constraint that $y_1+\cdots+y_k$ is fixed, and "Minimizing Sum of Reciprocals" has an additional constraint.

optimization

asked Aug 31 at 3:06
Rebecca J. Stones
  • The AM-HM Inequality should solve this problem very quickly. You can also use AM-GM, Cauchy-Schwarz, etc., to solve this problem.
    – Batominovski
    Aug 31 at 3:10

4 Answers

Note that by the AM-HM inequality,
$$\frac{\frac{1}{1-x_1}+\frac{1}{1-x_2}+\cdots+\frac{1}{1-x_k}}{k} \ge \frac{k}{(1-x_1)+(1-x_2)+\cdots+(1-x_k)} = \frac{k}{k-c}.$$
The minimum is achieved when $x_1=x_2=\cdots=x_k=\frac{c}{k}$.
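To make the final bound explicit: multiplying the AM-HM bound through by $k$ bounds the sum itself, and the equality case of AM-HM (all terms equal) identifies the minimizer:
$$
\sum_{i=1}^{k}\frac{1}{1-x_i} \ge \frac{k^2}{k-c},
\qquad\text{with equality iff } 1-x_1=\cdots=1-x_k, \text{ i.e. } x_i=\frac{c}{k}.
$$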
answered Aug 31 at 3:13
Math Lover

It's just the Cauchy-Schwarz inequality. First, rewrite the constraint:

$$\sum_{i=1}^k x_i = c \Rightarrow \sum_{i=1}^k (1-x_i) = k-c.$$

Hence by the Cauchy-Schwarz inequality,
$$\left(\sum_{i=1}^k (1-x_i)\right)\left(\sum_{i=1}^k \frac{1}{1-x_i}\right) \ge k^2 \Rightarrow \sum_{i=1}^k \frac{1}{1-x_i} \ge \frac{k^2}{k-c}.$$
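For reference, the key step is Cauchy-Schwarz applied to the vectors with entries $u_i=\sqrt{1-x_i}$ and $v_i=\frac{1}{\sqrt{1-x_i}}$ (well-defined since each $x_i\in(0,1)$):
$$
k^2=\left(\sum_{i=1}^k u_i v_i\right)^2 \le \left(\sum_{i=1}^k u_i^2\right)\left(\sum_{i=1}^k v_i^2\right),
$$
with equality iff $u$ and $v$ are proportional, i.e. the $1-x_i$ are all equal, giving $x_i=\frac{c}{k}$.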
answered Aug 31 at 3:15
Manthanein

Two answers with specific inequalities have already been given. If you want to prove it from first principles, you can use a Lagrange multiplier:

$$
\frac{\partial}{\partial y_k}\left(\sum_i y_i^{-1}+\lambda\sum_i y_i\right)=-y_k^{-2}+\lambda=0
$$

and thus

$$
y_k=\lambda^{-\frac{1}{2}}\;,
$$

a constant. Then you just have to compare with values on the boundary, where any number of the $y_i$ could be $1$ and the rest again equal inside $(0,1)$; you can prove as in the two-variable case that this leads to a higher value of the objective function than if all of them are equal.
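Substituting this back into the constraint pins down the interior stationary point: each $y_i$ equals $\lambda^{-\frac{1}{2}}$ and $\sum_i y_i = a$, so
$$
k\,\lambda^{-\frac{1}{2}} = a \;\Longrightarrow\; y_1=\cdots=y_k=\frac{a}{k},
$$
which is exactly the conjectured minimizer, with objective value $\frac{k^2}{a}$.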
edited Aug 31 at 3:24
answered Aug 31 at 3:19
joriki

Let $f(y_1,\ldots,y_k)=\frac{1}{y_1} + \cdots + \frac{1}{y_k}$ be the function to minimize, and $g(y_1,\ldots,y_k)=y_1+\cdots+y_k-a$. Clearly, a minimum exists (hint: look at the behavior of $f$ on its domain).

You're aiming to minimize $f$ on $(0,1)^k$ under the linear constraint that $g(y_1,\ldots,y_k)=0$.

This is what the Lagrange multiplier technique is for. I recommend you read about it; it's pretty cool (there is a geometric interpretation of the technique that is great for building intuition)!
In this case, it states that the minimum, if it exists, can only be at points where the gradients of $f$ and $g$ are collinear. In other words, a necessary condition for the minimum to be achieved at $y=(y_1,\ldots,y_k)$ is that there exists $\lambda \in \mathbb{R}$ such that, for all $i$, $$\frac{\partial f}{\partial y_i}(y) = \lambda \frac{\partial g}{\partial y_i}(y).$$
That means that such a $\lambda$ must be negative, and that for all $i$, $$-\frac{1}{y_i^2} = \lambda,$$
so (taking the positive root, since each $y_i>0$) $y_1=y_2=\cdots=y_k=\frac{1}{\sqrt{-\lambda}}$.
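For concreteness, here is a symbolic sketch of this Lagrange system for the small fixed case $k=3$, assuming SymPy is available (the symbol names are illustrative choices):

```python
# Solve the Lagrange system grad f = lam * grad g together with g = 0,
# for k = 3. A sketch assuming SymPy; symbol names are illustrative.
import sympy as sp

a = sp.symbols('a', positive=True)
y = sp.symbols('y1 y2 y3', positive=True)
lam = sp.symbols('lam')

f = sum(1 / yi for yi in y)  # objective: sum of reciprocals
g = sum(y) - a               # constraint: y1 + y2 + y3 = a

eqs = [sp.diff(f, yi) - lam * sp.diff(g, yi) for yi in y] + [g]
sol = sp.solve(eqs, list(y) + [lam], dict=True)
print(sol)  # expected: [{y1: a/3, y2: a/3, y3: a/3, lam: -9/a**2}]
```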
answered Aug 31 at 3:29
Stefan Lafon