Does the Lagrange multiplier method find local optima for a non-convex function?

As said here, the method of Lagrange multipliers is a strategy for finding the local maxima and minima of a function subject to equality constraints.

I'm trying to figure out how this method finds the "local maxima and minima" of a function regardless of whether it is convex or non-convex.

The basic idea of Lagrange multipliers is that, given the contours of the objective function and of the constraint function, at a point where the two contours touch tangentially their gradients are aligned (parallel).

So if the objective function is convex, the solution is a global optimum; but when the objective is non-convex, why is the solution only a "local optimum"? I presume it's because the method can only discover solutions at points where the contours are tangent, not at points where they merely intersect. Is that right?
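For concreteness, here is a minimal sympy sketch with a toy non-convex objective and a linear constraint (an illustrative example, not from the linked article), just solving the system $\nabla f + \lambda \nabla g = 0$, $g = 0$:

```python
# Toy illustration: non-convex objective, affine equality constraint.
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x**3 - 3*x + y**2      # non-convex objective
g = x + y - 1              # equality constraint g(x, y) = 0

L = f + lam * g
# Stationarity in x and y plus feasibility
candidates = sp.solve([sp.diff(L, x), sp.diff(L, y), g], [x, y, lam], dict=True)

for c in candidates:
    print(c, 'f =', f.subs(c))
```

This returns two points, $(1, 0)$ and $(-5/3, 8/3)$; along the constraint line the first is a local minimum and the second a local maximum, and neither is global because the cubic objective is unbounded on that line.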







optimization lagrange-multiplier






asked Sep 4 at 7:39 by loganecolss, edited Sep 4 at 7:42 by Deepesh Meena











  • it can find points that are not locally optimal too
    – LinAlg
    Sep 4 at 12:05










  • @LinAlg, well then I don't understand why the Wikipedia article only says local optima?
    – loganecolss
    Sep 4 at 13:58










  • In calculus, when you set the derivative equal to zero, you find not just local minima but also local maxima and saddle points. The situation is similar when using Lagrange multipliers.
    – littleO
    Sep 7 at 4:10
















2 Answers
The Lagrange multiplier method finds all stationary points of the constrained problem in the interior of the feasible region; that is, all points at which varying the variables while satisfying the constraints leaves the objective function invariant to first order. These can be local minima, local maxima or saddle points, just like in an unconstrained problem. If a global optimum is in the interior of the feasible region, the method finds it, along with all other stationary points in the interior. If the global optimum is on the boundary of the feasible region, it won't be found; thus, to find the global optimum, you need to compare the value of the objective function at the stationary points with the values on the boundary.
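For instance (an illustrative sketch, not taken from the answer above), solving the first-order system for $f(x,y)=xy$ on the unit circle returns four candidates, and only comparing the objective values tells the constrained maxima from the constrained minima:

```python
# Illustrative sketch: f(x, y) = x*y on the unit circle x^2 + y^2 = 1.
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f, g = x * y, x**2 + y**2 - 1
L = f + lam * g

candidates = sp.solve([sp.diff(L, x), sp.diff(L, y), g], [x, y, lam], dict=True)
for c in sorted(candidates, key=lambda c: f.subs(c)):
    print(c, 'f =', f.subs(c))   # two points give f = -1/2, two give f = 1/2
```

The stationarity conditions alone do not distinguish the two kinds of points; second-order information or a direct comparison of $f$ at the candidates is still needed.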







answered Sep 7 at 4:01 by joriki











  • Your first link leads to an explanation of stationary points of unconstrained functions. How do you visualize stationary points of the Lagrangian?
    – LinAlg
    Sep 7 at 14:49











  • The part "If the global optimum is on the boundary of the feasible region, it won't be found" is not correct: under certain regularity conditions (such as requiring that the gradients of the equality constraints are linearly independent at the optimum), the KKT conditions are necessary.
    – LinAlg
    Sep 7 at 14:54










  • @LinAlg: On your first comment: I wrote that in the second half of that very same sentence, after the semicolon. On your second comment: I don't know what you refer to by "KKT", but certainly no first-order conditions are necessary for the global optimum to be attained at the boundary.
    – joriki
    Sep 7 at 19:28










  • KKT is Karush Kuhn Tucker. The Lagrange Multiplier method (or KKT conditions) finds points on the boundary if regularity conditions are met.
    – LinAlg
    Sep 7 at 22:14

















None of the other answers discusses the effects of constraints. They simply discuss what can happen when the derivative of the objective is 0. The question is specifically about Lagrange's method. For solving $\min f(x) : g(x)=0$, Lagrange pointed out that the following condition is necessary for optimality:
$$\nabla f(x) + \lambda \nabla g(x)=0, \quad g(x)=0.$$



The condition is not a sufficient condition, even if the objective is convex. Take for example $\min x^2 : \cos(x) = 0$. Although $x^2$ is convex, any $x=0.5\pi + k\pi$, $k\in\mathbb Z$, satisfies the Lagrange equations (with $\lambda=2x$ or $\lambda=-2x$). Lagrange's condition is sufficient if the objective to be minimized is convex and the equality constraint is affine.
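As a quick sanity check of this example, a minimal sympy sketch (illustrative only):

```python
# Check that every x = pi/2 + k*pi satisfies the Lagrange conditions for
# min x^2 subject to cos(x) = 0, although only x = +/- pi/2 are the minimizers.
import sympy as sp

x, lam = sp.symbols('x lam', real=True)
f, g = x**2, sp.cos(x)
stationarity = sp.diff(f, x) + lam * sp.diff(g, x)   # 2*x - lam*sin(x)

for k in range(-2, 3):
    xk = sp.pi / 2 + k * sp.pi                # feasible: cos(xk) = 0
    lam_k = sp.solve(stationarity.subs(x, xk), lam)[0]
    print(xk, 'lam =', lam_k, 'f =', xk**2)
```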



Problems with more than one constraint (i.e., when $g(x) \in \mathbb R^m$ with $m>1$) are harder, because Lagrange's method no longer offers a necessary condition in general. However, if the gradients of the equality constraints are linearly independent at the optimum, or if the constraints are affine, Lagrange's condition is again necessary (source).
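For instance (an illustrative sketch of the $m>1$ caveat, not from the cited source), take two circle constraints that meet at a single point where their gradients are linearly dependent; the Lagrange system then has no solution even though that point is trivially optimal:

```python
# Sketch: minimize x subject to x^2 + (y-1)^2 = 1 and x^2 + (y+1)^2 = 1.
# The only feasible point is (0, 0); the constraint gradients there are
# (0, -2) and (0, 2), linearly dependent, and the Lagrange system is infeasible.
import sympy as sp

x, y, l1, l2 = sp.symbols('x y l1 l2', real=True)
f = x
g1 = x**2 + (y - 1)**2 - 1
g2 = x**2 + (y + 1)**2 - 1

L = f + l1 * g1 + l2 * g2
system = [sp.diff(L, x), sp.diff(L, y), g1, g2]
print(sp.solve(system, [x, y, l1, l2], dict=True))   # [] -- no stationary point
```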







answered Sep 13 at 21:36 by LinAlg











  • If the objective is non-convex and the equality constraint affine, then I still think that the Lagrange multiplier might not find the global optimum, since $\nabla f(x) + \lambda \nabla g(x) = 0$ only holds for $x$ where the contour of $f(x)$ tangentially touches that of $g(x)$, isn't it?
    – loganecolss
    2 days ago










  • @loganecolss please provide a small counterexample. I can already guarantee that you will not find one, because Lagrange's conditions are necessary if there is only one constraint (affine or not). Shame that the bounty went to an incorrect answer.
    – LinAlg
    2 days ago
















