Does the Lagrange multiplier method find only local optima for non-convex functions?
As stated here, the Lagrange multiplier method is a strategy for finding the local maxima and minima of a function subject to equality constraints.
I'm trying to figure out how this method finds the "local maxima and minima" of a function, regardless of whether it is convex or non-convex.
The basic idea of the Lagrange multiplier method is that, given the contours of the objective function and the constraint function, at a point where the two contours touch tangentially, their gradients are aligned.
So if the objective function is convex, the solution is the global optimum; but when the objective is non-convex, why is the solution only a "local optimum"? I presume it's because the method can only discover solutions at the tangent points, not at the points where the two contours merely intersect. Is that right?
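For concreteness, take for example the non-convex function $f(x,y)=xy$ maximized subject to $g(x,y)=x+y-1=0$ (an illustrative example, not from the linked article). Setting $\nabla f = \lambda\,\nabla g$ gives
$$(y,\,x) = \lambda\,(1,\,1)\quad\Rightarrow\quad x=y=\lambda,$$
and with $x+y=1$ this yields $x=y=\lambda=\tfrac12$ and $f=\tfrac14$: the point where a contour of $f$ touches the constraint line tangentially.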
optimization lagrange-multiplier
asked Sep 4 at 7:39 by loganecolss, edited Sep 4 at 7:42 by Deepesh Meena
It can find points that are not locally optimal too.
– LinAlg
Sep 4 at 12:05
@LinAlg, well then I don't understand why the Wikipedia article only mentions local optima?
– loganecolss
Sep 4 at 13:58
In calculus, when you set the derivative equal to zero, you find not just local minima but also local maxima and saddle points. The situation is similar when using Lagrange multipliers.
– littleO
Sep 7 at 4:10
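For a small unconstrained instance of that calculus fact: $f(x) = x^3 - 3x$ has $f'(x) = 3x^2 - 3 = 0$ at $x = \pm 1$, where $x = 1$ is a local minimum and $x = -1$ a local maximum; and $f(x) = x^3$ has $f'(0) = 0$ even though $x = 0$ is neither (an inflection point). The Lagrange system behaves analogously for constrained problems.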
2 Answers
Answer by joriki (answered Sep 7 at 4:01, score 2):
The Lagrange multiplier method finds all stationary points of the constrained problem in the interior of the feasible region; that is, all points at which varying the variables while satisfying the constraints leaves the objective function invariant to first order. These can be local minima, local maxima or saddle points, just like in an unconstrained problem. If a global optimum is in the interior of the feasible region, the method finds it, along with all other stationary points in the interior. If the global optimum is on the boundary of the feasible region, it won't be found; thus, to find the global optimum, you need to compare the value of the objective function at the stationary points with the values on the boundary.
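To see this concretely, here is a small SymPy sketch (an illustration added here, assuming SymPy is available; not part of the original answer). It solves the stationarity-plus-feasibility system for the non-convex objective $f(x,y)=x^2-y^2$ on the unit circle and recovers four stationary points: two constrained minima and two constrained maxima, so the multiplier equations return all stationary points rather than singling out minima.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)

f = x**2 - y**2        # non-convex (saddle-shaped) objective
g = x**2 + y**2 - 1    # equality constraint: the unit circle g = 0

L = f + lam * g        # Lagrangian

# Stationarity in (x, y) together with feasibility g = 0
equations = [sp.diff(L, x), sp.diff(L, y), g]

for sol in sp.solve(equations, [x, y, lam], dict=True):
    print(sol, " f =", f.subs(sol))

# Output: (x, y) = (0, ±1) with f = -1 (constrained minima)
#         and (x, y) = (±1, 0) with f = 1 (constrained maxima).
```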
Your first link leads to an explanation of stationary points of unconstrained functions. How do you visualize stationary points of the Lagrangian?
– LinAlg
Sep 7 at 14:49
The part "If the global optimum is on the boundary of the feasible region, it won't be found" is not correct: under certain regularity conditions (such as requiring that the gradients of the equality constraints are linearly independent at the optimum), the KKT conditions are necessary.
– LinAlg
Sep 7 at 14:54
@LinAlg: On your first comment: I wrote that in the second half of that very same sentence, after the semicolon. On your second comment: I don't know what you refer to by "KKT", but certainly no first-order conditions are necessary for the global optimum to be attained at the boundary.
– joriki
Sep 7 at 19:28
KKT is Karush–Kuhn–Tucker. The Lagrange multiplier method (or the KKT conditions) finds points on the boundary if regularity conditions are met.
– LinAlg
Sep 7 at 22:14
Answer by LinAlg (answered Sep 13 at 21:36, score 0):
None of the other answers discusses the effect of the constraints; they simply discuss what can happen when the derivative of the objective is $0$. The question, however, is specifically about Lagrange's method. For solving $\min\{f(x) : g(x)=0\}$, Lagrange pointed out that the following condition is necessary for optimality:
$$\nabla f(x) + \lambda \nabla g(x)=0, \quad g(x)=0.$$
The condition is not a sufficient condition, even if the objective is convex. Take for example $\min\{x^2 : \cos(x) = 0\}$. Although $x^2$ is convex, every $x=\tfrac{\pi}{2} + k\pi$, $k\in\mathbb Z$, satisfies the Lagrange equations (with $\lambda=2x$ or $\lambda=-2x$). Lagrange's condition is sufficient if the objective to be minimized is convex and the equality constraint is affine.
Problems with more than one constraint (i.e., when $g(x) \in \mathbb R^m$ with $m>1$) are harder because Lagrange's method no longer offers a necessary condition. However, if the gradients of the equality constraints are linearly independent at the optimum, or if the constraints are affine, Lagrange's condition is again necessary (source).
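As a quick symbolic check of that $\min\{x^2 : \cos(x)=0\}$ example (again a sketch added here, assuming SymPy is available), one can verify that every feasible point $x=\tfrac{\pi}{2}+k\pi$ admits a multiplier $\lambda$ satisfying the stationarity equation, even though only $x=\pm\tfrac{\pi}{2}$ are globally optimal:

```python
import sympy as sp

x, lam = sp.symbols('x lam', real=True)

f = x**2        # convex objective
g = sp.cos(x)   # equality constraint cos(x) = 0

# Lagrange stationarity: f'(x) + lam * g'(x) = 0
stationarity = sp.diff(f, x) + lam * sp.diff(g, x)

for k in range(-2, 3):
    xk = sp.pi / 2 + k * sp.pi          # a feasible point: cos(xk) = 0
    lam_k = sp.solve(stationarity.subs(x, xk), lam)
    print("x =", xk, " g(x) =", g.subs(x, xk), " lambda =", lam_k, " f(x) =", xk**2)

# Every such xk satisfies the Lagrange equations for some lambda,
# but only x = ±pi/2 attain the constrained minimum f = pi**2/4.
```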
If the objective is non-convex and the equality constraint is affine, then I still think the Lagrange multiplier method might not find the global optimum, since $\nabla f(x) + \lambda \nabla g(x) = 0$ only holds at points $x$ where a contour of $f$ touches $g(x)=0$ tangentially, doesn't it?
– loganecolss
2 days ago
@loganecolss please provide a small counterexample. I can already guarantee that you will not find one, because Lagrange's conditions are necessary if there is only one constraint (affine or not). Shame that the bounty went to an incorrect answer.
– LinAlg
2 days ago