Can a numerical optimization algorithm get stuck into local maxima?
I've designed a cost function of the form
$$
c(x) = \sum_j f_j(x) + \lambda \sum_j g_j(x)
$$
which I'm trying to minimize (more specifically, it's a nonlinear least-squares problem). When I run my algorithm (which is provided by a Matlab toolbox), the final result essentially seems to get stuck in some "local maxima" (when $\lambda$ is high).
More specifically, my setup uses a relatively high $\lambda$, and some configurations of $x$ maximize some of the $f_j$.
I know that, in general, algorithms can get stuck in local minima when minimizing, but I'm surprised that a minimization problem can get stuck in a local maximum.
Again, is this something that can happen?
numerical-methods numerical-optimization
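Since the objective is a nonlinear least-squares problem, one common way to handle the penalty weight is to fold $\lambda$ into the residual vector. Below is a minimal sketch (not the asker's actual mesh problem): assuming each $f_j = r_j(x)^2$ and each $g_j = s_j(x)^2$, the residuals are stacked with $\sqrt{\lambda}$ scaling the penalty block, and the whole thing is handed to a Levenberg-Marquardt solver. The residual functions and the value of $\lambda$ here are made-up placeholders.

```python
# Sketch: minimize c(x) = sum_j f_j(x) + lam * sum_j g_j(x) as
# nonlinear least squares, with f_j = r_j(x)^2 and g_j = s_j(x)^2.
import numpy as np
from scipy.optimize import least_squares

lam = 10.0  # assumed penalty weight

def residuals(x):
    # data-fitting residuals r_j (placeholder functions)
    r = np.array([x[0]**2 + x[1] - 1.0,
                  x[0] - x[1]**2])
    # penalty residuals s_j (placeholder functions)
    s = np.array([x[0] - 0.5, x[1] - 0.5])
    # sqrt(lam) so that sum of squares reproduces f + lam*g
    return np.concatenate([r, np.sqrt(lam) * s])

# Levenberg-Marquardt, the scheme the asker plans to switch to
sol = least_squares(residuals, x0=[2.0, 2.0], method='lm')
print(sol.x, sol.cost)
```

The point of the $\sqrt{\lambda}$ factor is that `least_squares` minimizes $\tfrac{1}{2}\sum_i \rho_i(x)^2$ over the stacked residual vector $\rho$, which then equals $\tfrac{1}{2}\bigl(\sum_j r_j^2 + \lambda \sum_j s_j^2\bigr)$.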
It's not necessarily a red flag if the result maximizes some $f_j$. Is the computed result a local maximum of the actual objective $c$?
– Rahul
Sep 5 at 11:49
Yep, it is. But it's still an odd situation, and I was wondering whether this is something that can happen.
– user8469759
Sep 5 at 11:52
@user8469759 If you have a particularly flat local maximum, then depending on the convergence conditions that are defined (i.e. the change between steps), you could. But I agree that I have not really experienced this. How are you testing? Have you plotted the function you want to minimise?
– Chinny84
Sep 5 at 12:00
It's an algorithm that processes a triangular mesh. I'm plotting the result, and I've noticed some configurations are the exact opposite of what I expect. For each triangle I have an $f_j$, and that configuration maximizes the $f_j$: literally, I have computed the partial derivative and it is $0$ for those triangles. I mean, overall it's a local minimum, but for those $f_j$ it's a local maximum. The algorithm used is a quasi-Newton scheme, though I think I'll switch to Levenberg-Marquardt later.
– user8469759
Sep 5 at 12:13
If the solution is a local minimum of the overall objective, that is definitely something that can happen. For example, let $f(x)=x^4-x^2$ and $g(x)=10x^2$.
– Rahul
Sep 6 at 5:01
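Rahul's example can be checked numerically: with $f(x)=x^4-x^2$ and $g(x)=10x^2$, the point $x=0$ is a strict local maximum of $f$ (since $f'(0)=0$ and $f''(0)=-2<0$), yet it minimizes $c(x)=f(x)+\lambda g(x)$ once the penalty dominates. A minimal sketch, assuming $\lambda=1$ and an arbitrary step size:

```python
# Verify: x = 0 is a local maximum of f(x) = x^4 - x^2, but gradient
# descent on c(x) = f(x) + lam * g(x), g(x) = 10*x^2, converges to it.

def f(x):
    return x**4 - x**2

def g(x):
    return 10 * x**2

def dc(x, lam=1.0):
    # c'(x) = 4x^3 - 2x + 20*lam*x
    return 4 * x**3 - 2 * x + 20 * lam * x

# plain gradient descent on c, starting away from 0
x = 0.5
for _ in range(2000):
    x -= 0.01 * dc(x)

print(round(x, 6))    # prints 0.0: the iterate lands on f's local maximum
print(f(0) > f(0.1))  # True: f really does have a local maximum at 0
```

This is exactly the situation in the question: the minimizer of the overall objective can sit at a stationary point that happens to be a maximum of some individual term $f_j$.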
edited Sep 5 at 11:06
asked Sep 5 at 10:55
user8469759