Can a numerical optimization algorithm get stuck into local maxima?

I've designed a cost function of the form

$$
c(x) = \sum_j f_j(x) + \lambda \sum_j g_j(x)
$$

which I'm trying to minimize (more specifically, it's a nonlinear least-squares problem). When I run my algorithm (provided by a Matlab toolbox), the final result essentially seems to get stuck at a "local maximum" when $\lambda$ is high.

More specifically, my setup used a relatively high $\lambda$, and some configurations of $x$ maximize some of the $f_j$.

I know that, in general, algorithms can get stuck in local minima when minimizing, but I'm surprised that a minimization can get stuck at a local maximum.

Again, is this something that can happen?

  • It's not necessarily a red flag if the result maximizes some $f_j$. Is the computed result a local maximum of the actual objective $c$?
    – Rahul
    Sep 5 at 11:49
  • Yep, it is. But it's still an odd situation, and I was wondering whether this is something that can happen.
    – user8469759
    Sep 5 at 11:52
  • @user8469759 If you have a particularly flat local maximum, then depending on the convergence conditions that are defined (i.e., the change between steps), you could. But I agree that I have not really experienced this. How are you testing? Have you plotted the function you want to minimise?
    – Chinny84
    Sep 5 at 12:00
  • It's an algorithm to process a triangular mesh. I'm plotting the result, and I've noticed some configurations are the complete opposite of what I expect. For each triangle I have an $f_j$, and the final configuration maximizes that $f_j$: I have literally computed the partial derivative, and it is $0$ for those triangles. Overall it is a local minimum, but for those $f_j$ it is a local maximum. The algorithm used is a quasi-Newton scheme, though I think I'll switch to Levenberg-Marquardt later.
    – user8469759
    Sep 5 at 12:13
  • If the solution is a local minimum of the overall objective, that is definitely something that can happen. For example, let $f(x)=x^4-x^2$ and $g(x)=10x^2$.
    – Rahul
    Sep 6 at 5:01
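
Rahul's example is easy to verify numerically: with $c(x)=f(x)+g(x)=x^4+9x^2$, the minimizer of $c$ is $x=0$, which is precisely a local maximum of the component $f$ (since $f''(0)=-2<0$). A minimal sketch of this check, using SciPy's `minimize_scalar` rather than the Matlab toolbox from the question:

```python
from scipy.optimize import minimize_scalar

# Rahul's example: f has a strict local maximum at x = 0,
# yet c = f + g attains its global minimum there.
def f(x):
    return x**4 - x**2

def g(x):
    return 10 * x**2

def c(x):
    return f(x) + g(x)  # = x**4 + 9*x**2, convex, minimized at x = 0

res = minimize_scalar(c)  # Brent's method converges to x ~ 0
x_star = res.x

# Central finite difference for f'' at the minimizer of c:
# a negative value confirms x_star sits on a local maximum of f.
h = 1e-4
f_second = (f(x_star + h) - 2 * f(x_star) + f(x_star - h)) / h**2
print(x_star, f_second)  # x_star near 0, f'' near -2
```

So the minimizer of the full objective can legitimately be a local maximizer of individual terms; the penalty term simply dominates there.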
numerical-methods numerical-optimization
edited Sep 5 at 11:06
asked Sep 5 at 10:55
user8469759
