Lagrange multiplier for sum of products with sum constraint
Is it possible to use Lagrange multipliers (or another technique) to easily find a maximum of a function like
$$
f:
\begin{cases}
\mathbb{R}^3_{\ge 0} &\to \mathbb{R}_{\ge 0}\\
(x_1,x_2,x_3) &\mapsto x_1x_2x_3 + 2x_1x_2 + 3x_1x_3 + x_2
\end{cases}
$$
with the constraint that the sum of the arguments satisfies $\sum_{i=1}^3 x_i = s$ for some $s \in \mathbb{R}_{>0}$. I know that the product $x_1x_2x_3$ is maximized when all variables are equal, but here we have a weighted sum of partial products. (The function is given only as an example; I actually want to find a maximum of a function with more variables but of the same kind, i.e. a weighted sum of partial products where each variable appears with exponent $0$ or $1$.)
optimization lagrange-multiplier
Yes, it is possible to solve this constrained optimization problem using Lagrange multipliers.
– molarmass
Aug 19 at 10:47
@molarmass Can you please indicate how? Setting the differential of the Lagrangian to zero gives a nonlinear problem that I do not see how to solve.
– phinz
Aug 19 at 11:14
asked Aug 19 at 10:29
phinz
1326
1 Answer
According to the Lagrange multiplier technique, the formulation could be as follows.
Given
$$
f(x) = x_1 x_2 x_3 + 2x_1 x_2 + 3x_1 x_3 + x_2\\
g_1(x) = x_1 + x_2 + x_3 - s = 0\\
g_2(x,\epsilon) = x_1 - \epsilon_1^2 = 0\\
g_3(x,\epsilon) = x_2 - \epsilon_2^2 = 0\\
g_4(x,\epsilon) = x_3 - \epsilon_3^2 = 0
$$
Determine the stationary points for
$$
L(x,\lambda,\epsilon) = f(x) + \lambda_1 g_1(x) + \sum_{k=1}^{3} \lambda_{k+1}\, g_{k+1}(x_k,\epsilon_k)
$$
Here $\lambda_k$ are the so-called Lagrange multipliers and $\epsilon_k$ are slack variables that transform the restrictions $x_k \ge 0$ into equivalent equality restrictions.
Now the stationary points are the solutions of $\nabla L = 0$:
$$
\left\{
\begin{array}{l}
\lambda_1 + \lambda_2 + 2x_2 + x_2 x_3 + 3x_3 = 0 \\
\lambda_1 + \lambda_3 + 2x_1 + x_1 x_3 + 1 = 0 \\
\lambda_1 + \lambda_4 + 3x_1 + x_1 x_2 = 0 \\
x_1 + x_2 + x_3 - s = 0 \\
x_1 - \epsilon_1^2 = 0 \\
x_2 - \epsilon_2^2 = 0 \\
x_3 - \epsilon_3^2 = 0 \\
-2\epsilon_1 \lambda_2 = 0 \\
-2\epsilon_2 \lambda_3 = 0 \\
-2\epsilon_3 \lambda_4 = 0
\end{array}
\right.
$$
After that, the solutions should be classified as local minimum, local maximum, or saddle point. This is done with the Hessian of
$$
(f \circ g_1)(x) = f(x_1, x_2, s - x_1 - x_2)
$$
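This classification step can be sketched in Python with SymPy (a hypothetical reconstruction under the assumption $s = 2$, not the answerer's Mathematica run), applying the 2×2 second-derivative test to the stationary points of the reduced two-variable function:

```python
import sympy as sp

# Sketch (assumed s = 2): classify stationary points of the reduced function
# (f o g1)(x1, x2) = f(x1, x2, s - x1 - x2) with the second-derivative test.
x1, x2 = sp.symbols('x1 x2', real=True)
s = sp.Integer(2)
g = x1*x2*(s - x1 - x2) + 2*x1*x2 + 3*x1*(s - x1 - x2) + x2

H = sp.hessian(g, (x1, x2))  # 2x2 Hessian of the reduced objective

classified = []
for sol in sp.solve([sp.diff(g, x1), sp.diff(g, x2)], [x1, x2], dict=True):
    a, b = complex(sol[x1]), complex(sol[x2])
    if abs(a.imag) > 1e-9 or abs(b.imag) > 1e-9:
        continue  # keep only real stationary points
    Hn = H.subs(sol)
    det, tr = complex(Hn.det()).real, complex(Hn.trace()).real
    # 2x2 test: det > 0 and trace < 0 => local max; det > 0, trace > 0 => local min
    kind = ('local max' if det > 0 and tr < 0 else
            'local min' if det > 0 and tr > 0 else 'saddle')
    classified.append((a.real, b.real, kind))
    print(classified[-1])
# Feasibility (x1, x2 >= 0 and x1 + x2 <= s) still has to be checked separately.
```

Note that this only classifies stationary points of the reduced function; points where the inequality restrictions are active must be examined on the corresponding face of the simplex.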
NOTE
For $s = 2$ we have
$$
\left(
\begin{array}{ccccccccccc}
x_1 & x_2 & x_3 & \lambda_1 & \lambda_2 & \lambda_3 & \lambda_4 & \epsilon_1 & \epsilon_2 & \epsilon_3 & f(x)\\
0 & 0 & 2. & 0 & -6. & -1. & 0 & 0 & 0 & -1.41421 & 0. \\
0 & 0 & 2. & 0 & -6. & -1. & 0 & 0 & 0 & 1.41421 & 0. \\
0 & 2. & 0 & -1. & -3. & 0 & 1. & 0 & -1.41421 & 0 & 2. \\
0 & 2. & 0 & -1. & -3. & 0 & 1. & 0 & 1.41421 & 0 & 2. \\
0.75 & 1.25 & 0 & -2.5 & 0 & 0 & -0.6875 & -0.866025 & -1.11803 & 0 & 3.125 \\
0.75 & 1.25 & 0 & -2.5 & 0 & 0 & -0.6875 & -0.866025 & 1.11803 & 0 & 3.125 \\
0.75 & 1.25 & 0 & -2.5 & 0 & 0 & -0.6875 & 0.866025 & -1.11803 & 0 & 3.125 \\
0.75 & 1.25 & 0 & -2.5 & 0 & 0 & -0.6875 & 0.866025 & 1.11803 & 0 & 3.125 \\
1. & 0 & 1. & -3. & 0 & -1. & 0 & -1. & 0 & -1. & 3. \\
1. & 0 & 1. & -3. & 0 & -1. & 0 & -1. & 0 & 1. & 3. \\
1. & 0 & 1. & -3. & 0 & -1. & 0 & 1. & 0 & -1. & 3. \\
1. & 0 & 1. & -3. & 0 & -1. & 0 & 1. & 0 & 1. & 3. \\
2. & 0 & 0 & 0 & 0 & -5. & -6. & -1.41421 & 0 & 0 & 0. \\
2. & 0 & 0 & 0 & 0 & -5. & -6. & 1.41421 & 0 & 0 & 0. \\
\end{array}
\right)
$$
The superabundant solutions are due to the $\epsilon_k^2$ terms.
A solution with $\epsilon_k = 0$ means that the restriction $g_{k+1}(x,\epsilon)$ is active.
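Such stationary systems can also be handled in Python with SymPy instead of Mathematica. The sketch below (my own construction, assuming the example value $s = 2$) enumerates candidate maximizers of the reduced two-variable problem: interior stationary points, stationary points on each face of the simplex, and the vertices:

```python
import sympy as sp

# Sketch (assumed s = 2; names are mine, not from the answer's Mathematica run):
# enumerate candidate maximizers of the reduced objective
# (f o g1)(x1, x2) = f(x1, x2, s - x1 - x2) over the simplex x_i >= 0.
x1, x2 = sp.symbols('x1 x2', real=True)
s = sp.Integer(2)
x3 = s - x1 - x2                        # equality constraint eliminated
g = x1*x2*x3 + 2*x1*x2 + 3*x1*x3 + x2

candidates = []

# 1) interior stationary points: both partial derivatives vanish
candidates += sp.solve([sp.diff(g, x1), sp.diff(g, x2)], [x1, x2], dict=True)

# 2) stationary points on each face of the simplex (one coordinate pinned to 0)
for face in ({x1: sp.S(0)}, {x2: sp.S(0)}, {x2: s - x1}):   # last face: x3 = 0
    free = ({x1, x2} - set(face)).pop()
    for r in sp.solve(sp.diff(g.subs(face), free), free):
        sol = {free: r}
        sol.update({v: e.subs(sol) for v, e in face.items()})
        candidates.append(sol)

# 3) vertices of the simplex
candidates += [{x1: s, x2: sp.S(0)}, {x1: sp.S(0), x2: s},
               {x1: sp.S(0), x2: sp.S(0)}]

# keep real, nonnegative candidates and rank them by objective value
feasible = []
for c in candidates:
    coords = [complex(v.subs(c)) for v in (x1, x2, x3)]
    if all(abs(z.imag) < 1e-9 and z.real >= -1e-9 for z in coords):
        feasible.append(c)

for c in sorted(feasible, key=lambda c: complex(g.subs(c)).real):
    print([round(complex(v.subs(c)).real, 6) for v in (x1, x2, x3)],
          '-> f =', round(complex(g.subs(c)).real, 6))
```

The same pattern scales to more variables, though the number of faces (and hence of subproblems) grows combinatorially with the dimension.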
Thank you very much! But can you also explain how one determines these solutions (since one cannot use linear algebra)? Is there software (e.g. NumPy) that can determine these solutions, since my setting is bigger?
– phinz
Aug 19 at 14:13
Is it correct in the definitions of $g_2,\dots,g_4$ without a square on the $\epsilon$ variables?
– phinz
Aug 19 at 14:15
@phinz I use MATHEMATICA, which handles those equations well. You can also formulate the problem without the slack variables $\epsilon_k$; I did it this way to show one of the variants for handling inequality restrictions. The slack variables are convenient in more complicated situations. MATHEMATICA is especially useful in these simple problems.
– Cesareo
Aug 19 at 15:48
@phinz Please verify the corrections introduced in the definitions of $g_2,\dots,g_4$. The correct formulation is with $\epsilon_k^2$, as it is now.
– Cesareo
Aug 19 at 15:52
edited Aug 19 at 15:49
answered Aug 19 at 11:55
Cesareo