Minimum Mahalanobis Distance on a Linear Regression Problem.

I am reading the paper "A Morphable Model For The Synthesis Of 3D Faces".



I have a question about Eq. (3).



After solving a linear regression problem with data $D = \{(S_i, T_i)\}$ and target values $\mu_i$, the paper computes the following vectors in Eq. (3):



$$
\Delta S = \sum_{i=1}^{m} \mu_i (S_i - \bar{S}), \qquad \Delta T = \sum_{i=1}^{m} \mu_i (T_i - \bar{T})
$$



In this paper, Eq. (3) defines the direction with minimum Mahalanobis distance that achieves a specific change $\Delta\mu$.



The author states that "It can be shown that Equation (3) defines the direction with minimal variance-normalized length $\lVert \Delta S \rVert^2_M = \left\langle \Delta S, C_S^{-1} \Delta S \right\rangle$, $\lVert \Delta T \rVert^2_M = \left\langle \Delta T, C_T^{-1} \Delta T \right\rangle$", but the proof is not given.
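For reference, this variance-normalized length is an ordinary squared Mahalanobis norm. A minimal NumPy sketch (the function and variable names are mine):

```python
import numpy as np

def mahalanobis_sq(dS, C_S):
    """Squared Mahalanobis length <dS, C_S^{-1} dS> of a vector dS
    under a covariance matrix C_S (uses solve instead of an explicit inverse)."""
    return dS @ np.linalg.solve(C_S, dS)
```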



To verify this numerically, I solved the following optimization problem on a regression problem with some two-dimensional data.



First of all, I solved a regression problem on the following data.
The first column is $\mu$; the second and third are $S$ and $T$, respectively.



$$
\mathcal{D}=\begin{bmatrix}
132 & 52 & 173 \\
143 & 59 & 184 \\
153 & 67 & 194 \\
162 & 73 & 211 \\
154 & 64 & 196 \\
168 & 74 & 220 \\
137 & 54 & 188 \\
149 & 61 & 188 \\
159 & 65 & 207 \\
128 & 46 & 167 \\
166 & 72 & 217
\end{bmatrix}
$$
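A minimal NumPy sketch of this step (variable names mine): it evaluates Eq. (3) on the data above and fits the regression $\mu \approx w_0 + w_1 s + w_2 t$.

```python
import numpy as np

# Data from the question: columns are mu, S, T.
D = np.array([
    [132, 52, 173], [143, 59, 184], [153, 67, 194], [162, 73, 211],
    [154, 64, 196], [168, 74, 220], [137, 54, 188], [149, 61, 188],
    [159, 65, 207], [128, 46, 167], [166, 72, 217],
], dtype=float)
mu, S, T = D[:, 0], D[:, 1], D[:, 2]

# Eq. (3): Delta S = sum_i mu_i (S_i - S_bar), and likewise for T.
dS = np.sum(mu * (S - S.mean()))
dT = np.sum(mu * (T - T.mean()))
print(dS, dT)  # reproduces the (1215.5455, 2290.0) reported below

# Least-squares fit of mu = w0 + w1*S + w2*T.
w0, w1, w2 = np.linalg.lstsq(np.column_stack([np.ones_like(S), S, T]),
                             mu, rcond=None)[0]
```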



The optimization problem is formulated as follows:



$$
\begin{aligned}
&\text{min.} \qquad p_{11}\,\Delta s^2 + 2 p_{12}\,\Delta s\,\Delta t + p_{22}\,\Delta t^2 \\[5pt]
&\text{s.t.} \qquad\; w_1\,\Delta s + w_2\,\Delta t + w_0 = \mu_0 + \Delta\mu - w_1 s_0 - w_2 t_0
\end{aligned}
$$



where the objective is the squared Mahalanobis distance between $(s_0, t_0)$ and a point whose regression value is $\mu_0 + \Delta\mu$.
Here $\mu_0$ is the regression value at $(s_0, t_0)$;
$p_{11}$, $p_{12}$, $p_{22}$ are the entries of the inverse covariance matrix;
and $w_0$, $w_1$, $w_2$ are the linear regression coefficients.
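One way to solve this constrained problem in closed form rather than with a numerical solver (my derivation, via a Lagrange multiplier): since $\mu_0 = w_0 + w_1 s_0 + w_2 t_0$, the constraint reduces to $w_1\,\Delta s + w_2\,\Delta t = \Delta\mu$, and minimizing $x^\top C^{-1} x$ subject to $w^\top x = \Delta\mu$ gives $x^* = \Delta\mu\, C w / (w^\top C w)$ with $w = (w_1, w_2)^\top$. A sketch continuing the code above (the value of $\Delta\mu$ is an arbitrary choice of mine):

```python
# Closed-form minimizer of x^T C^{-1} x subject to w^T x = dmu,
# via a Lagrange multiplier: x* = dmu * C w / (w^T C w).
C = np.cov(np.column_stack([S, T]), rowvar=False)  # 2x2 covariance of (S, T)
w = np.array([w1, w2])
dmu = 1.0                                          # arbitrary target change in mu
x_star = dmu * (C @ w) / (w @ C @ w)
```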



Solving the above optimization problem, the Eq. (3) vector $(\Delta S, \Delta T) = (1215.5455,\ 2290.0)$ and the solution vector $(\Delta s^*, \Delta t^*) = (5.361,\ 10.0997)$ have the same direction.
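Continuing the sketch, the collinearity can be checked with the 2D cross product of the two vectors, which should vanish up to rounding:

```python
v_eq3 = np.array([dS, dT])
# Normalized 2D cross product; ~0 means the two vectors are parallel.
cross = v_eq3[0] * x_star[1] - v_eq3[1] * x_star[0]
print(cross / (np.linalg.norm(v_eq3) * np.linalg.norm(x_star)))
```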



How can this result be algebraically proven?










mahalanobis-distance






asked Sep 9 at 12:18 · edited Sep 9 at 12:56 · metamath
























