Amortized Approximate Map Recovery

I want to implement Sparse Extended Information Filter (SEIF) SLAM, which I studied from Probabilistic Robotics by Dr. Sebastian Thrun. I have a doubt about the derivation on page 321 of Chapter 12 (Amortized Approximate Map Recovery), equations (12.31)–(12.33).



I have attached a picture of the relevant passage:



[Image: Amortized Approximate Map Recovery, equations (12.31)–(12.33)]



Here $\Omega$ is the sparse information matrix, $\xi$ is the information vector, and $\mu$ is the mean. $\Omega$ encodes the links between two robot poses, or between a robot pose and a landmark position.



I fail to understand how to get eq. (12.32) from eq. (12.31). I understand eq. (12.33), which is the partial derivative with respect to $\mu$, but I am lost at the line which implies $\Omega\mu = \xi$. How does $\Omega\mu = \xi$ follow from eq. (12.33)?
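
For readers who cannot open the image, here is my rough transcription of the relevant equations (from memory, so the exact notation and numbering may differ slightly from the printed text). Eq. (12.31) writes the posterior in information (canonical) form, and eq. (12.33) is its derivative with respect to $\mu$:

$$p(\mu) = \eta \exp\!\Big(-\tfrac{1}{2}\,\mu^{T}\Omega\,\mu + \xi^{T}\mu\Big),
\qquad
\frac{\partial p(\mu)}{\partial \mu}
= \eta\,\bigl(-\Omega\mu + \xi\bigr)\,\exp\!\Big(-\tfrac{1}{2}\,\mu^{T}\Omega\,\mu + \xi^{T}\mu\Big),$$

which the text then sets to zero to solve for $\mu$.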



If someone could explain the mathematical derivation in Section 12.6 (Amortized Approximate Map Recovery) of Chapter 12, it would be easy for me to implement the code portion.
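
To show where I am heading with the implementation, here is a minimal sketch (assuming a NumPy setup, with made-up function and variable names; this is my own illustration, not the book's pseudocode) of recovering $\mu$ from $\Omega$ and $\xi$ by coordinate descent on $\Omega\mu = \xi$, which is how I currently read the amortized recovery step:

```python
import numpy as np

def recover_mean(Omega, xi, mu, indices, sweeps=1):
    """Approximate map recovery: coordinate descent on Omega @ mu = xi.

    Only the coordinates listed in `indices` are refreshed, which is the
    "amortized" idea: update a few state variables per filter step instead
    of solving the full linear system every time.
    """
    for _ in range(sweeps):
        for i in indices:
            # Row-i residual with all other coordinates held fixed:
            # xi[i] - sum_{j != i} Omega[i, j] * mu[j]
            s = xi[i] - Omega[i, :] @ mu + Omega[i, i] * mu[i]
            mu[i] = s / Omega[i, i]
    return mu

# Tiny self-check on a random positive-definite "information matrix".
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
Omega = A @ A.T + 5.0 * np.eye(5)      # symmetric positive definite
mu_true = rng.normal(size=5)
xi = Omega @ mu_true                    # so the exact solution is mu_true

mu = np.zeros(5)
for _ in range(200):                    # many cheap sweeps -> converges
    mu = recover_mean(Omega, xi, mu, indices=range(5))
print(np.allclose(mu, mu_true, atol=1e-6))   # expected: True
```

As I understand it, the amortized part would then consist of calling this with only a handful of indices per filter update (e.g. the robot pose and the currently active landmarks) rather than sweeping over the whole state.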







Tags: probability, probability-distributions, partial-derivative, numerical-optimization, gaussian-elimination






asked Sep 11 at 2:29 by Encipher (246)











  • I have no background in this topic, so I do not know the model behind it. From the material you posted, eqs. $(12.32)$ and $(12.31)$ are purely definition-type statements, and there is no logical deduction between them. Once you accept the derivative on the LHS of $(12.33)$, it is easy to see that it equals $0$ if and only if $-\Omega\mu + \xi = 0$, since the other factors, i.e. $\eta$ and $\exp(\ldots)$, are non-zero.
    – BGM
    Sep 11 at 5:36
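
In equation form, the comment's argument is

$$\underbrace{\eta}_{\neq\,0}\;\underbrace{\exp\!\Big(-\tfrac{1}{2}\,\mu^{T}\Omega\,\mu + \xi^{T}\mu\Big)}_{>\,0}\;\bigl(-\Omega\mu + \xi\bigr) = 0
\quad\Longleftrightarrow\quad
-\Omega\mu + \xi = 0
\quad\Longleftrightarrow\quad
\Omega\mu = \xi.$$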















