Understanding vector interpretation of random variables.

I am reading some notes on Conditional Expectations as Orthogonal Projections (page 60, Section 3.5.2.3). There, they have an example with two coin tosses, which I paraphrase.




Let $X$ denote a random variable that is $1$ if the first coin is heads (out of two unbiased coin tosses). I.e., the sample space is $\langle TT, TH, HT, HH\rangle$, and $X$ can be seen as a vector $\langle 0, 0, 1, 1\rangle$ in $\mathbb{R}^4$ corresponding to these outcomes. Now, they say $E[X]$ is a projection of $X$ onto the subspace of constants, that is, $E[X] = E[\langle 0, 0, 1, 1\rangle] = \langle\frac12, \frac12, \frac12, \frac12\rangle$. We can see that this is correct because $X - E[X]$ is orthogonal to the subspace of constants: $(X - E[X]) \cdot \langle 1, 1, 1, 1\rangle = 0$.




I don't understand what the subspace of constants is, or how they arrive at the conclusion that $E[X] = \langle \frac12, \frac12, \frac12, \frac12 \rangle$. In particular, I don't understand how to generalize this example to two biased coin tosses with $\Pr(\text{Heads}) = p$.

probability probability-theory vector-spaces random-variables expected-value






asked Sep 10 at 19:37 by taninamdar; edited Sep 10 at 20:28 by Yujie Zha

  • P.38: “An even lower-dimensional subspace is the one-dimensional subspace of constants: vectors of the form $\langle a,a,a,a \rangle$.”
    – amd
    Sep 10 at 19:44










  • @amd Sure, but I cannot represent $E[X]$ in the case of biased coin tosses, such that $X-E[X]$ will be orthogonal to a constant vector. Because $X = \langle 0, 0, 1, 1 \rangle$ remains unchanged, and $E[X]$ must be $\langle a, a, a, a \rangle$ for some $a$. When you solve for $a$, you get $a = 1/2$, which is unrelated to the bias probability $p$.
    – taninamdar
    Sep 10 at 19:52

2 Answers

The key to extending the ideas in that section of the notes to other probability spaces is hinted at in the last paragraph, where the authors write “… for example, $\mathbf E[XY]$ acts like an inner product for random variables…” In fact, you have to use this inner product to compute the orthogonal projections. Relative to the basis that consists of the indicator variables for the elementary events, and treating the random variables as vectors now, this inner product can be computed as the matrix product $Y^T Q X$, where $Q$ is a diagonal matrix with the probabilities of the elementary events along its main diagonal. If we represent these probabilities as $p_i$, this expression can be written as $\sum_i p_i x_i y_i$.
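(Editorial aside, not part of the original answer: a minimal numpy sketch of this weighted inner product, using the fair-coin vectors from the question; the helper name inner is my own.)

    import numpy as np

    def inner(x, y, probs):
        """Probability-weighted inner product E[XY] = y^T Q x = sum_i p_i * x_i * y_i."""
        Q = np.diag(probs)
        return y @ Q @ x                   # equivalently: np.sum(probs * x * y)

    # Fair-coin setup from the question: outcomes <TT, TH, HT, HH>, each with probability 1/4.
    X = np.array([0.0, 0.0, 1.0, 1.0])     # indicator that the first coin is heads
    ones = np.ones(4)
    probs = np.full(4, 0.25)

    print(inner(X, ones, probs))           # 0.5, i.e. E[X]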



The authors are a bit cavalier about the inner product used in their examples: they use the dot product of the vectors to define orthogonality, which can be misleading. Per the previous paragraph, the matrix of the inner product for this space is actually $Q=\operatorname{diag}\left(\frac14,\frac14,\frac14,\frac14\right)$, yielding the inner product $\frac14(X\cdot Y)$. This is zero iff $X\cdot Y$ is zero, so the factor of $\frac14$ can be ignored when checking orthogonality. (In fact, this factor cancels when computing the angle between two vectors.)



For your unfair coin with $\Pr(\text{Heads})=p$, the matrix of the inner product is instead $$Q=\begin{bmatrix}(1-p)^2 &0&0&0 \\ 0& p(1-p) &0&0 \\ 0&0& p(1-p) &0 \\ 0&0&0& p^2 \end{bmatrix}.$$ You can then compute the corresponding orthogonal projections using well-known formulas: $\mathbf E[X]=\langle p,p,p,p \rangle$, as we’d expect, while $\mathbf E[X+Y \mid X] = \langle p,p,p+1,p+1 \rangle$, which jibes with the computation $\mathbf E[X+Y\mid X] = X+\mathbf E[Y]$ from a previous section. Using this inner product, $X-\mathbf E[X]$ is indeed orthogonal to $\mathbf 1 = \langle 1,1,1,1\rangle$.
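(Another editorial check, not from the answer: the same projection computed numerically for a biased coin; the particular value p = 0.3 is an arbitrary assumption.)

    import numpy as np

    p = 0.3                                     # assumed bias Pr(Heads); any value in (0, 1) works
    probs = np.array([(1 - p)**2, (1 - p)*p, p*(1 - p), p**2])   # outcomes TT, TH, HT, HH

    X = np.array([0.0, 0.0, 1.0, 1.0])          # indicator that the first coin is heads
    ones = np.ones(4)

    def inner(x, y):
        return np.sum(probs * x * y)            # <x, y> = E[XY] under the biased measure

    EX = inner(X, ones) / inner(ones, ones) * ones   # projection of X onto the constants
    print(EX)                                   # [0.3 0.3 0.3 0.3]
    print(inner(X - EX, ones))                  # ~0: X - E[X] is orthogonal to the constants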






answered Sep 11 at 20:26 by amd (accepted answer); edited Sep 12 at 18:49

  • Awesome! This was super helpful. What are keywords that I should Google if I want to read about this in detail?
    – taninamdar
    Sep 12 at 15:09






  • @taninamdar I tried “conditional expectation of random variable as projection” just now and got some promising-looking hits.
    – amd
    Sep 12 at 18:47

You could think of a "constant" as a degenerate random variable, taking the constant value $a$ in every scenario $\omega$, and that's why you get $(a,a,a,a)=a(1,1,1,1)$.



And in general, you can think of a random variable $X$ as a function/mapping, and those random variables form a linear space (as the book says, it is linear with respect to addition and scalar multiplication). Then you define an inner product on this linear space by $X \cdot Y = \mathbb E[XY]$. Let the constant $c=1$ for all $\omega$; then $(X-\mathbb E X) \cdot c = \mathbb E[(X-\mathbb E X)\cdot 1]=0$.
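(Editorial sketch, my own illustration under the same setup: with this inner product the identity above holds numerically even for a biased coin; p = 0.7 is an arbitrary assumption.)

    import numpy as np

    p = 0.7                                                       # assumed Pr(Heads)
    probs = np.array([(1 - p)**2, (1 - p)*p, p*(1 - p), p**2])    # outcomes TT, TH, HT, HH
    X = np.array([0.0, 0.0, 1.0, 1.0])

    def E(Z):
        return np.sum(probs * Z)               # expectation under the biased measure

    print(E((X - E(X)) * np.ones(4)))          # 0.0 up to floating-point error, i.e. (X - E[X]) . 1 = 0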






answered Sep 10 at 20:25 by Yujie Zha

  • But as I have mentioned in the comment above, this doesn't seem to work in the case of a biased coin.
    – taninamdar
    Sep 10 at 20:50










  • @taninamdar Not sure what you mean by ‘solve for $a$’ in your above comment. The form $(a,a,a,a)$ represents all constants, and they fall into the subspace generated by $(1,1,1,1)$; in order to check for projection, you only need to deal with $(1,1,1,1)$. Or maybe you do not know how $(1/2,1/2,1/2,1/2)$ is calculated? It is the expectation of the random variable $X$, and it is also a constant. You could try a probability other than $1/2$, and you'll still get $0$ in the end.
    – Yujie Zha
    Sep 10 at 20:54











  • Suppose I want to compute $E[X]$ in the case of a biased coin. I don't know what it is, but it should equal $(a, a, a, a)$ for some constant $a$. So $X-E[X] = (-a, -a, 1-a, 1-a)$. Now, I also know that $(X-E[X]) \cdot \mathbb{1} = 0$. Now if I solve this equation for $a$, it gives $a = \frac12$, which means $E[X] = (1/2, 1/2, 1/2, 1/2)$. But intuitively, it should depend on the bias of the coin, $p$.
    – taninamdar
    Sep 10 at 20:59











  • (For example, I know that $E[X]$ computed the 'normal' way is $p$.)
    – taninamdar
    Sep 10 at 21:04










  • @taninamdar So you should not use the naive inner product on the vector space; use the inner product defined in my answer above, and then you will find that the calculation matches. Basically, you will find that these random variables form a Hilbert space. You can view a Hilbert space as a generalization of the vector space you see in the example. Just do the calculation and you'll see.
    – Yujie Zha
    Sep 11 at 1:01









