Isonormal Gaussian process
I am reading Nualart's book *The Malliavin Calculus and Related Topics* and there are some issues that I am stuck with.
Let $H$ be a real separable Hilbert space with inner product $\langle\cdot,\cdot\rangle_H$. A stochastic process $W=\{W(h);\ h\in H\}$ defined on a complete probability space $(\Omega,\mathcal{F},P)$ is an isonormal Gaussian process if $W$ is a centered Gaussian family of random variables such that $E(W(h)W(g))=\langle h,g\rangle_H$ for all $g,h\in H$.
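For intuition, here is a minimal finite-dimensional sketch of such a process (my own illustration, not from the book): take $H=\mathbb{R}^d$ with the Euclidean inner product and a standard Gaussian vector $Z$, and set $W(h)=\langle h,Z\rangle$; a quick simulation checks the covariance identity numerically.

```python
import numpy as np

# Illustration only (assumption: H = R^d with the Euclidean inner product,
# and W(h) = <h, Z> for a standard Gaussian vector Z). Then each W(h) is
# centered Gaussian and E[W(h) W(g)] = <h, g>, the defining property above.

rng = np.random.default_rng(0)
d, n_samples = 3, 200_000

Z = rng.standard_normal((n_samples, d))   # independent draws of Z
h = np.array([1.0, -2.0, 0.5])
g = np.array([0.3, 1.0, 2.0])

Wh = Z @ h                                # samples of W(h) = <h, Z>
Wg = Z @ g                                # samples of W(g) = <g, Z>

print("empirical E[W(h)W(g)] ~", np.mean(Wh * Wg))   # should be close to <h, g>
print("inner product <h, g>  =", h @ g)
```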
The map $h\mapsto W(h)$ is linear. Now Nualart says:
- The mapping $h\to W(h)$ provides a linear isometry of $H$ onto a closed subspace of $L^2$ that will be denoted by $\mathcal{H}_1$.
Question $1$: Why does this map provide a linear isometry?
Denote now by $\mathcal{G}$ the $\sigma$-algebra generated by the random variables $\{W(h);\ h\in H\}$.
- The set $\{\exp(W(h)),\ h\in H\}$ forms a total subset of $L^2(\Omega,\mathcal{G},P)$.
The proof goes as follows: we take $X\in L^2$ such that $E(Xe^{W(h)})=0$ for all $h\in H$.
By linearity of $h\to W(h)$ we have $$E\Big(X\exp\Big(\sum_{i=1}^n t_iW(h_i)\Big)\Big)=0.\quad (3)$$
This equation says that the Laplace transform of the signed measure $B\mapsto E\big(X\,\mathbb{1}_B(W(h_1),W(h_2),\ldots,W(h_n))\big)$ is identically zero on $\mathbb{R}^n$.
Then $E(X\,\mathbb{1}_G)=0$ for all $G\in\mathcal{G}$, so that $X=0$.
Question $2$: I don't understand why linearity of $h\to W(h)$ gives us equation $(3)$, and I cannot see why the left-hand side is the Laplace transform of the signed measure $B\mapsto E\big(X\,\mathbb{1}_B(W(h_1),W(h_2),\ldots,W(h_n))\big)$.
stochastic-processes normal-distribution stochastic-analysis
asked Sep 4 at 8:57
TheVie
1 Answer
First question: $\|W(g)\|^2=\langle W(g),W(g)\rangle=\langle g,g\rangle=\|g\|^2$, so $\|W(g_1)-W(g_2)\|=\|W(g_1-g_2)\|=\|g_1-g_2\|$. This proves that $g\to W(g)$ is an isometry.

To prove (3), all you have to do is write $\sum_{i=1}^n t_iW(h_i)$ as $W(h)$ with $h=\sum_{i=1}^n t_ih_i$.

For the last part you need the following fact: suppose $\mu(B)=E\big(X\,I_B(Z)\big)$ for Borel sets $B$ in $\mathbb{R}^n$, where $Z$ is a random vector with $n$ components. Then, for any non-negative measurable function $f$, we have $\int f\,d\mu=E\big(X\,f(Z)\big)$. To prove this last equation, just note that it holds when $f$ is an indicator function (by definition), hence for all simple functions $f$, hence for all non-negative measurable functions $f$.
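Spelled out, the two observations combine as follows (a sketch; write $t=(t_1,\ldots,t_n)$, $Z=(W(h_1),\ldots,W(h_n))$, $f(x)=e^{\langle t,x\rangle}$, and $\nu(B)=E\big(X\,\mathbb{1}_B(Z)\big)$):
$$0=E\big(Xe^{W(h)}\big)=E\Big(X\exp\Big(\sum_{i=1}^n t_iW(h_i)\Big)\Big)=E\big(X\,f(Z)\big)=\int_{\mathbb{R}^n}e^{\langle t,x\rangle}\,\nu(dx).$$
The first equality is the linearity step with $h=\sum_{i=1}^n t_ih_i$, the last one is the fact above applied to the non-negative function $f$, and the right-hand side is exactly the Laplace transform of the signed measure $\nu$ evaluated at $t$; since $t$ is arbitrary, that transform vanishes identically on $\mathbb{R}^n$.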
answered Sep 4 at 9:15
Kavi Rama Murthy
Thank you. So what is the closed subspace of $L^2$ for the isometry? OK, for $(3)$ I didn't understand that we write $h$ as $\sum t_ih_i$: why this decomposition? Thanks for the last fact.
– TheVie
Sep 4 at 9:21
You already know that $E(Xe^{W(h)})=0$ for all $h$ $(\dagger)$. You are given the $t_i$'s and $h_i$'s in (3). If you choose $h$ the way I mentioned and apply $(\dagger)$, you will get (3).
– Kavi Rama Murthy
Sep 4 at 9:24