Prove the reduced Riesz representation theorem for finite-dimensional vector spaces using only the concepts of linear algebra

If someone has not studied functional analysis but has studied linear algebra, how can we help them understand the idea of the Riesz representation theorem for finite-dimensional vector spaces?



The Riesz representation theorem (Wikipedia):



Let $H$ be a Hilbert space, and let $H^*$ denote its dual space, consisting of all continuous linear functionals from $H$ into the field $\mathbb{R}$ or $\mathbb{C}$. If $x$ is an element of $H$, then the function $\varphi_x$, defined for all $y$ in $H$ by
\begin{align*}
\varphi_x(y) = \langle y, x \rangle,
\end{align*}
where $\langle \cdot, \cdot \rangle$ denotes the inner product of the Hilbert space, is an element of $H^*$. The Riesz representation theorem states that every element of $H^*$ can be written uniquely in this form.



This description is abstract to me. Since linear algebra can be seen as a reduced form of functional analysis, as a first step I would like to understand the reduced Riesz representation theorem in the setting of linear algebra.



In linear algebra, we aim to solve a linear system
\begin{align*}
A x = b
\end{align*}
where $A \in \mathbb{R}^{m \times n}$ is an $m$ by $n$ matrix, $x \in \mathbb{R}^n$ is an $n$ by $1$ column vector, and $b \in \mathbb{R}^m$ is an $m$ by $1$ column vector. The matrix $A$ transforms vectors in $\mathbb{R}^n$ to vectors in $\mathbb{R}^m$, so we write $A: \mathbb{R}^n \to \mathbb{R}^m$. But the vector $b$ actually lies in the column space of $A$, say $C(A) \cong \mathbb{R}^r$, a subspace of $\mathbb{R}^m$ whose dimension $r$ is the rank of $A$. In that sense we can say $A: \mathbb{R}^n \to \mathbb{R}^r$. If we have an $m$ by $1$ column vector $y$, then we can write
\begin{align}
y^T A x = y^T b.
\end{align}
We can rewrite this in the form of an inner product:
\begin{align}
\langle y, Ax \rangle = \langle y, b \rangle.
\end{align}
And if we consider $b$ as a functional in the dual space of $\mathbb{R}^r$, denoted by $\varphi_{Ax}(\cdot) := \langle \cdot, b \rangle$, then
\begin{align}
\varphi_{Ax}(y) = \langle y, A x \rangle.
\end{align}
Note that the mapping between $b$ and $\varphi_{Ax}$ is one-to-one; we say every $b$ in $\mathbb{R}^r$ can be written uniquely in this form. This is very close to the equation in the Riesz representation theorem, but it seems we have to use $Ax$ instead of $x$, unless $A = I$ and $m = n = r$?
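As a quick sanity check of this one-to-one correspondence, here is a minimal numpy sketch (the sizes and data are made up for illustration): it builds the functional $\varphi_b = \langle \cdot, b \rangle$ from a vector $b$, and then recovers $b$ from the functional alone by evaluating it on the standard basis.

```python
import numpy as np

# Hypothetical sizes and data, just for illustration.
m, n = 5, 3
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)
b = A @ x                           # b lies in the column space C(A)

# The functional represented by b: phi_b(y) = <y, b>.
phi_b = lambda y: np.dot(y, b)

# Recover the representing vector from the functional alone,
# by evaluating it on the standard basis of R^m.
b_recovered = np.array([phi_b(e) for e in np.eye(m)])

assert np.allclose(b, b_recovered)  # the correspondence b <-> phi_b is one-to-one
```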



I am trying to state the reduced version of the Riesz representation theorem in linear algebra, as follows:



$\mathbb{R}^r$ is a Hilbert space, and its dual space is $(\mathbb{R}^r)^* = \mathbb{R}^r$, consisting of all continuous linear functionals from $\mathbb{R}^r$ into the field $\mathbb{R}$. If $Ax$ is an element of $\mathbb{R}^r$, then the function $\varphi_{Ax}$, defined for all $y$ in $\mathbb{R}^r$ by
\begin{align*}
\varphi_{Ax}(y) = \langle y, Ax \rangle,
\end{align*}
where $\langle \cdot, \cdot \rangle$ denotes the inner product of the Hilbert space, is an element of $(\mathbb{R}^r)^*$. The Riesz representation theorem states that every element of $(\mathbb{R}^r)^*$ can be written uniquely in this form. That is, every vector $b$ in $\mathbb{R}^r$ can be represented by $\langle y, Ax \rangle$.



This looks like a connection to the "weak formulation" of $Ax = b$: we can find the solution $x \in \mathbb{R}^n$ of $Ax = b$ if for every "test" vector $y \in \mathbb{R}^m$ we have $\varphi_{Ax}(y) = \langle y, Ax \rangle = \langle y, b \rangle$.
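To illustrate the weak formulation numerically, here is a minimal sketch (with an invertible matrix chosen purely for the example): testing $\langle y, Ax \rangle = \langle y, b \rangle$ against every standard basis vector $y = e_i$ gives $m$ scalar equations, and since the $e_i$ span $\mathbb{R}^m$, these are together equivalent to $Ax = b$.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4
A = rng.standard_normal((m, m))    # assume A invertible in this toy example
b = rng.standard_normal(m)
x = np.linalg.solve(A, b)

# Weak formulation: test <y, A x> = <y, b> against each basis vector y = e_i.
for y in np.eye(m):
    assert np.isclose(y @ (A @ x), y @ b)
```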



I still do not fully understand the theorem at this moment, so there might be something wrong in what I stated above. Any comments? Could you provide me with a clearer structure of the reduced Riesz representation theorem in linear algebra?



In addition, the proofs of the Riesz representation theorem in textbooks usually work with the null space of $\varphi$, denoted $\mathrm{ker}(\varphi)$, and its orthogonal complement $\mathrm{ker}(\varphi)^\perp$. In linear algebra, we know the row space of a matrix is always orthogonal to its null space. Is there any connection between these two ideas? In other words, can we prove the reduced Riesz representation theorem for finite-dimensional vector spaces using only the concepts of linear algebra?
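(For concreteness, here is a tiny numpy check of the row-space/null-space orthogonality mentioned above; the matrix is made up for illustration.)

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 5))    # a made-up 3 x 5 matrix of full row rank

# Null space basis from the SVD: right singular vectors beyond the rank.
_, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))         # numerical rank
null_basis = Vt[r:]                # rows span the null space N(A)

# Every row of A (the row space) is orthogonal to every null space vector.
assert np.allclose(A @ null_basis.T, 0.0)
```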










Tags: linear-algebra, functional-analysis, analysis, pde, linear-transformations

asked Sep 4 at 13:09, edited Sep 4 at 18:10 – Analysis Newbie
  • Riesz representation theorem on $\mathbb{R}^n$ reduces to the fact that to every column vector there corresponds one and only one row vector, and that the correspondence is linear and preserves the length. Indeed, if $\mathbb{R}^n$ is identified with the space of column vectors, then $(\mathbb{R}^n)^\star$ is the space of row vectors.
    – Giuseppe Negro
    Sep 4 at 16:08











  • @GiuseppeNegro "every column vector there corresponds one and only one row vector" implies the corresponding row vector is always the transpose of the column vector and they have the same length? For a matrix $A$, can we say the row space of $A$ is the dual of the column space of $A$?
    – Analysis Newbie
    Sep 4 at 17:21










  • Yes, obviously the "linear correspondence" I was referring to is the transposition. On $\mathbb{R}^n$, the "Riesz isomorphism" is the transposition. On $\mathbb{C}^n$, it is the transposition and complex conjugation.
    – Giuseppe Negro
    Sep 4 at 17:24
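A quick numerical check of the point made in these comments, as a minimal numpy sketch (vectors made up for illustration): on $\mathbb{R}^n$ the Riesz map is transposition, and on $\mathbb{C}^n$ it is the conjugate transpose.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3

# Real case: the functional y -> <y, x> is the row vector x^T acting on y.
x = rng.standard_normal(n)
y = rng.standard_normal(n)
assert np.isclose(x @ y, np.dot(y, x))

# Complex case: with <y, x> = x^H y, the Riesz map is transpose + conjugation.
xc = rng.standard_normal(n) + 1j * rng.standard_normal(n)
yc = rng.standard_normal(n) + 1j * rng.standard_normal(n)
assert np.isclose(np.conj(xc) @ yc, np.vdot(xc, yc))   # x^H y = <y, x>
```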















2 Answers






























Here is a formulation of the Riesz representation theorem from my lecture notes which you might find helpful.




Let $V, W$ be vector spaces over a field $K$ with $\dim(V), \dim(W) < \infty$, and let $f: V \times W \rightarrow K$ be a nondegenerate bilinear form. Then for every $\pi \in W^*$ there exists $v \in V$ such that $$\pi = f(v, \cdot).$$
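Concretely, in coordinates a bilinear form on $\mathbb{R}^n \times \mathbb{R}^n$ is $f(v, w) = v^T M w$, and $f$ is nondegenerate exactly when $M$ is invertible; then the representing vector is found by solving one linear system. Here is a minimal numpy sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3

# A nondegenerate bilinear form f(v, w) = v^T M w (M invertible).
M = rng.standard_normal((n, n)) + n * np.eye(n)   # shifted to keep it invertible
f = lambda v, w: v @ M @ w

# A functional pi in W*, given by its coefficient vector c: pi(w) = c^T w.
c = rng.standard_normal(n)

# f(v, w) = (M^T v)^T w, so pi = f(v, .) amounts to solving M^T v = c.
v = np.linalg.solve(M.T, c)

w = rng.standard_normal(n)
assert np.isclose(f(v, w), c @ w)
```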







answered Sep 4 at 13:19, edited Sep 4 at 15:46 – user3342072
  • This version is only true if $W^*$ denotes the continuous linear forms.
    – Jochen
    Sep 4 at 15:43










  • @Jochen my bad, forgot to add $V,W$ have finite dimension.
    – user3342072
    Sep 4 at 15:46






























Every non-trivial linear functional $\Phi$ on a vector space $V$ is characterized by its null space $N = \mathcal{N}(\Phi)$, which is always of co-dimension one in $V$. This is because $\Phi(v) = 1$ for some $v$, which allows you to write
$$
w = (w - \Phi(w)v) + \Phi(w)v,
$$
and $w - \Phi(w)v \in N$ is easily verified directly. So $V = N \oplus [v]$. Conversely, every such decomposition defines a unique linear functional $\Phi$ such that $N = \mathcal{N}(\Phi)$ and $\Phi(v) = 1$. To see that $\Phi$ is unique, suppose $\Psi$ is another such linear functional. Then $\Phi(w - \Phi(w)v) = 0$ implies $\Psi(w - \Phi(w)v) = 0$, i.e. $\Psi(w) = \Psi(v)\Phi(w) = \Phi(w)$ for all $w$. This is true for finite- or infinite-dimensional real or complex linear spaces.



If $\Phi$ is a non-zero continuous linear functional on a Hilbert space $V$, then $N = \mathcal{N}(\Phi) = \Phi^{-1}\{0\}$ is closed and, therefore, there is a non-zero vector $v$ such that $v \perp N$. So $V = N \oplus [v]$, where the decomposition is orthogonal. And we can assume $\Phi(v) = 1$ by multiplying $v$ by an appropriate scalar. The linear functional $\Psi(w) = \frac{\langle w, v \rangle}{\langle v, v \rangle}$ has the same null space as $\Phi$ and satisfies $\Psi(v) = \Phi(v) = 1$. Therefore $\Psi = \Phi$.



A Hilbert space is needed in infinite dimensions in order to come up with a vector that is orthogonal to the null space of a given continuous linear functional. Continuity ensures that the null space is closed, so that this may be done. In finite-dimensional spaces, only Gram-Schmidt is needed.
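To connect this with the question's linear algebra vocabulary, here is a minimal numpy sketch of the finite-dimensional construction (the functional is made up for illustration): compute the null space of $\Phi$, take the component orthogonal to it of any vector $u$ with $\Phi(u) \neq 0$, and rescale so that the inner product reproduces $\Phi$.

```python
import numpy as np

# A hypothetical linear functional on R^4, given only as a black box.
def phi(w):
    return 2.0 * w[0] - w[1] + 3.0 * w[3]

n = 4
E = np.eye(n)

# Matrix of phi in the standard basis (a 1 x n row); its kernel is N.
row = np.array([phi(E[i]) for i in range(n)])

# Orthonormal basis of N via the SVD of the 1 x n matrix.
_, _, Vt = np.linalg.svd(row.reshape(1, -1))
N = Vt[1:]                           # the last n - 1 rows span the null space

# Pick u with phi(u) != 0, project out its null-space component, rescale.
u = E[np.argmax(np.abs(row))]
v0 = u - N.T @ (N @ u)               # v0 is orthogonal to N, and phi(v0) = phi(u)
v = v0 * phi(v0) / np.dot(v0, v0)    # now <w, v> = phi(w) for all w

w = np.random.default_rng(5).standard_normal(n)
assert np.isclose(np.dot(w, v), phi(w))
```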






answered Sep 4 at 15:56, edited Sep 4 at 16:04 – DisintegratingByParts