How to construct a matrix given the null basis of A?
Construct a $4\times 4$ matrix $A$ such that $\{(1,2,3,4),\,(1,1,2,2)\}$ is a basis of $N(A)$.
So I know that $A$ will have two pivot columns and two free columns, but beyond this I'm not sure how to approach/solve it.
linear-algebra matrices vector-spaces
asked Aug 10 at 23:44 by Alan Dennison (edited Aug 11 at 18:52)
Try taking a general $4\times 4$ matrix $A$ and establish what it means for both vectors to be in the null space. This will give you a system of equations in the entries of the matrix. See if you can continue from there.
– zzuussee, Aug 10 at 23:49
These two null space columns would be used to eliminate the free columns. Can I attempt to perform elimination on the null space columns?
– Alan Dennison, Aug 11 at 0:03
I'm not sure what elimination on the null space columns should achieve. @Cleric posted a nice answer in connection with my comment. Maybe this clears some things up.
– zzuussee, Aug 11 at 0:07
If you need to find all matrices satisfying that condition, then you can use the approach you were given. If you only need one such matrix, as indicated by "a 4x4 matrix", then there are shortcuts you can take to produce just one example. Let $v_1,v_2$ be the two given vectors and let $v$ be an unknown vector. Consider the system of two equations $v\cdot v_1=0$ and $v\cdot v_2=0$ in the $4$ unknown coordinates of $v$. You can do row reduction and find two linearly independent solutions $u_1,u_2$. Then the matrix formed by putting $u_1,u_2,u_1,u_2$ as rows will solve the problem.
– user583012, Aug 11 at 0:10
Why would this solve the problem? When you multiply the matrix by $v_1$, the components of the result are the dot products of the rows of the matrix with $v_1$, and $u_1,u_2$ were chosen so that those products are zero. Likewise for $v_2$. Now, since $u_1,u_2$ are linearly independent, the rank of the matrix is $2$. Since $v_1,v_2$ are also linearly independent, the nullity of the matrix is at least $2$. But since the rank is $2$, the nullity must be exactly $4-2=2$.
– user583012, Aug 11 at 0:14
5 Answers
The row space of a matrix is the orthogonal complement of its null space. So, you can construct the required matrix by finding a basis for this orthogonal complement. In this case, this will give you two of the rows, and the other two rows can be any linear combinations of those two rows, including rows of all zeros.
Calling the two given vectors $\mathbf n_1$ and $\mathbf n_2$, the orthogonal complement of their span is the set of all vectors $\mathbf x$ that satisfy $\mathbf n_1\cdot\mathbf x=\mathbf n_2\cdot\mathbf x=0$. This is a pair of homogeneous linear equations in the components of $\mathbf x$, so $\mathscr N(A)^\perp$ is the null space of the matrix $\small\begin{bmatrix}\mathbf n_1 & \mathbf n_2\end{bmatrix}^T$. I'm sure you know how to compute that.
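A minimal sketch of that computation, assuming SymPy is available: the null space of the $2\times4$ matrix whose rows are $\mathbf n_1$ and $\mathbf n_2$ supplies the nonzero rows of $A$.

import sympy as sp

# Stack the two given null-space vectors as rows of a 2x4 matrix.
N = sp.Matrix([[1, 2, 3, 4],
               [1, 1, 2, 2]])

# Its null space is the orthogonal complement of span{n1, n2};
# use a basis of it as the first two rows of A, padded with zero rows.
r1, r2 = N.nullspace()
A = sp.Matrix.vstack(r1.T, r2.T, sp.zeros(2, 4))

assert A * sp.Matrix([1, 2, 3, 4]) == sp.zeros(4, 1)
assert A * sp.Matrix([1, 1, 2, 2]) == sp.zeros(4, 1)
assert A.rank() == 2   # so the nullity is exactly 4 - 2 = 2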
answered Aug 11 at 1:19 by amd
This was super helpful!
– Alan Dennison, Aug 11 at 20:10
You could for instance say that you have a matrix
\begin{equation}
A =
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44}
\end{bmatrix}
\end{equation}
such that
\begin{equation}
AX = 0
\end{equation}
where
\begin{equation}
X =
\begin{bmatrix}
1 & 1 \\
2 & 1 \\
3 & 2 \\
4 & 2
\end{bmatrix}
\end{equation}
Hence we have to find $a_{ij}$'s such that
\begin{equation}
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44}
\end{bmatrix}
\begin{bmatrix}
1 & 1 \\
2 & 1 \\
3 & 2 \\
4 & 2
\end{bmatrix}
= 0,
\end{equation}
i.e. we have the following system to solve:
\begin{align}
a_{11} + 2a_{12} + 3a_{13} + 4a_{14} &= 0 \\
a_{21} + 2a_{22} + 3a_{23} + 4a_{24} &= 0 \\
a_{31} + 2a_{32} + 3a_{33} + 4a_{34} &= 0 \\
a_{41} + 2a_{42} + 3a_{43} + 4a_{44} &= 0 \\
a_{11} + a_{12} + 2a_{13} + 2a_{14} &= 0 \\
a_{21} + a_{22} + 2a_{23} + 2a_{24} &= 0 \\
a_{31} + a_{32} + 2a_{33} + 2a_{34} &= 0 \\
a_{41} + a_{42} + 2a_{43} + 2a_{44} &= 0
\end{align}
This system is underdetermined: it has 8 equations in 16 unknowns, so there are infinitely many solutions.
For example, you could construct the projection matrix $P_X$ onto the column space of $X$, i.e.
\begin{equation}
P_X = X(X^TX)^{-1}X^T =
\begin{bmatrix}
0.5455 & -0.0909 & 0.4545 & -0.1818\\
-0.0909 & 0.1818 & 0.0909 & 0.3636\\
0.4545 & 0.0909 & 0.5455 & 0.1818\\
-0.1818 & 0.3636 & 0.1818 & 0.7273
\end{bmatrix}
\end{equation}
Then the columns of the complementary projector $A = I - P_X$ span the null space of $P_X$; equivalently, the null space of $A$ is the column space of $X$:
\begin{equation}
A = I - P_X =
\begin{bmatrix}
0.4545 & 0.0909 & -0.4545 & 0.1818\\
0.0909 & 0.8182 & -0.0909 & -0.3636\\
-0.4545 & -0.0909 & 0.4545 & -0.1818\\
0.1818 & -0.3636 & -0.1818 & 0.2727
\end{bmatrix}
\end{equation}
Now check:
\begin{equation}
AX = (I - P_X)X = X - P_XX = X - X(X^TX)^{-1}X^TX = X - X = 0,
\end{equation}
and voilà, there you have a matrix whose null space is spanned by the columns of $X$.
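A quick numerical check of this projector construction, sketched with NumPy (assuming it is available):

import numpy as np

X = np.array([[1., 1.],
              [2., 1.],
              [3., 2.],
              [4., 2.]])

# Orthogonal projection onto the column space of X, and its complement.
P_X = X @ np.linalg.inv(X.T @ X) @ X.T
A = np.eye(4) - P_X

print(np.round(A, 4))            # matches the matrix displayed above
print(np.allclose(A @ X, 0))     # True: both given vectors lie in N(A)
print(np.linalg.matrix_rank(A))  # 2, so the nullity is exactly 4 - 2 = 2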
answered Aug 11 at 1:26 by Ahmad Bazzi
Let me show you how it would go. Call $v_1=(1,2,3,4)$ and $v_2=(1,1,2,2)$.
Let $v=(x_1,x_2,x_3,x_4)$. We want $v\cdot v_1=0$ and $v\cdot v_2=0$. This gives us the system of equations $$\begin{align}x_1+2x_2+3x_3+4x_4&=0\\x_1+x_2+2x_3+2x_4&=0\end{align}$$
Subtracting the first equation from the second we get: $$\begin{align}x_1+2x_2+3x_3+4x_4&=0\\\phantom{x_1}-x_2-x_3-2x_4&=0\end{align}$$
Now the system is in echelon form. In order to get two linearly independent solutions we can put first $x_3=1$, $x_4=0$ and solve for $x_1,x_2$, and then put $x_3=0$, $x_4=1$ and solve for $x_1,x_2$.
Doing that gives us two vectors $$\begin{align}u_1&=(-1,-1,1,0)\\u_2&=(0,-2,0,1)\end{align}$$
Therefore, one matrix satisfying the conditions of the problem would be $$\begin{pmatrix}-1&-1&1&0\\0&-2&0&1\\-1&-1&1&0\\0&-2&0&1\end{pmatrix}$$
obtained by putting $u_1,u_2$ and again $u_1,u_2$ as rows.
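The same back-substitution can be sketched numerically, assuming NumPy is available; the pivot variables $x_1,x_2$ are solved from the two choices of free variables described above.

import numpy as np

M = np.array([[1., 2., 3., 4.],   # x1 + 2x2 + 3x3 + 4x4 = 0
              [1., 1., 2., 2.]])  # x1 +  x2 + 2x3 + 2x4 = 0
B, F = M[:, :2], M[:, 2:]         # pivot columns (x1, x2) and free columns (x3, x4)

# For (x3, x4) = (1, 0) and (0, 1), solve B @ (x1, x2) = -F @ (x3, x4).
pivots = np.linalg.solve(B, -F)
u1 = np.concatenate([pivots[:, 0], [1., 0.]])   # (-1, -1, 1, 0)
u2 = np.concatenate([pivots[:, 1], [0., 1.]])   # ( 0, -2, 0, 1)

A = np.vstack([u1, u2, u1, u2])
print(A @ np.array([1, 2, 3, 4]))   # zero vector
print(A @ np.array([1, 1, 2, 2]))   # zero vector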
answered Aug 11 at 0:42 by user583012
This was super helpful!
– Alan Dennison, Aug 11 at 20:10
Here is a more detailed tip based on zzuussee's comment:
Let
$$A=\begin{bmatrix}a_{11}&a_{12}&a_{13}&a_{14}\\a_{21}&a_{22}&a_{23}&a_{24}\\a_{31}&a_{32}&a_{33}&a_{34}\\a_{41}&a_{42}&a_{43}&a_{44}\end{bmatrix}.$$
What could we say about each $a_{ij}$ if we require that
$$A\begin{bmatrix}1\\2\\3\\4\end{bmatrix}=A\begin{bmatrix}1\\1\\2\\2\end{bmatrix}=\begin{bmatrix}0\\0\\0\\0\end{bmatrix}?$$
Can you choose values for each $a_{ij}$ that satisfy the above requirements?
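To see explicitly how much freedom these requirements leave, here is a sketch assuming SymPy is available; the symbols a0, ..., a15 are stand-ins for the entries $a_{11},\dots,a_{44}$.

import sympy as sp

a = sp.symbols('a0:16')                  # stand-ins for a_11, ..., a_44
A = sp.Matrix(4, 4, a)
v1, v2 = sp.Matrix([1, 2, 3, 4]), sp.Matrix([1, 1, 2, 2])

# A v1 = A v2 = 0 gives 8 linear equations in the 16 unknown entries.
sol = sp.solve(list(A * v1) + list(A * v2), dict=True)[0]
print(len(sol))                                   # 8: half the entries are determined, half stay free
print((A.subs(sol) * v1).applyfunc(sp.expand))    # zero vector
print((A.subs(sol) * v2).applyfunc(sp.expand))    # zero vector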
answered Aug 11 at 0:02 by Cleric
It's well known that a matrix is determined by its effect on a basis. So let's complete our null-space basis to a basis for $\mathbb R^4$. For instance, we could use $e_1=(1,0,0,0)$ and $e_2=(0,1,0,0)$.
Now, if we call our basis $\beta$, then $[T]_\beta^\beta=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&0&0\\0&0&0&0\end{pmatrix}$ would be a matrix with the right null space (actually the first two columns could be any two vectors that are linearly independent...).
Meanwhile, the change of basis matrix from $\beta$ to the standard basis is $P=\begin{pmatrix}1&0&1&1\\0&1&2&1\\0&0&3&2\\0&0&4&2\end{pmatrix}$.
So, we could let $A=P[T]_\beta^\beta P^{-1}$.
I get $A=\begin{pmatrix}1&0&-1&\frac12\\0&1&0&-\frac12\\0&0&0&0\\0&0&0&0\end{pmatrix}$.
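The change-of-basis computation can be reproduced exactly, for example with SymPy (assuming it is available):

import sympy as sp

# Columns of P are the basis beta = {e1, e2, (1,2,3,4), (1,1,2,2)}.
P = sp.Matrix([[1, 0, 1, 1],
               [0, 1, 2, 1],
               [0, 0, 3, 2],
               [0, 0, 4, 2]])
T_beta = sp.diag(1, 1, 0, 0)   # sends the two null-basis vectors to zero

A = P * T_beta * P.inv()
print(A)   # Matrix([[1, 0, -1, 1/2], [0, 1, 0, -1/2], [0, 0, 0, 0], [0, 0, 0, 0]])
print(A * sp.Matrix([1, 2, 3, 4]), A * sp.Matrix([1, 1, 2, 2]))   # both zero vectors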
answered Aug 11 at 1:03 by Chris Custer (edited Aug 16 at 19:37)