Projection onto an affine subset without inversion
I have implemented the Chambolle–Pock algorithm for a problem of the form
$$\min_x \; \|Kx\| + \delta_H(x),$$
where $H = \{x : Ax = b\}$. The iteration scheme requires the proximal mapping of $\delta_H$, which is just the projection onto the affine subset $H$. This in turn requires the pseudoinverse of $A$, i.e. the computation of $(AA^T)^{-1}$. That poses an issue, since this computation is far more expensive than every other step in my algorithm. I thought about computing $\operatorname{Prox}_{\delta_H}$ by means of the Moreau decomposition, but that just replaces the projection onto $H$ with the projection onto its complement, which doesn't help at all.
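For reference, this is essentially the projection I compute (a minimal NumPy/SciPy sketch; names are illustrative, and at least the Cholesky factor of $AA^T$ can be cached across iterations):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# The expensive step: forming and factoring A A^T (done once, then reused).
# factor = cho_factor(A @ A.T)

def project_affine(x, A, b, factor):
    """Project x onto H = {z : Az = b} via x - A^T (A A^T)^{-1} (Ax - b)."""
    return x - A.T @ cho_solve(factor, A @ x - b)
```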
My question is: is there a way to avoid this issue, either by computing the projection more efficiently or by avoiding it altogether?
Thanks in advance!
numerical-methods convex-analysis computational-complexity
edited Aug 26 at 1:28 by Michael Hardy · asked Aug 25 at 9:17 by InspectorPing
1 Answer
You are right, you do not need to project explicitly onto $H$.
We can rewrite your problem as
$$\min_x f(Kx) + g(Ax),$$
where $f(u) = \|u\|$ and $g(v) = \delta_{\{b\}}(v)$. Let's now use the Fenchel–Legendre transform to obtain a saddle-point problem:
$$\min_x \max_{y_1,\, y_2}\; \langle Kx, y_1\rangle - f^*(y_1) + \langle Ax, y_2\rangle - g^*(y_2).$$
For simplicity, introduce the notation $y = (y_1^T, y_2^T)^T$, $B = [K^T, A^T]^T$, and $h(y) = f^*(y_1) + g^*(y_2)$. Then the above problem can be cast as
$$\min_x \max_y\; \langle Bx, y\rangle - h(y).$$
Notice that $\operatorname{Prox}_h y = (\operatorname{Prox}_{f^*} y_1, \operatorname{Prox}_{g^*} y_2)$, and you can use the aforementioned Moreau decomposition to obtain an explicit formula. Implementing the Chambolle–Pock algorithm should now be straightforward.
It needs four matrix–vector multiplications per iteration: two with $K$ and two with $A$. The algorithm takes the form
\begin{align*}
x^{n+1} &= x^n - \tau B^T y^n,\\
y^{n+1} &= \operatorname{Prox}_{\sigma h}\left(y^n + \sigma B(2x^{n+1} - x^n)\right),
\end{align*}
where the latter equation can be written more explicitly as
\begin{align*}
y_1^{n+1} &= \operatorname{Prox}_{\sigma f^*}\left(y_1^n + \sigma K(2x^{n+1} - x^n)\right), \\
y_2^{n+1} &= y_2^n + \sigma A(2x^{n+1} - x^n) - \sigma b.
\end{align*}
Here we used that $\operatorname{Prox}_{\sigma g^*} v = v - \sigma b$.
Hence, the only thing you have to do now is use the Moreau decomposition to compute $\operatorname{Prox}_{\sigma f^*}$. Of course, to ensure convergence you have to choose the steps $\tau$ and $\sigma$ properly: $\tau\sigma\|B\|^2 < 1$. Alternatively, you can use an extension of the Chambolle–Pock algorithm which does not require computing $\|B\|$.
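For illustration, a minimal NumPy sketch of this iteration, assuming $\|\cdot\|$ is the Euclidean norm (so that, by the Moreau decomposition, $\operatorname{Prox}_{\sigma f^*}$ is the projection onto the unit ball); all names are illustrative:

```python
import numpy as np

def chambolle_pock(K, A, b, tau, sigma, n_iter=1000):
    """Chambolle-Pock for min_x ||Kx||_2 + delta_{Ax=b}(x) -- a sketch.

    Assumes the Euclidean norm; requires tau*sigma*||B||^2 < 1
    with B = [K; A] stacked vertically.
    """
    x = np.zeros(K.shape[1])
    y1 = np.zeros(K.shape[0])   # dual variable paired with K
    y2 = np.zeros(A.shape[0])   # dual variable paired with A
    for _ in range(n_iter):
        # Primal step: x^{n+1} = x^n - tau * B^T y^n
        x_new = x - tau * (K.T @ y1 + A.T @ y2)
        x_bar = 2 * x_new - x
        # Prox_{sigma f^*}: projection onto the Euclidean unit ball
        z = y1 + sigma * (K @ x_bar)
        y1 = z / max(1.0, np.linalg.norm(z))
        # Prox_{sigma g^*}(v) = v - sigma * b
        y2 = y2 + sigma * (A @ x_bar) - sigma * b
        x = x_new
    return x
```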
Thank you! However, am I not running into the same problem as before when computing the proximal mapping of $h$, since again I need to compute the one of $g$?
– InspectorPing, Aug 26 at 10:25
No, before you had $\delta_H$, and now you have $g = \delta_{\{b\}}$, which is much easier for prox evaluation. Specifically, $\operatorname{Prox}_{g^*} u = u - b$.
– cheyp, Aug 26 at 10:30
Ah, I did not read that correctly; you are right! If I now want to apply the Chambolle–Pock algorithm to the problem in this form, how would that look? Is the other proximal mapping the proximal mapping of the zero function? I am sorry, I am not very well versed in this field and have only ever read the iterations off the first form of the problem you stated.
– InspectorPing, Aug 26 at 10:47
@InspectorPing, I added some details. I hope it is clear now.
– cheyp, Aug 26 at 16:39
From the optimality conditions for the saddle-point problem, you can derive that $(x,y)$ is a solution iff $B^T y = K^T y_1 + A^T y_2 = 0$, $Ax = b$, and $Kx \in \partial f^*(y_1)$. For the first two equations you can just control residuals like $\|B^T y^n\| < \epsilon$, etc. For the third, you need to know $\partial f^*$ or, equivalently, $\partial\|\cdot\|$; this depends on your norm. Alternatively, you can exploit the fact that $(y_1^n - y_1^{n+1})/\sigma + K(2x^{n+1} - x^n) \in \partial f^*(y_1^{n+1})$ and just check whether $\|(y_1^n - y_1^{n+1})/\sigma + K(x^{n+1} - x^n)\| < \epsilon$.
– cheyp, Aug 27 at 17:30
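In code, this residual-based stopping test might look like the following sketch (hypothetical names, reusing the variables from the iteration sketch above; `y1_old` holds $y_1^n$ from before the dual update):

```python
import numpy as np

def converged(K, A, b, x, x_new, y1_old, y1, y2, sigma, eps=1e-6):
    """Residual-based stopping check for the iteration above (sketch)."""
    res_dual = np.linalg.norm(K.T @ y1 + A.T @ y2)   # B^T y = 0
    res_feas = np.linalg.norm(A @ x_new - b)         # A x = b
    # surrogate for the inclusion Kx in \partial f^*(y_1):
    res_incl = np.linalg.norm((y1_old - y1) / sigma + K @ (x_new - x))
    return max(res_dual, res_feas, res_incl) < eps
```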
answered Aug 26 at 0:17, edited Aug 27 at 17:15, by cheyp