Posterior distribution of $\theta$ with prior Uniform$(0, B)$ and density $p(x;\theta) = e^{-(x-\theta)}$
I am trying to solve this problem from a past exam paper:
Let $X_1, \dots, X_n$ be a random sample of size $n$ from the density:
$p(x; \theta) = \exp\{-(x-\theta)\}, \qquad x > \theta$
In a Bayesian framework, assume that the prior distribution for
$\theta$, $\pi(\theta)$, is a Uniform distribution on the interval
$(0, B)$, where $B$ is a known positive constant. Determine the
posterior distribution of $\theta$, clearly stating the normalising
constant.
I know that posterior $\propto$ prior $\times$ likelihood.
Prior: $$\pi(\theta) = \dfrac{1}{B}$$
Likelihood: $$p(\underline{x}; \theta) = \prod_{i=1}^n \exp\{-(x_i - \theta)\} = \exp\left\{-\left(\sum_{i=1}^n x_i - n\theta\right)\right\}$$
EDIT: Changed equality to proportionality in response to the comment below.
Posterior: $$\pi(\theta \mid \underline{x}) \propto \dfrac{1}{B} \times \exp\left\{-\left(\sum_{i=1}^n x_i - n\theta\right)\right\}$$
Now I am stuck. I know the posterior distribution needs to integrate to 1, and that this is how I can find the normalising constant. However, I am unsure how to find the limits of the integration and how to actually compute the integral (I assume the integration is with respect to $\theta$?)
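To get a feel for the shape of the object I am trying to normalise, I evaluated prior $\times$ likelihood on a grid of $\theta$ values for some simulated data (a rough sketch only; the parameter values below are arbitrary):

```python
import numpy as np

# Rough sketch with simulated data; theta_true, B and n are arbitrary choices.
rng = np.random.default_rng(1)
theta_true, B, n = 2.0, 5.0, 10
x = theta_true + rng.exponential(size=n)   # draws from p(x; theta) = exp(-(x - theta)), x > theta

def unnormalised_posterior(theta):
    """Prior (1/B on (0, B)) times likelihood prod_i exp(-(x_i - theta)); zero unless every x_i > theta."""
    if not (0.0 < theta < B) or np.any(x <= theta):
        return 0.0
    return (1.0 / B) * np.exp(-(x.sum() - n * theta))

grid = np.linspace(0.0, B, 501)
values = np.array([unnormalised_posterior(t) for t in grid])
print(grid[values > 0].min(), grid[values > 0].max())  # interval where the posterior is non-zero
```

The values drop to zero outside a sub-interval of $(0, B)$, which I suspect is related to the limits I am asking about.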
probability integration probability-theory uniform-distribution bayesian
asked Aug 15 at 11:38, last edited Aug 15 at 12:29 by meenaparam
1 Answer
Minor comment: the posterior is proportional to the product of the prior and the likelihood, not equal to it.
The integration is indeed with respect to $\theta$. To find the limits, look at where the density is non-zero: we need $x_i > \theta$ for every $i$, i.e. $\theta < \min_i x_i$, and the prior restricts $\theta$ to $(0, B)$. Writing $m = \min\left(\min_i x_i,\, B\right)$, we have
$$
\pi(\theta \mid \underline{x}) = \frac{1}{\mathcal{Z}} \cdot \frac{1}{B} \exp\left\{-\left(\sum_{i=1}^n x_i - n\theta\right)\right\}, \qquad 0 < \theta < m,
$$
and we need to find the constant $\mathcal{Z}$. Integrating, one gets
$$
\int \pi(\theta \mid \underline{x}) \, d\theta
= \frac{1}{\mathcal{Z}} \cdot \frac{1}{nB}\, e^{-\sum_{i=1}^n x_i} \int_0^{m} e^{n\theta}\, d(n\theta)
= \frac{1}{\mathcal{Z}} \cdot \frac{e^{nm} - 1}{nB}\, e^{-\sum_{i=1}^n x_i} = 1
\implies
\mathcal{Z} = \frac{e^{nm} - 1}{nB}\, e^{-\sum_{i=1}^n x_i}.
$$
Finally, the posterior distribution can be written as
$$
\pi(\theta \mid \underline{x}) = \frac{1}{\mathcal{Z}} \cdot \frac{1}{B}\, \exp\left\{-\left(\sum_{i=1}^n x_i - n\theta\right)\right\} = \frac{n\, e^{n\theta}}{e^{nm} - 1}, \qquad 0 < \theta < m.
$$
answered Aug 15 at 12:01, last edited Aug 15 at 12:07 by pointguard0
Thanks for your minor comment - I have updated the post accordingly. Thanks also for explaining where the limits of the integration come from, that makes sense. Where do the extra $n$s come from, e.g. in $\frac{1}{nB}$ and $d(n\theta)$? Is that just because we have $e^{n\theta}$ in the integral?
– meenaparam, Aug 15 at 12:27
I multiply and divide by $n$, in order to make the integral look more friendly ;)
– pointguard0, Aug 15 at 12:33
Ah ok, thanks for clarifying and for your help with this!
– meenaparam, Aug 15 at 12:51
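As a quick numerical sanity check of the normalising constant derived above, the following Python sketch (with simulated data; the values of $\theta$, $B$ and $n$ are arbitrary illustrative choices) compares the closed form of $\mathcal{Z}$ with direct numerical integration and confirms that the posterior integrates to 1:

```python
import numpy as np
from scipy.integrate import quad

# Simulated data; theta_true, B and n are arbitrary illustrative values.
rng = np.random.default_rng(0)
theta_true, B, n = 2.0, 5.0, 10
x = theta_true + rng.exponential(size=n)   # sample from exp(-(x - theta)), x > theta

m = min(x.min(), B)                        # upper end of the posterior support
s = x.sum()

def unnorm(theta):
    """Prior (1/B) times likelihood exp(-(sum x_i - n*theta)), restricted to 0 < theta < m."""
    return (1.0 / B) * np.exp(-(s - n * theta)) if 0.0 < theta < m else 0.0

Z_numeric, _ = quad(unnorm, 0.0, m)
Z_closed = np.exp(-s) * (np.exp(n * m) - 1.0) / (n * B)
print(Z_numeric, Z_closed)                 # the two agree

posterior = lambda theta: n * np.exp(n * theta) / (np.exp(n * m) - 1.0)
print(quad(posterior, 0.0, m)[0])          # integrates to ~1 on (0, m)
```

Whether $\min_i x_i$ or $B$ is the binding upper limit depends on the data, so the sketch uses $m = \min(\min_i x_i, B)$ to cover both cases.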