Expected number of trials for all possible outcomes to happen
Suppose that in each trial there are exactly $n$ possible outcomes with probabilities $p_1, \dots, p_n$ which sum to $1$.
I wonder what the expected number of trials is for all outcomes to happen at least once?
If there were only two outcomes then the geometric distribution seems to solve the problem. But I don't see a simple way to generalize it to the current problem. Any ideas are highly appreciated.
Tags: probability, combinatorics, probability-distributions
Consider the least possible outcome with the smallest $p_i$. – msm, Sep 9 at 12:07
combinatorics.org/ojs/index.php/eljc/article/view/v15i1n31/pdf – bof, Sep 9 at 12:23
@bof Agreed. For the case with equal probabilities I think you meant $n(1+\frac12+\cdots+\frac1n)$, right? – msm, Sep 9 at 13:05
@msm Oops. Right. – bof, Sep 9 at 13:17
For the case with equal probabilities, see math.stackexchange.com/q/28905. – Did, Sep 9 at 13:28
asked Sep 9 at 11:57 by Igor
1 Answer
Note that $$E(T)=\sum_{t=0}^\infty P(T>t)=\sum_{t=0}^\infty P(A(1,t)\cup A(2,t)\cup\cdots\cup A(n,t))$$ where, for each $k$, $A(k,t)$ denotes the event that none of the first $t$ tries produces the result $k$. Now, $$P(A(1,t)\cup A(2,t)\cup\cdots\cup A(n,t))=\sum_{i=1}^n(-1)^{i+1}\sum_{|I|=i}P(A(I,t))$$ where, for every $I\subseteq\{1,2,\ldots,n\}$, $$A(I,t)=\bigcap_{i\in I}A(i,t)$$ Thus, $$P(A(I,t))=(1-p_I)^t$$ where $$p_I=\sum_{i\in I}p_i$$ Collecting everything, one gets $$E(T)=\sum_{t=0}^\infty\sum_{i=1}^n(-1)^{i+1}\sum_{|I|=i}(1-p_I)^t$$ that is,
$$E(T)=\sum_{i=1}^n(-1)^{i+1}\sum_{|I|=i}\frac1{p_I}$$
This expresses $E(T)$ as a sum of $2^n-1$ terms. For example, for three different results, $$E(T)=\frac1{p_1}+\frac1{p_2}+\frac1{p_3}-\frac1{1-p_1}-\frac1{1-p_2}-\frac1{1-p_3}+1$$
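As a numerical sanity check, the $2^n-1$-term inclusion-exclusion sum above can be evaluated directly by enumerating the nonempty subsets $I$. A minimal sketch (the helper name `expected_trials` and the example probabilities are illustrative, not from the original post):

```python
from itertools import combinations

def expected_trials(p):
    """E(T) = sum over nonempty I of (-1)^(|I|+1) / p_I, with p_I = sum_{i in I} p_i."""
    n = len(p)
    total = 0.0
    for size in range(1, n + 1):
        for I in combinations(range(n), size):
            p_I = sum(p[i] for i in I)
            total += (-1) ** (size + 1) / p_I
    return total

# Three-outcome example: compare against the closed form from the answer,
# 1/p1 + 1/p2 + 1/p3 - 1/(1-p1) - 1/(1-p2) - 1/(1-p3) + 1
p1, p2, p3 = 0.2, 0.3, 0.5
closed_form = 1/p1 + 1/p2 + 1/p3 - 1/(1-p1) - 1/(1-p2) - 1/(1-p3) + 1
print(expected_trials([p1, p2, p3]), closed_form)  # the two values should agree
```

The subset enumeration is exponential in $n$, which matches the $2^n-1$ term count of the formula; it is practical only for small $n$.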
In the equiprobable case, $p_k=1/n$ for every $k$ in $\{1,2,\ldots,n\}$, hence
$$E(T)=\sum_{k=1}^n(-1)^{k+1}{n\choose k}\frac nk$$ Equivalently, $$E(T)=n\sum_{k=1}^n(-1)^{k+1}{n\choose k}\int_0^1x^{k-1}\,dx$$ that is, $$E(T)=n\int_0^1(1-(1-x)^n)\,\frac{dx}x=n\int_0^1\frac{1-x^n}{1-x}\,dx$$ or $$E(T)=n\int_0^1\sum_{i=0}^{n-1}x^i\,dx$$ that is, finally, $$E(T)=n\sum_{i=1}^n\frac1i=nH_n$$ as already known.
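The identity between the alternating binomial sum and $nH_n$ can also be checked numerically for small $n$ (a quick sketch; the function names are illustrative):

```python
from math import comb

def inclusion_exclusion_equal(n):
    # E(T) = sum_{k=1}^n (-1)^{k+1} C(n,k) * n/k, for p_k = 1/n
    return sum((-1) ** (k + 1) * comb(n, k) * n / k for k in range(1, n + 1))

def n_harmonic(n):
    # E(T) = n * H_n, the classical coupon-collector value
    return n * sum(1 / i for i in range(1, n + 1))

for n in (1, 2, 5, 10):
    print(n, inclusion_exclusion_equal(n), n_harmonic(n))  # the two columns should match
```

Note that the alternating sum involves large binomial coefficients of mixed sign, so for large $n$ the $nH_n$ form is far better conditioned in floating point.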
answered Sep 9 at 13:53 by Did (edited Sep 9 at 14:03)