Expected number of trials for all possible outcomes to happen

Suppose that in each trial there are exactly $n$ possible outcomes, with probabilities $p_1, \dots, p_n$ which sum to $1$.

What is the expected number of trials until every outcome has happened at least once?

If there were only two outcomes, the geometric distribution would solve the problem, but I don't see a simple way to generalize it to $n$ outcomes. Any ideas are highly appreciated.
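Any closed form can be sanity-checked against simulation. Below is a minimal Monte Carlo sketch (Python with NumPy; the helper name `expected_trials_mc` is illustrative, not from the original post) that estimates this expectation for given probabilities:

    # Monte Carlo estimate of the expected number of trials until
    # every outcome has appeared at least once (illustrative sketch).
    import numpy as np

    def expected_trials_mc(p, reps=20000, seed=0):
        rng = np.random.default_rng(seed)
        n = len(p)
        total = 0
        for _ in range(reps):
            seen, t = set(), 0
            while len(seen) < n:            # draw until all n outcomes seen
                seen.add(int(rng.choice(n, p=p)))
                t += 1
            total += t
        return total / reps

    # Two outcomes: the geometric argument gives 1/p1 + 1/p2 - 1.
    print(expected_trials_mc([0.3, 0.7]))   # about 3.76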
probability combinatorics probability-distributions

asked Sep 9 at 11:57 by Igor

  • Consider the least likely outcome, the one with the smallest $p_i$.
    – msm
    Sep 9 at 12:07

  • combinatorics.org/ojs/index.php/eljc/article/view/v15i1n31/pdf
    – bof
    Sep 9 at 12:23

  • @bof Agreed. For the case with equal probabilities I think you meant $n(1+\frac12+\cdots+\frac1n)$, right?
    – msm
    Sep 9 at 13:05

  • @msm Oops. Right.
    – bof
    Sep 9 at 13:17

  • For the case with equal probabilities, see math.stackexchange.com/q/28905.
    – Did
    Sep 9 at 13:28
1 Answer
Note that
$$E(T)=\sum_{t=0}^\infty P(T>t)=\sum_{t=0}^\infty P(A(1,t)\cup A(2,t)\cup\cdots\cup A(n,t))$$
where, for each $k$, $A(k,t)$ denotes the event that none of the first $t$ trials produces the result $k$. Now,
$$P(A(1,t)\cup A(2,t)\cup\cdots\cup A(n,t))=\sum_{i=1}^n(-1)^{i+1}\sum_{|I|=i}P(A(I,t))$$
where, for every $I\subseteq\{1,2,\ldots,n\}$,
$$A(I,t)=\bigcap_{i\in I}A(i,t)$$
Thus,
$$P(A(I,t))=(1-p_I)^t\qquad\text{where}\qquad p_I=\sum_{i\in I}p_i$$
Collecting everything, one gets
$$E(T)=\sum_{t=0}^\infty\sum_{i=1}^n(-1)^{i+1}\sum_{|I|=i}(1-p_I)^t$$
that is,
$$E(T)=\sum_{i=1}^n(-1)^{i+1}\sum_{|I|=i}\frac1{p_I}$$
This expresses $E(T)$ as a sum of $2^n-1$ terms. For example, for three different results,
$$E(T)=\frac1{p_1}+\frac1{p_2}+\frac1{p_3}-\frac1{1-p_1}-\frac1{1-p_2}-\frac1{1-p_3}+1$$
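As a cross-check, the alternating sum can be evaluated directly by enumerating the $2^n-1$ nonempty subsets (a minimal Python sketch; the function name `expected_trials` is an illustrative choice):

    # Direct transcription of the inclusion-exclusion formula above:
    # E(T) = sum over nonempty I of (-1)^(|I|+1) / p_I.
    from itertools import combinations

    def expected_trials(p):
        n = len(p)
        total = 0.0
        for size in range(1, n + 1):                 # |I| = 1, ..., n
            for I in combinations(range(n), size):   # all subsets of that size
                p_I = sum(p[i] for i in I)
                total += (-1) ** (size + 1) / p_I
        return total

    # Three-outcome example, matching the expanded expression above:
    print(expected_trials([0.2, 0.3, 0.5]))   # about 6.655

The enumeration is exponential in $n$, which is consistent with the formula having $2^n-1$ terms; a Monte Carlo estimate such as the sketch in the question should agree with it up to sampling error.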
In the equiprobable case, $p_k=1/n$ for every $k$ in $\{1,2,\ldots,n\}$, hence
$$E(T)=\sum_{k=1}^n(-1)^{k+1}{n\choose k}\frac nk$$
Equivalently,
$$E(T)=n\sum_{k=1}^n(-1)^{k+1}{n\choose k}\int_0^1x^{k-1}\,dx$$
that is,
$$E(T)=n\int_0^1\bigl(1-(1-x)^n\bigr)\frac{dx}x=n\int_0^1\frac{1-x^n}{1-x}\,dx$$
or
$$E(T)=n\int_0^1\sum_{i=0}^{n-1}x^i\,dx$$
that is, finally,
$$E(T)=n\sum_{i=1}^n\frac1i=nH_n$$
as already known.
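A quick numerical check of the final identity (plain Python; $n=6$ is an arbitrary choice):

    # Check that the alternating sum equals n * H_n (here for n = 6).
    from math import comb

    n = 6
    alternating = sum((-1) ** (k + 1) * comb(n, k) * n / k for k in range(1, n + 1))
    harmonic = n * sum(1 / i for i in range(1, n + 1))
    print(alternating, harmonic)   # both about 14.7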

answered Sep 9 at 13:53 by Did (edited Sep 9 at 14:03)