Bounding coefficients of a uniform unit vector projected in a basis

When bounding the convergence rate of many numerical linear algebra algorithms (e.g. the Lanczos method), one typically needs a bound on the coefficients of the initial vector in the eigenbasis of the input matrix. I wonder how to prove such a general bound for a vector drawn uniformly at random from the unit sphere.



My idea so far is the following, and I am wondering whether it makes sense:
Given a vector $v \in \mathbb{R}^n$ drawn uniformly at random from the unit sphere, and some orthonormal basis $u_1, \ldots, u_n$, I decompose this unit vector in the basis, i.e., $v = \sum_i \alpha_i u_i$. The question is now whether I can lower bound the magnitude of these coefficients with probability $p \geq 1/2$, say $|\alpha_i| \geq \xi$.
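To make the setup concrete, here is a small simulation sketch (only an illustration, using the standard fact that a normalized Gaussian vector is uniform on the sphere; the basis $Q$ is an arbitrary orthonormal one obtained from a QR factorization):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# A uniform unit vector: normalize a standard Gaussian vector
# (rotation invariance makes v uniform on the sphere).
g = rng.standard_normal(n)
v = g / np.linalg.norm(g)

# An arbitrary orthonormal basis u_1, ..., u_n: the columns of Q from a
# QR factorization of a random matrix.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Coefficients alpha_i = <v, u_i>; by rotation invariance, alpha is
# itself distributed as a uniform unit vector.
alpha = Q.T @ v

print("||alpha||       =", np.linalg.norm(alpha))  # = 1 up to rounding
print("min_i |alpha_i| =", np.abs(alpha).min())
print("1/sqrt(n)       =", 1 / np.sqrt(n))
```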



My approach is the following very simple one. Generate $v$ as a standard Gaussian vector and normalize it only at the end (by rotation invariance, $v/\|v\|$ is uniform on the sphere), and let $x$ be the vector with entries $\alpha_i$:



  1. It is straightforward, using Gaussian concentration bounds, to bound the norm of the unnormalized $v$ with high probability.

  2. Conditionally on the norm being bounded, I could then look for a lower bound on the entries $X_i$ (whose squares are $\chi^2$-distributed). However, I was not able to show such a lower bound: since the entries have mean $0$, I seem to obtain only results of the form
    $$\mathbb{P}\Big(\min_i X_i > t\Big) \leq \sum_{i=1}^n \mathbb{P}(X_i > t) \leq n \cdot \exp\!\big(-t^2/(2\sigma^2)\big),$$
    and, since by symmetry $\mathbb{P}(|X_i| > t) \leq 2\exp\!\big(-t^2/(2\sigma^2)\big)$, likewise
    $$\mathbb{P}\Big(\min_i |X_i| > t\Big) \leq 2n \cdot \exp\!\big(-t^2/(2\sigma^2)\big).$$
    However, for step (3) I would need a bound in the other direction, i.e., one that lower bounds this probability.

  3. If point (2) can be obtained, I could then combine the failure probabilities of steps (1) and (2) into a final success probability, i.e., bound $\mathbb{P}\big[v_i/\|v\| \geq c/\sqrt{n}\big] \geq 1 - \delta_1 - \delta_2$, where $\delta_1, \delta_2$ are the failure probabilities from the first two steps (a numerical sanity check of this target bound follows after this list).
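As a sanity check of the target bound in step (3), the following Monte Carlo sketch (again just an illustration; the constant $c$ is a free parameter, and by rotation invariance it suffices to look at the standard basis) estimates the probability that every coefficient exceeds $c/\sqrt{n}$ in magnitude:

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials, c = 100, 2000, 0.5  # c is a free parameter of the check

hits = 0
for _ in range(trials):
    v = rng.standard_normal(n)   # unnormalized Gaussian vector
    u = v / np.linalg.norm(v)    # uniform unit vector
    # Event from step (3): all coefficients exceed c/sqrt(n) in magnitude.
    if np.abs(u).min() >= c / np.sqrt(n):
        hits += 1

print("empirical P(min_i |v_i|/||v|| >= c/sqrt(n)) =", hits / trials)
```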

I was wondering whether this approach is correct, or whether there is a better way.
Furthermore, does anyone have a hint where I can find a lower bound on the minimum of sub-Gaussian random variables?







asked Aug 28 at 15:16 by LeoW. (edited Aug 30 at 23:47)