Vectors in $\mathbb{R}^n$ orthogonal to $2^{n-1}$ sign-vectors

Fact. Let $u\in\mathbb{R}^n$, $u\neq 0$. Suppose that $u$ is orthogonal to precisely $2^{n-1}$ sign-vectors (i.e., vectors $\varepsilon\in\mathbb{R}^n$ such that $\varepsilon_i=\pm 1$ for each $1\leq i\leq n$). Then $u$ has precisely two non-zero coordinates, which have the same absolute value.



Does anyone have a reference for a proof of this fact, or a proof without a reference? I have a proof, but it is a bit long and somewhat tricky.
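For small $n$ the statement is easy to sanity-check by brute force; a minimal Python sketch (the helper name is ours, chosen for illustration):

```python
from itertools import product

def num_orthogonal_sign_vectors(u):
    """Count sign vectors eps in {-1,1}^n with <eps, u> = 0."""
    return sum(1 for eps in product((-1, 1), repeat=len(u))
               if sum(e * x for e, x in zip(eps, u)) == 0)

# n = 4, so 2^{n-1} = 8
print(num_orthogonal_sign_vectors((1, 1, 0, 0)))  # 8: two equal-magnitude nonzero coords
print(num_orthogonal_sign_vectors((1, 2, 0, 0)))  # 0: unequal magnitudes
print(num_orthogonal_sign_vectors((1, 1, 2, 0)))  # 4: three nonzero coords
```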











  • If the formal proof is long and tricky, it might be easier to just make a heuristic argument instead of proving it formally. I've found it's a lot simpler to argue heuristically when dealing with vector spaces, as formal proofs can become very pedantic very quickly.
    – Patty
    Jun 18 '17 at 22:41






  • Have you heard of Hadamard matrices (en.wikipedia.org/wiki/Hadamard_matrix)?
    – Jean Marie
    Jun 18 '17 at 22:42











  • @JeanMarie. Yes, I have. Can you apply them for $n=13$?
    – uniquesolution
    Jun 18 '17 at 22:42










  • @Patty - I'd love to hear a heuristic argument!
    – uniquesolution
    Jun 18 '17 at 22:43










  • @JeanMarie - For every $n\geq 2$, the vector $(1,1,0,\dots,0)$ satisfies the hypothesis. The problem is to show that this is essentially the only one, up to scaling and a cube-symmetry.
    – uniquesolution
    Jun 19 '17 at 7:37














Tagged: combinatorics






asked Jun 18 '17 at 22:32 by uniquesolution











4 Answers























Accepted answer (score 5)










I find it simpler to deal with sets than with orthogonal vectors, so I am going to translate the problem:

For $v=(v_1,\dots,v_n)\in \mathbb{R}^n$ let $E_v$ be the set of subsets $X$ of $\mathbb{N}_n:=\{1,\dots,n\}$ such that
$$
(1)\qquad \sum_{i\in X}v_i=\frac 12 \sum_{i\in \mathbb{N}_n}v_i.
$$
Clearly there is a bijection between the sets $X$ in $E_v$ and the sign vectors which are orthogonal to $v$,
assigning to every set $X$ the sign vector $S_X:=2e_X-\mathbb{I}$, where $\mathbb{I}$ is the vector with all entries equal to $1$ and $(e_X)_i=1$ if $i\in X$ and $0$ otherwise.
If $X\in E_v$, then we have
$$
v\cdot e_X=\sum_{i\in X}v_i=\frac 12 \sum_{i\in \mathbb{N}_n}v_i=\frac 12\, v\cdot\mathbb{I},
$$
and so $v\cdot S_X=0$. On the other hand, $v\cdot S_X=0$ implies $X\in E_v$.

For any $i\in \mathbb{N}_n$ with $v_i\ne 0$, we can define an injective map $R_i$ from $E_v$ into $P(\mathbb{N}_n)\setminus E_v$ (the complement of $E_v$ in the power set of $\mathbb{N}_n$) by setting $R_i(X)=X\,\triangle\,\{i\}$, i.e., $R_i(X)=X\cup\{i\}$ if $i\notin X$ and $R_i(X)=X\setminus\{i\}$ if $i\in X$. Clearly, if (1) is valid for $X$, then it is not valid for $R_i(X)$.

Now, if $\#(E_v)=2^{n-1}$, then $R_i$ is a bijection for every $i$ with $v_i\ne 0$, and it is clearly its own inverse.

Assume by contradiction that there exist three non-zero entries $v_{i_1},v_{i_2},v_{i_3}$, and set $X=\mathbb{N}_n\setminus\{i_1,i_2,i_3\}$.

If $X\in E_v$, then $R_{i_1}(X)\notin E_v$ and so $R_{i_2}(R_{i_1}(X))\in E_v$; similarly $R_{i_3}(R_{i_1}(X))\in E_v$. Hence
$$
v_{i_2}+v_{i_1}+\sum_{i\in X}v_i=\frac 12 \sum_{i\in \mathbb{N}_n}v_i=v_{i_3}+v_{i_1}+\sum_{i\in X}v_i,
$$
and so $v_{i_2}=v_{i_3}$. Similarly one proves $v_{i_1}=v_{i_2}$. But $X\in E_v$, so
$$
\sum_{i\in X}v_i=\frac 12 \sum_{i\in \mathbb{N}_n}v_i=v_{i_2}+v_{i_1}+\sum_{i\in X}v_i,
$$
and so $v_{i_1}=v_{i_2}=0$, a contradiction.

If $X\notin E_v$, then $R_{i_1}(X),R_{i_2}(X),R_{i_3}(X)\in E_v$, hence
$$
v_{i_1}+\sum_{i\in X}v_i=\frac 12 \sum_{i\in \mathbb{N}_n}v_i=v_{i_2}+\sum_{i\in X}v_i=v_{i_3}+\sum_{i\in X}v_i,
$$
and so $v_{i_1}=v_{i_2}=v_{i_3}$. But then $R_{i_2}(R_{i_1}(X))\notin E_v$, which implies that $R_{i_3}(R_{i_2}(R_{i_1}(X)))\in E_v$, and so
$$
v_{i_1}+v_{i_2}+v_{i_3}+\sum_{i\in X}v_i=\frac 12 \sum_{i\in \mathbb{N}_n}v_i=v_{i_1}+\sum_{i\in X}v_i,
$$
which leads to $v_{i_2}=v_{i_3}=0$, also a contradiction.

Hence there are at most two nonzero entries; clearly they are either equal or opposite, and it is also clear that no vector with only one nonzero entry satisfies (1).
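The set-theoretic objects in this answer are easy to simulate; a small sketch (names ours, example vector chosen for illustration) that checks the $E_v$ and $R_i$ properties:

```python
from itertools import combinations
from fractions import Fraction

def E(v):
    """Subsets X of {0,...,n-1} whose sum over X equals half the total sum."""
    half = Fraction(sum(v), 2)  # exact arithmetic, in case the total is odd
    return {frozenset(X) for r in range(len(v) + 1)
            for X in combinations(range(len(v)), r)
            if sum(v[i] for i in X) == half}

def R(i, X):
    """R_i(X): symmetric difference of X with {i}."""
    return X ^ frozenset({i})

v = (1, 1, 0)
Ev = E(v)
print(len(Ev))  # 4 = 2^{n-1}: here E_v = {{0}, {1}, {0,2}, {1,2}}
assert all(R(0, X) not in Ev for X in Ev)   # R_0 maps E_v into its complement
assert all(R(0, R(0, X)) == X for X in Ev)  # and is its own inverse
```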






answered Jun 21 '17 at 20:41 by san




















  • Thank you! This is very nice.
    – uniquesolution
    Jun 22 '17 at 14:01

















Score 2 (+200 bounty awarded)










The following more general version of your question was originally raised by Littlewood and Offord in their study of the roots of random polynomials.

Let $a_1, a_2, \dots, a_k$ be $k$ (not necessarily distinct) real numbers of magnitude at least $1$. What is the maximum number of expressions of the form
$$\pm a_1 \pm a_2 \pm \dots \pm a_k$$ that can lie in an interval of length less than $2$?

They proved a bound that was within a $\log k$ factor of optimal. Shortly afterwards, Erdős showed that the correct bound is $\binom{k}{\lfloor k/2 \rfloor}$. Taking his result, rescaling it, and padding the sequence out with zeros gives the following:

Let $a_1, a_2, \dots, a_n$ be real numbers, $k$ of which are nonzero. Then for any $c$, the number of solutions to
$$\pm a_1 \pm a_2 \pm \dots \pm a_n = c$$
is at most $\binom{k}{\lfloor k/2 \rfloor}\, 2^{n-k}$. Equality holds iff the nonzero terms are all equal in magnitude.

Erdős's argument is actually a close cousin of san's: WLOG we can assume that all of the $a_i$ are non-negative. Then associate each sign sequence with the set of indices of nonzero terms with positive signs. Changing negative signs to positive signs increases the sum, so the sets corresponding to equal sums form an antichain (no set is a subset of another). The bound follows from Sperner's theorem.

In terms of your problem, you're asking when the ratio $\binom{k}{\lfloor k/2 \rfloor}/2^k$ equals $\frac12$, which happens only for $k=1$ and $k=2$, and $k=1$ doesn't give $c=0$.
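The Erdős bound is easy to verify numerically on small instances; a sketch (helper name ours):

```python
from itertools import product
from math import comb
from collections import Counter

def max_solutions(a):
    """Largest number of sign choices giving the same value of ±a_1 ± ... ± a_n."""
    sums = Counter(sum(e * x for e, x in zip(eps, a))
                   for eps in product((-1, 1), repeat=len(a)))
    return max(sums.values())

a = (1, 1, 1, 0, 0)                     # n = 5 terms, k = 3 nonzero
bound = comb(3, 3 // 2) * 2 ** (5 - 3)  # binom(k, floor(k/2)) * 2^{n-k} = 12
print(max_solutions(a), bound)          # equality: nonzero terms equal in magnitude
print(max_solutions((1, 2, 4, 0, 0)))   # 4, strictly below the bound
```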






  • Very nice! How do you deduce from the equality case in Sperner's theorem that all non-zero terms are equal in magnitude?
    – uniquesolution
    Jun 23 '17 at 23:07










  • Equality holds in Sperner's theorem only when the set is a slice of the hypercube (all sets of size $\frac{k}{2}$, if $k$ is even, or either all sets of size $\frac{k+1}{2}$ or all sets of size $\frac{k-1}{2}$, if $k$ is odd). If equality holds in Littlewood–Offord, then all subsets of the corresponding size must have the same sum, which implies that all of the summands must be equal.
    – Kevin P. Costello
    Jun 23 '17 at 23:26


















Score 2













This is just a translation of san's beautiful argument into a geometric language, with a few short-cuts.

By symmetry we may assume that
$$u_1\geq u_2\geq\cdots\geq u_n\geq 0.\qquad(1)$$
If $n=2$ then the only vectors orthogonal to sign-vectors are proportional to sign-vectors themselves, so the assertion is clear. Assume henceforth that $n\geq 3$.
Put
$$E_u=\{\varepsilon\in\{-1,1\}^n:\ \langle\varepsilon,u\rangle = 0\}.$$
Consider the $i$-th coordinate "sign-flip", i.e., the cube symmetry $R_i$ defined by multiplying the $i$-th coordinate of a vector by $-1$:
$$R_i(x_1,\dots,x_i,\dots,x_n)=(x_1,\dots,-x_i,\dots,x_n).$$

There are two crucial properties of $R_i$ in connection with our problem, which are easy to check:

  1. For each $i$ such that $u_i\neq 0$, $R_i$ is an injection of $E_u$ into its complement $(E_u)^c$ in $\{-1,1\}^n$,
    and if $|E_u|=2^{n-1}$ it is a bijection.

  2. $R_i^2=\mathrm{Id}$.

Now assume that $u_3>0$. Put $\varepsilon=(-1,-1,-1,+1,\dots,+1)$.

Case A: $\varepsilon\in E_u$.

$R_2\varepsilon$ does not belong to $E_u$, and since $R_1$ is a bijection such that $R_1^2=\mathrm{Id}$, the sign-vector $R_1(R_2\varepsilon)$ belongs to $E_u$. That is:
$$u_1+u_2-u_3=-\sum_{i=4}^n u_i.$$
By (1), the r.h.s. is not positive and the l.h.s. is not negative, so they both must be zero, implying that $u_1+u_2=u_3$, which (by (1) again) is impossible unless $u_1=u_2=u_3=0$, a contradiction.

Case B: $\varepsilon\notin E_u$.

Flipping once with $R_1$ takes $\varepsilon$ into $E_u$. Flipping again with $R_2$ takes $R_1\varepsilon$ out of $E_u$, but flipping a third time with $R_3$ takes it back, so $R_1(R_2(R_3\varepsilon))\in E_u$. Hence:
$$u_1+u_2+u_3=-\sum_{i=4}^n u_i,$$
and we arrive at a contradiction as before.

It follows that $u_3=0$, and so there are only two non-zero coordinates, whence
$u_1=u_2$ as in the two-dimensional case above.
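The two crucial properties of $R_i$ can be confirmed mechanically; a small sketch (example vector chosen arbitrarily):

```python
from itertools import product

u = (3, 2, 1)   # sorted as in (1), with u_3 > 0
E_u = {eps for eps in product((-1, 1), repeat=3)
       if sum(e * x for e, x in zip(eps, u)) == 0}

def R(i, eps):
    """The cube symmetry R_i: flip the sign of coordinate i."""
    return eps[:i] + (-eps[i],) + eps[i + 1:]

for i in range(3):                                     # every u_i is nonzero here
    assert all(R(i, eps) not in E_u for eps in E_u)    # property 1 (injection)
    assert all(R(i, R(i, eps)) == eps for eps in E_u)  # property 2 (involution)
print(len(E_u))  # 2, not 2^{n-1} = 4, consistent with the conclusion u_3 = 0
```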






  • Nice translation and nice shortcuts. Maybe $E_v$ should be $E_u$.
    – san
    Jun 22 '17 at 14:57










  • Thank you. Of course, $E_u$.
    – uniquesolution
    Jun 22 '17 at 15:21

















Score 1













Partial solution: I have made this community wiki so that anyone can finish the answer if they know how (otherwise, feel free to just create a new answer with your full solution). Let $E$ be the set of sign vectors which are orthogonal to $v$. If, for all $e\in E$, we have $e_1=e_2$, then this set must be exactly
$$
E=\{(1,1,\pm1,\dots,\pm1)\}\cup\{(-1,-1,\mp1,\dots,\mp1)\},
$$
which can be seen via the involution of multiplication by $-1$. Now $v$ is perpendicular to $(1,1,1,\dots,1)$ and $(1,1,-1,\dots,-1)$, so it is perpendicular to their sum, $(2,2,0,\dots,0)$. Thus $v_1=-v_2$. If $v_1=v_2=0$, then $E$ would not be the set described above, so we must have $|v_1|=|v_2|\neq0$. Given the description of $E$ above, it is now easy to see that the last $n-2$ coordinates of $v$ are $0$.

In fact, the above logic works if $e_i=e_j$ for all $e\in E$ and any pair of indices $i,j$, and similarly if $e_i=-e_j$ for all $e\in E$.

Now it suffices to show that no such $E$ can exist which doesn't meet those restrictions. I am thinking that we should use the fact that $E$ is closed under multiplication by $-1$.
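The closure property invoked at the end, and the structure of $E$ in the $e_1=e_2$ case, are straightforward to confirm on an example (vector chosen for illustration):

```python
from itertools import product

def ortho_signs(v):
    """Sign vectors orthogonal to v."""
    return {eps for eps in product((-1, 1), repeat=len(v))
            if sum(e * x for e, x in zip(eps, v)) == 0}

E = ortho_signs((3, -3, 0, 0))  # here e_1 = e_2 for every e in E
print(len(E))                   # 8 = 2^{n-1}
assert all(e[0] == e[1] for e in E)                   # matches the described set
assert all(tuple(-x for x in e) in E for e in E)      # closed under negation
```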






share|cite|improve this answer






















    Your Answer




    StackExchange.ifUsing("editor", function ()
    return StackExchange.using("mathjaxEditing", function ()
    StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
    StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
    );
    );
    , "mathjax-editing");

    StackExchange.ready(function()
    var channelOptions =
    tags: "".split(" "),
    id: "69"
    ;
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function()
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled)
    StackExchange.using("snippets", function()
    createEditor();
    );

    else
    createEditor();

    );

    function createEditor()
    StackExchange.prepareEditor(
    heartbeatType: 'answer',
    convertImagesToLinks: true,
    noModals: false,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: 10,
    bindNavPrevention: true,
    postfix: "",
    noCode: true, onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    );



    );













     

    draft saved


    draft discarded


















    StackExchange.ready(
    function ()
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f2327869%2fvectors-in-mathbbrn-orthogonal-to-2n-1-sign-vectors%23new-answer', 'question_page');

    );

    Post as a guest






























    4 Answers
    4






    active

    oldest

    votes








    4 Answers
    4






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes








    up vote
    5
    down vote



    accepted










    I find it simpler to deal with sets than with orthogonal vectors, so I am going to translate the problem:



    For $v=(v_1,dots,v_n)in BbbR^n$ let $E_v$ be the sets of subsets $X$ of $BbbN_n:=1,dots,n$ such that
    $$
    (1)quadquadsum_iin Xv_i=frac 12 sum_iin BbbN_nv_i.
    $$
    Clearly there is a bijection between the sets $X$ in $E_v$ and the sign vectors which are orthogonal to $v$,
    assigning to every set $X$ the sign vector $S_X:=2e_X-BbbI$, where $BbbI$ is the vector with all entries equal to $1$ and $(e_X)_i=1$ if $iin X$ and $0$ otherwise.
    If $Xin E_v$, then we have
    $$
    vcdot e_X=sum_iin Xv_i=frac 12 sum_iin BbbN_nv_i=frac 12 vcdotBbbI,
    $$
    and so $vcdot S_X=0$. On the other hand $vcdot S_X=0$ implies $Xin E_v$.



    For any $iin BbbN_n$ with $v_ine 0$, we can define an injective map $R_i$ from $E_v$ into $P(BbbN_n)setminus E_v$ (the complement of $E_v$ in the powerset of $BbbN_n$) defining $R_i(X)=Xtrianglei$, i.e., $R_i(X)=Xcupi$ if $inotin X$ and $R_i(X)=Xsetminusi$ if $iin X$. Clearly, if (1) is valid for $X$, then it is not valid for $R_i(X)$.



    Now, if $#(E_v)=2^n-1$, then $R_i$ is a bijection for every $i$ with $v_ine 0$, and it is clearly its own inverse.



    Assume by contradiction that there exist three non-zero entries $v_i_1,v_i_2,v_i_3$, and set $X=BbbN_nsetminusi_1,i_2,i_3$.



    If $Xin E_v$, then $R_i_1(X)notin E_v$ and so $R_i_2(R_i_1(X))in E_v$, similarly
    $R_i_3(R_i_1(X))in E_v$, hence
    $$
    v_2+v_1+sum_iin Xv_i=frac 12 sum_iin BbbN_nv_i=v_3+v_1+sum_iin Xv_i,
    $$
    and so $v_2=v_3$. Simlarly one proves $v_1=v_2$. But $Xin E_v$, so
    $$
    sum_iin Xv_i=frac 12 sum_iin BbbN_nv_i=v_2+v_1+sum_iin Xv_i,
    $$
    and so $v_1=v_2=0$, a contradiction.



    If $Xnotin E_v$, then $R_i_1(X),R_i_2(X),R_i_3(X)in E_v$, hence
    $$
    v_1+sum_iin Xv_i=frac 12 sum_iin BbbN_nv_i=v_2+sum_iin Xv_i=v_3+sum_iin Xv_i,
    $$
    and so $v_1=v_2=v_3$. But then $R_i_2(R_i_1(X))notin E_v$ which implies that
    $R_i_3(R_i_2(R_i_1(X)))in E_v$ and so
    $$
    v_1+v_2+v_3+sum_iin Xv_i=frac 12 sum_iin BbbN_nv_i=v_1+sum_iin Xv_i,
    $$
    which leads to $v_2=v_3=0$, also a contradiction.



    Hence there are at most two nonzero entries, and clearly they are either equal or they are opposite, and it is also clear that no vector with only one nonzero entry satisfies (1).






    share|cite|improve this answer




















    • Thank you! This is very nice.
      – uniquesolution
      Jun 22 '17 at 14:01














    up vote
    5
    down vote



    accepted










    I find it simpler to deal with sets than with orthogonal vectors, so I am going to translate the problem:



    For $v=(v_1,dots,v_n)in BbbR^n$ let $E_v$ be the sets of subsets $X$ of $BbbN_n:=1,dots,n$ such that
    $$
    (1)quadquadsum_iin Xv_i=frac 12 sum_iin BbbN_nv_i.
    $$
    Clearly there is a bijection between the sets $X$ in $E_v$ and the sign vectors which are orthogonal to $v$,
    assigning to every set $X$ the sign vector $S_X:=2e_X-BbbI$, where $BbbI$ is the vector with all entries equal to $1$ and $(e_X)_i=1$ if $iin X$ and $0$ otherwise.
    If $Xin E_v$, then we have
    $$
    vcdot e_X=sum_iin Xv_i=frac 12 sum_iin BbbN_nv_i=frac 12 vcdotBbbI,
    $$
    and so $vcdot S_X=0$. On the other hand $vcdot S_X=0$ implies $Xin E_v$.



    For any $iin BbbN_n$ with $v_ine 0$, we can define an injective map $R_i$ from $E_v$ into $P(BbbN_n)setminus E_v$ (the complement of $E_v$ in the powerset of $BbbN_n$) defining $R_i(X)=Xtrianglei$, i.e., $R_i(X)=Xcupi$ if $inotin X$ and $R_i(X)=Xsetminusi$ if $iin X$. Clearly, if (1) is valid for $X$, then it is not valid for $R_i(X)$.



    Now, if $#(E_v)=2^n-1$, then $R_i$ is a bijection for every $i$ with $v_ine 0$, and it is clearly its own inverse.



    Assume by contradiction that there exist three non-zero entries $v_i_1,v_i_2,v_i_3$, and set $X=BbbN_nsetminusi_1,i_2,i_3$.



    If $Xin E_v$, then $R_i_1(X)notin E_v$ and so $R_i_2(R_i_1(X))in E_v$, similarly
    $R_i_3(R_i_1(X))in E_v$, hence
    $$
    v_2+v_1+sum_iin Xv_i=frac 12 sum_iin BbbN_nv_i=v_3+v_1+sum_iin Xv_i,
    $$
    and so $v_2=v_3$. Simlarly one proves $v_1=v_2$. But $Xin E_v$, so
    $$
    sum_iin Xv_i=frac 12 sum_iin BbbN_nv_i=v_2+v_1+sum_iin Xv_i,
    $$
    and so $v_1=v_2=0$, a contradiction.



    If $Xnotin E_v$, then $R_i_1(X),R_i_2(X),R_i_3(X)in E_v$, hence
    $$
    v_1+sum_iin Xv_i=frac 12 sum_iin BbbN_nv_i=v_2+sum_iin Xv_i=v_3+sum_iin Xv_i,
    $$
    and so $v_1=v_2=v_3$. But then $R_i_2(R_i_1(X))notin E_v$ which implies that
    $R_i_3(R_i_2(R_i_1(X)))in E_v$ and so
    $$
    v_1+v_2+v_3+sum_iin Xv_i=frac 12 sum_iin BbbN_nv_i=v_1+sum_iin Xv_i,
    $$
    which leads to $v_2=v_3=0$, also a contradiction.



    Hence there are at most two nonzero entries, and clearly they are either equal or they are opposite, and it is also clear that no vector with only one nonzero entry satisfies (1).






    share|cite|improve this answer




















    • Thank you! This is very nice.
      – uniquesolution
      Jun 22 '17 at 14:01












    up vote
    5
    down vote



    accepted







    up vote
    5
    down vote



    accepted






    I find it simpler to deal with sets than with orthogonal vectors, so I am going to translate the problem:



    For $v=(v_1,dots,v_n)in BbbR^n$ let $E_v$ be the sets of subsets $X$ of $BbbN_n:=1,dots,n$ such that
    $$
    (1)quadquadsum_iin Xv_i=frac 12 sum_iin BbbN_nv_i.
    $$
    Clearly there is a bijection between the sets $X$ in $E_v$ and the sign vectors which are orthogonal to $v$,
    assigning to every set $X$ the sign vector $S_X:=2e_X-BbbI$, where $BbbI$ is the vector with all entries equal to $1$ and $(e_X)_i=1$ if $iin X$ and $0$ otherwise.
    If $Xin E_v$, then we have
    $$
    vcdot e_X=sum_iin Xv_i=frac 12 sum_iin BbbN_nv_i=frac 12 vcdotBbbI,
    $$
    and so $vcdot S_X=0$. On the other hand $vcdot S_X=0$ implies $Xin E_v$.



    For any $iin BbbN_n$ with $v_ine 0$, we can define an injective map $R_i$ from $E_v$ into $P(BbbN_n)setminus E_v$ (the complement of $E_v$ in the powerset of $BbbN_n$) defining $R_i(X)=Xtrianglei$, i.e., $R_i(X)=Xcupi$ if $inotin X$ and $R_i(X)=Xsetminusi$ if $iin X$. Clearly, if (1) is valid for $X$, then it is not valid for $R_i(X)$.



    Now, if $#(E_v)=2^n-1$, then $R_i$ is a bijection for every $i$ with $v_ine 0$, and it is clearly its own inverse.



    Assume by contradiction that there exist three non-zero entries $v_i_1,v_i_2,v_i_3$, and set $X=BbbN_nsetminusi_1,i_2,i_3$.



    If $Xin E_v$, then $R_i_1(X)notin E_v$ and so $R_i_2(R_i_1(X))in E_v$, similarly
    $R_i_3(R_i_1(X))in E_v$, hence
    $$
    v_2+v_1+sum_iin Xv_i=frac 12 sum_iin BbbN_nv_i=v_3+v_1+sum_iin Xv_i,
    $$
    and so $v_2=v_3$. Simlarly one proves $v_1=v_2$. But $Xin E_v$, so
    $$
    sum_iin Xv_i=frac 12 sum_iin BbbN_nv_i=v_2+v_1+sum_iin Xv_i,
    $$
    and so $v_1=v_2=0$, a contradiction.



    If $Xnotin E_v$, then $R_i_1(X),R_i_2(X),R_i_3(X)in E_v$, hence
    $$
    v_1+sum_iin Xv_i=frac 12 sum_iin BbbN_nv_i=v_2+sum_iin Xv_i=v_3+sum_iin Xv_i,
    $$
    and so $v_1=v_2=v_3$. But then $R_i_2(R_i_1(X))notin E_v$ which implies that
    $R_i_3(R_i_2(R_i_1(X)))in E_v$ and so
    $$
    v_1+v_2+v_3+sum_iin Xv_i=frac 12 sum_iin BbbN_nv_i=v_1+sum_iin Xv_i,
    $$
    which leads to $v_2=v_3=0$, also a contradiction.



    Hence there are at most two nonzero entries, and clearly they are either equal or they are opposite, and it is also clear that no vector with only one nonzero entry satisfies (1).






    share|cite|improve this answer












    I find it simpler to deal with sets than with orthogonal vectors, so I am going to translate the problem:



    For $v=(v_1,dots,v_n)in BbbR^n$ let $E_v$ be the sets of subsets $X$ of $BbbN_n:=1,dots,n$ such that
    $$
    (1)quadquadsum_iin Xv_i=frac 12 sum_iin BbbN_nv_i.
    $$
    Clearly there is a bijection between the sets $X$ in $E_v$ and the sign vectors which are orthogonal to $v$,
    assigning to every set $X$ the sign vector $S_X:=2e_X-BbbI$, where $BbbI$ is the vector with all entries equal to $1$ and $(e_X)_i=1$ if $iin X$ and $0$ otherwise.
    If $Xin E_v$, then we have
    $$
    vcdot e_X=sum_iin Xv_i=frac 12 sum_iin BbbN_nv_i=frac 12 vcdotBbbI,
    $$
    and so $vcdot S_X=0$. On the other hand $vcdot S_X=0$ implies $Xin E_v$.



    For any $iin BbbN_n$ with $v_ine 0$, we can define an injective map $R_i$ from $E_v$ into $P(BbbN_n)setminus E_v$ (the complement of $E_v$ in the powerset of $BbbN_n$) defining $R_i(X)=Xtrianglei$, i.e., $R_i(X)=Xcupi$ if $inotin X$ and $R_i(X)=Xsetminusi$ if $iin X$. Clearly, if (1) is valid for $X$, then it is not valid for $R_i(X)$.



    Now, if $#(E_v)=2^n-1$, then $R_i$ is a bijection for every $i$ with $v_ine 0$, and it is clearly its own inverse.



    Assume by contradiction that there exist three non-zero entries $v_i_1,v_i_2,v_i_3$, and set $X=BbbN_nsetminusi_1,i_2,i_3$.



    If $Xin E_v$, then $R_i_1(X)notin E_v$ and so $R_i_2(R_i_1(X))in E_v$, similarly
    $R_i_3(R_i_1(X))in E_v$, hence
    $$
    v_2+v_1+sum_iin Xv_i=frac 12 sum_iin BbbN_nv_i=v_3+v_1+sum_iin Xv_i,
    $$
    and so $v_2=v_3$. Simlarly one proves $v_1=v_2$. But $Xin E_v$, so
    $$
    sum_iin Xv_i=frac 12 sum_iin BbbN_nv_i=v_2+v_1+sum_iin Xv_i,
    $$
    and so $v_1=v_2=0$, a contradiction.



    If $Xnotin E_v$, then $R_i_1(X),R_i_2(X),R_i_3(X)in E_v$, hence
    $$
    v_1+sum_iin Xv_i=frac 12 sum_iin BbbN_nv_i=v_2+sum_iin Xv_i=v_3+sum_iin Xv_i,
    $$
    and so $v_1=v_2=v_3$. But then $R_i_2(R_i_1(X))notin E_v$ which implies that
    $R_i_3(R_i_2(R_i_1(X)))in E_v$ and so
    $$
    v_1+v_2+v_3+sum_iin Xv_i=frac 12 sum_iin BbbN_nv_i=v_1+sum_iin Xv_i,
    $$
    which leads to $v_2=v_3=0$, also a contradiction.



    Hence there are at most two nonzero entries, and clearly they are either equal or they are opposite, and it is also clear that no vector with only one nonzero entry satisfies (1).







    share|cite|improve this answer












    share|cite|improve this answer



    share|cite|improve this answer










    answered Jun 21 '17 at 20:41









    san

    11.9k11132




    11.9k11132











    • Thank you! This is very nice.
      – uniquesolution
      Jun 22 '17 at 14:01
















    • Thank you! This is very nice.
      – uniquesolution
      Jun 22 '17 at 14:01















    Thank you! This is very nice.
    – uniquesolution
    Jun 22 '17 at 14:01




    Thank you! This is very nice.
    – uniquesolution
    Jun 22 '17 at 14:01










    up vote
    2
    down vote



    +200










    The following more general version of your question was originally raised by Littlewood and Offord in their study of the roots of random polynomials.




    Let $a_1, a_2, dots, a_k$ be $k$ (not necessarily distinct) real numbers of magnitude at least $1$. What is the maximum number of expressions of the form
    $$pm a_1 pm a_2 dots pm a_k$$ that can lie in an interval of length less than $2$?




    They proved a bound that was within a $log k$ factor of optimal. Shortly afterwards, Erdős showed that the correct bound is $binomklfloor k/2 rfloor$. Taking his result, rescaling it, and padding the sequence out with zeros gives the following:




    Let $a_1, a_2, dots, a_n$ be real numbers, $k$ of which are nonzero. Then for any $c$, the number of solutions to
    $$pm a_1 pm a_2 dots pm a_n = c$$
    is at most $binomklfloor k/2 rfloor 2^n-k$. Equality holds iff the nonzero terms are all equal in magnitude.




    Erdős's argument is actually a close cousin of San's: WLOG we can assume that all of the $a_i$ are non-negative. Then associate each sign sequence with the set corresponding to the indices of nonzero terms with positive signs. Changing negative signs to positive signs increases the sum, so the sets corresponding to equal sums form an antichain (no set is a subset of another). The bound follows from Sperner's Theorem.



    In terms of your problem, you're stating that the ratio $binomklfloor k/2 rfloor/2^k$ is equal to $frac12$, which only happens for $k=1$ and $k=2$, and $k=1$ doesn't give $c=0$.






    share|cite|improve this answer






















    • Very nice! How do you deduce from the equality case in Sperner's theorem that all non-zero terms are equal in magnitude?
      – uniquesolution
      Jun 23 '17 at 23:07










    • Equality holds in Sperner's theorem only when the set is a slice of the hypercube (either all sets of size $frack2$, if $k$ is even, or either all sets of size $frack+12$ or all sets of size $frack-12$, if $k$ is odd). If equality holds in Littlewood-Offord, then all subsets of the corresponding size must have the same sum, which implies that all of the summands must be equal.
      – Kevin P. Costello
      Jun 23 '17 at 23:26















    up vote
    2
    down vote



    +200










    The following more general version of your question was originally raised by Littlewood and Offord in their study of the roots of random polynomials.




    Let $a_1, a_2, dots, a_k$ be $k$ (not necessarily distinct) real numbers of magnitude at least $1$. What is the maximum number of expressions of the form
    $$pm a_1 pm a_2 dots pm a_k$$ that can lie in an interval of length less than $2$?




    They proved a bound that was within a $log k$ factor of optimal. Shortly afterwards, Erdős showed that the correct bound is $binomklfloor k/2 rfloor$. Taking his result, rescaling it, and padding the sequence out with zeros gives the following:




    Let $a_1, a_2, dots, a_n$ be real numbers, $k$ of which are nonzero. Then for any $c$, the number of solutions to
    $$pm a_1 pm a_2 dots pm a_n = c$$
    is at most $binomklfloor k/2 rfloor 2^n-k$. Equality holds iff the nonzero terms are all equal in magnitude.




    Erdős's argument is actually a close cousin of San's: WLOG we can assume that all of the $a_i$ are non-negative. Then associate each sign sequence with the set corresponding to the indices of nonzero terms with positive signs. Changing negative signs to positive signs increases the sum, so the sets corresponding to equal sums form an antichain (no set is a subset of another). The bound follows from Sperner's Theorem.



    In terms of your problem, you're stating that the ratio $\binom{k}{\lfloor k/2 \rfloor}/2^k$ is equal to $\frac{1}{2}$, which only happens for $k=1$ and $k=2$, and $k=1$ doesn't give $c=0$.
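    These counts are easy to sanity-check by brute force for small $n$. The following short Python sketch is not part of the thread (the helper name `orthogonal_count` is mine); it enumerates all sign vectors directly:

    ```python
    from itertools import product
    from math import comb

    def orthogonal_count(u):
        """Number of sign vectors eps in {-1,+1}^n with <eps, u> = 0."""
        return sum(1 for eps in product((-1, 1), repeat=len(u))
                   if sum(e * x for e, x in zip(eps, u)) == 0)

    # Two equal-magnitude nonzero coordinates give exactly 2^(n-1) solutions:
    assert orthogonal_count((3, -3, 0, 0, 0)) == 2 ** 4

    # Four equal nonzero coordinates only reach binom(4,2) * 2^(5-4) = 12 < 16:
    assert orthogonal_count((1, 1, 1, 1, 0)) == comb(4, 2) * 2 ** (5 - 4)

    # binom(k, floor(k/2)) / 2^k equals 1/2 only for k = 1 and k = 2:
    assert [k for k in range(1, 20) if 2 * comb(k, k // 2) == 2 ** k] == [1, 2]
    ```
    
    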






    edited Jun 23 '17 at 21:52
    answered Jun 23 '17 at 21:47
    – Kevin P. Costello











    • Very nice! How do you deduce from the equality case in Sperner's theorem that all non-zero terms are equal in magnitude?
      – uniquesolution
      Jun 23 '17 at 23:07










    • Equality holds in Sperner's theorem only when the set is a slice of the hypercube (all sets of size $\frac{k}{2}$ if $k$ is even, or all sets of size $\frac{k+1}{2}$ or all sets of size $\frac{k-1}{2}$ if $k$ is odd). If equality holds in Littlewood-Offord, then all subsets of the corresponding size must have the same sum, which implies that all of the summands must be equal.
      – Kevin P. Costello
      Jun 23 '17 at 23:26

















    This is just a translation of san's beautiful argument to a geometric language, with a few short-cuts.



    By symmetry we may assume that
    $$u_1\geq u_2\geq\cdots\geq u_n\geq 0\qquad(1)$$
    If $n=2$ then the only vectors orthogonal to sign-vectors are proportional to sign-vectors themselves, so the assertion is clear. Assume henceforth that $n\geq 3$.
    Put
    $$E_u=\{\varepsilon\in\{-1,1\}^n:\ \langle\varepsilon,u\rangle = 0\}$$
    Consider the $i$'th coordinate "sign-flip", i.e., the cube symmetry $R_i$ defined by multiplying the $i$'th coordinate of a vector by $-1$:
    $$R_i(x_1,\dots,x_i,\dots,x_n)=(x_1,\dots,-x_i,\dots,x_n)$$



    There are two crucial properties of $R_i$ in connection with our problem, which are easy to check:



    1. For each $i$ such that $u_i\neq 0$, $R_i$ is an injection of $E_u$ into its complement $(E_u)^c$ in $\{-1,1\}^n$,
      and if $|E_u|=2^{n-1}$ it is a bijection.


    2. $R_i^2=\mathrm{Id}$


    Now assume that $u_3>0$. Put $\varepsilon=(-1,-1,-1,+1,\dots,+1)$.



    Case A: $\varepsilon\in E_u$.



    $R_2\varepsilon$ does not belong to $E_u$, and since $R_1$ is a bijection such that $R_1^2=\mathrm{Id}$, the sign-vector $R_1(R_2\varepsilon)$ belongs to $E_u$. That is:
    $$u_1+u_2-u_3=-\sum_{i=4}^n u_i$$
    By (1), the r.h.s. is not positive and the l.h.s. is not negative, so they both must be zero, implying that $u_1+u_2=u_3$, which (by (1) again) is impossible unless $u_1=u_2=u_3=0$, a contradiction.



    Case B: $\varepsilon\notin E_u$.



    Flipping once with $R_1$ takes $\varepsilon$ into $E_u$. Flipping again with $R_2$ takes $R_1\varepsilon$ out of $E_u$, but flipping a third time with $R_3$ takes it back, so $R_1(R_2(R_3\varepsilon))\in E_u$. Hence:
    $$u_1+u_2+u_3=-\sum_{i=4}^n u_i$$
    and we arrive at a contradiction as before.



    It follows that $u_3=0$, and so there are only two non-zero coordinates, and then
    $u_1=u_2$ as in the two-dimensional case above.
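    Not part of this answer, but the full Fact can be verified by brute force on a small grid (the helper name `orthogonal_count` is mine): by the symmetry reduction (1), nonnegative coordinates suffice, so for $n=4$ we check every integer vector with entries in $\{0,\dots,4\}$ that is orthogonal to exactly $2^{n-1}$ sign-vectors.

    ```python
    from itertools import product

    def orthogonal_count(u):
        """Number of sign vectors eps in {-1,+1}^n orthogonal to u."""
        return sum(1 for eps in product((-1, 1), repeat=len(u))
                   if sum(e * x for e, x in zip(eps, u)) == 0)

    n = 4
    for u in product(range(5), repeat=n):   # by symmetry, nonnegative coords suffice
        if any(u) and orthogonal_count(u) == 2 ** (n - 1):
            # the Fact: exactly two nonzero coordinates, equal in magnitude
            nonzero = sorted(x for x in u if x != 0)
            assert len(nonzero) == 2 and nonzero[0] == nonzero[1], u
    ```
    
    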






    • Nice translation and nice shortcuts. Maybe $E_v$ should be $E_u$.
      – san
      Jun 22 '17 at 14:57










    • Thank you. Of course, $E_u$.
      – uniquesolution
      Jun 22 '17 at 15:21














    edited Sep 1 at 0:53
    answered Jun 22 '17 at 13:55
    – uniquesolution












    Partial solution: I have made this community wiki so that anyone can finish the answer if they know how (otherwise, feel free to just create a new answer with your full solution). Let $E$ be the set of sign vectors which are orthogonal to $v$. If, for all $e\in E$, we have $e_1=e_2$, then this set must be exactly
    $$
    E=\{(1,1,\pm1,\dots,\pm1)\}\cup\{(-1,-1,\mp1,\dots,\mp1)\}
    $$
    which can be seen by the involution of multiplication by $-1$. Now $v$ is perpendicular to $(1,1,1,\dots,1)$ and $(1,1,-1,\dots,-1)$, so it is perpendicular to their sum, $(2,2,0,\dots,0)$. Thus $v_1=-v_2$. If $v_1=v_2=0$, then $E$ would not be the set described above, so we must have $|v_1|=|v_2|\neq 0$. Given the description of $E$ above, it is now easy to see that the last $n-2$ coordinates of $v$ are $0$.



    In fact, the above logic works if $e_i=e_j$ for all $e\in E$ and any indices $i,j$, and similarly if $e_i=-e_j$ for any $i,j$.



    Now it suffices to show that no such $E$ can exist which doesn't meet those restrictions. I am thinking that we should use the fact that $E$ is closed under multiplication by $-1$.
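    As a concrete illustration of the first case (a quick Python check, not part of the answer; the names are mine): for $v=(1,-1,0,0)$, the orthogonal sign vectors are exactly the slice $e_1=e_2$, and the set is indeed closed under multiplication by $-1$.

    ```python
    from itertools import product

    v = (1, -1, 0, 0)   # v_1 = -v_2, last n-2 coordinates zero
    E = {eps for eps in product((-1, 1), repeat=len(v))
         if sum(e * x for e, x in zip(eps, v)) == 0}

    # E is exactly the slice {eps : eps_1 = eps_2}, of size 2^(n-1) = 8:
    assert E == {eps for eps in product((-1, 1), repeat=4) if eps[0] == eps[1]}
    assert len(E) == 8

    # E is closed under multiplication by -1:
    assert all(tuple(-e for e in eps) in E for eps in E)
    ```
    
    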






    answered Jun 18 '17 at 23:50
    community wiki
    – ThomasGrubb

             
