Is there a combinatorial “square-root cancellation” philosophy?

As an analytic number theorist, I live among oscillating sums, from

\sum_{n\leq x}{\mu(n)}

where μ is the Möbius function, to

\sum_{x\ mod\ p,\ x\neq 0}{\exp(2i\pi (x+\bar{x})/p)}

(the famed Kloosterman sum, where \bar{x} denotes the inverse of x modulo p; one of the rare mathematical objects to have given its name to an infectious disease). In these finite sums, we add a finite number of complex numbers (say N), each of modulus at most 1, and we are in situations where the arguments of these complex numbers vary “quite a lot”.

For many such sums, a very reliable rule of thumb (at least as far as guessing what should be true is concerned) is that the sum should be of order of magnitude roughly the square root of the number of terms. This is easily justified on probabilistic grounds, and it is further confirmed by a remarkable array of rigorous results for sums of many different origins. It is also often incredibly deep: for the first sum above, a bound of this type is equivalent to the Riemann Hypothesis, and for the second, a bound of the right shape was only proved by A. Weil in the 1940s, as a consequence of the Riemann Hypothesis for curves over finite fields. It is also quite easy to write down oscillating sums where this type of cancellation would lead to a proof of the Twin Prime Conjecture, for instance.
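
As a quick numerical illustration of this rule of thumb, here is a minimal Python sketch (the primes chosen are arbitrary) computing the Kloosterman sums above and comparing them with Weil’s bound 2√p:

# Minimal sketch: square-root cancellation for Kloosterman sums.
# S(p) = sum over x = 1, ..., p-1 of exp(2*i*pi*(x + xbar)/p), where
# xbar is the inverse of x modulo p; Weil's bound gives |S(p)| <= 2*sqrt(p).
import cmath
import math

def kloosterman(p):
    """Compute the Kloosterman sum for a prime p over the invertible residues."""
    total = 0j
    for x in range(1, p):
        xbar = pow(x, -1, p)          # inverse of x modulo p (Python 3.8+)
        total += cmath.exp(2j * math.pi * (x + xbar) / p)
    return total

for p in (101, 1009, 10007):          # arbitrary primes
    print(p, abs(kloosterman(p)), 2 * math.sqrt(p))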

In any case, if I am confronted with a sum as above, I don’t feel afraid, and I can often say something intelligent.

However, one can also find many oscillating, or at least alternating, sums coming from combinatorics in which the sizes of the terms are by no means bounded, but grow as the length of the sum increases… and yet the total sum remains controlled in magnitude. The most trivial example is

\sum_{j=0}^n{(-1)^j\binom{n}{j}}

which is 0 (for n at least 1) by the binomial theorem, whereas the size of the largest summand is exponential in n.
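
Written out, the binomial theorem at -1 makes the contrast explicit (the asymptotic for the middle binomial coefficient follows from Stirling’s formula):

\sum_{j=0}^n{(-1)^j\binom{n}{j}}=(1-1)^n=0 \quad (n\geq 1), \qquad \text{while}\qquad \max_{0\leq j\leq n}\binom{n}{j}=\binom{n}{\lfloor n/2\rfloor}\sim 2^n\sqrt{2/(\pi n)}.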

Now, when I encounter such a sum, I do feel scared; if I suspect the sum is small, I am quite at a loss to know why that could be true, and even more so when it comes to trying to prove that this is the case.

But since I have reasons to suspect that some combinatorialists read this blog, maybe some enlightenment will come: is there a general rule that can lead to a serious guess that a “combinatorial-looking” alternating sum is much smaller than its largest term (say, of polynomial size, compared with possibly exponential terms)? If yes, what are the most efficient and systematic ways of trying to confirm such a guess? And, out of curiosity: what are the most remarkable as-yet-unproved statements of this type?


8 thoughts on “Is there a combinatorial “square-root cancellation” philosophy?”

  1. Well, for the example sum you give there’s a “combinatorial reason” that the sum is zero — there’s a bijection between even-sized subsets of [n] and odd-sized subsets of [n].

    More generally, a lot of combinatorial alternating sums can be viewed as the difference between the number of even-sized objects of a certain type and the number of odd-sized objects. So perhaps the right heuristic is that

    sum_{k=0}^n f(n,k) (-1)^k

    should be, in order of magnitude, no larger than the square root of

    sum_{k=0}^n f(n,k).

    The idea here is roughly as follows. Say f(n,k) is the number of objects of “size” n and “weight” k. For example, f(n,k) = n choose k counts the number of subsets of [n] with k elements. Or we could have f(n,k) be the number of partitions of n with k parts. (I’m not sure if this particular example actually has the properties you’re asking about.) Then the alternating sum is the difference between the number of even-weight objects and the number of odd-weight objects, while the plain sum counts all the objects of size n. But naively we expect there to be about the same number of odd-weight and even-weight objects of size n in some “generic” combinatorial problem, and in particular we expect that if we picked an object of size n uniformly at random it would have probability 1/2 of having odd weight and 1/2 of having even weight. (This is of course for n large.) So on probabilistic grounds similar to your intuition on oscillating sums, the alternating sum should be of order no larger than the square root of the plain sum.
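
    As a small sanity check of this heuristic, here is a rough Python sketch; it uses f(n,k) = C(n,k) (where the alternating sum is exactly 0, so the check is trivial), but any other table f(n,k) can be plugged in:

    # Compare |sum_k (-1)^k f(n,k)| with sqrt(sum_k f(n,k)).
    from math import comb, sqrt

    def check(f, n):
        alternating = abs(sum((-1) ** k * f(n, k) for k in range(n + 1)))
        total = sum(f(n, k) for k in range(n + 1))
        return alternating, sqrt(total)

    for n in (10, 20, 40):
        print(n, check(comb, n))      # prints (0, 2**(n/2)) for binomial coefficients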

    There might also be a generating-function approach to your question — given a bivariate generating function

    F(x,y) = sum_{k=0}^infty sum_{n=0}^infty f(n,k) x^n y^k

    it turns out that F(x,1) is the generating function (in the variable x) of the plain sums sum_k f(n,k), while F(x,-1) is the generating function of the alternating sums sum_k (-1)^k f(n,k).
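
    For example, with f(n,k) = C(n,k) the inner sum over k is (1+y)^n, so

    F(x,y) = sum_{n=0}^infty (x(1+y))^n = 1/(1 - x(1+y)),

    and F(x,1) = 1/(1-2x) generates the total counts 2^n, while F(x,-1) = 1: the alternating sums vanish for every n >= 1.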

    I haven’t thought about whether this is actually something that happens in real-life examples, but it’s a nice question!

    That’s a good idea indeed, but there is still the fact that, although the numbers of even and odd objects could be expected to be comparable, the “square root” of the total number of objects would remain exponential in the main parameter n, whereas the “combinatorial cancellation” often leads to a polynomial quantity, and this is what is most surprising. (E.g., there are exponentially many subsets of {1,…,n}, but, for instance, among those with at least 3 elements, the difference between the number of subsets of even cardinality and of odd cardinality is polynomial in n.)

    I actually have some specific examples that I am currently looking at, where a similar combinatorial cancellation seems to occur, which I cannot explain except by computing things explicitly and seeing that the sums are much smaller than could be expected (though I also have other a priori reasons to expect the answer I get, or something close to it). I will write a post about those sums once I have gone a bit further in analyzing them.

  3. I was actually thinking something similar once I started to look at some examples. A couple more I thought of:

    – there are binomial coefficient sums that are equal to the Fibonacci numbers: C(n, 0) + C(n-1, 1) + … = F_{n+1}. But C(n, 0) – C(n-1, 1) + … (the corresponding alternating sum) is always 1, 0, or -1. (The proof I was able to find is not particularly elegant; basically I played around with generating functions for a while.)

    – partitions into distinct parts: Euler’s pentagonal number theorem states that the number of partitions of n into distinct parts with an even number of parts, minus the number of partitions of n into distinct parts with an odd number of parts, is 1, 0, or -1.

    – the example I alluded to of partitions in general. Some numerical experimentation shows the following (a short sketch of the computation appears right after this list). Let p(n) be the number of partitions of n, and let q(n) be the difference between the number of partitions with an even number of parts and the number with an odd number of parts. Then q(n)^2*sqrt(n)/p(n) appears to approach a constant near 0.35 as n goes to infinity, so q(n) grows more slowly than the square root of p(n). (This may be well known to people who know more about partitions than I do. It also may be false.)
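
    Here is a minimal sketch of that computation in Python, using the standard recurrence for the number of partitions of n into exactly k parts (the cutoff N = 200 is arbitrary):

    # parts[n][k] = number of partitions of n into exactly k parts, computed
    # with the recurrence p(n, k) = p(n-1, k-1) + p(n-k, k).
    from math import sqrt

    N = 200
    parts = [[0] * (N + 1) for _ in range(N + 1)]
    parts[0][0] = 1
    for n in range(1, N + 1):
        for k in range(1, n + 1):
            parts[n][k] = parts[n - 1][k - 1] + parts[n - k][k]

    for n in (50, 100, 150, 200):
        p_n = sum(parts[n])                                       # p(n)
        q_n = sum((-1) ** k * c for k, c in enumerate(parts[n]))  # even minus odd
        print(n, q_n ** 2 * sqrt(n) / p_n)                        # the ratio described above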

    In your example of sets of cardinality at least 3, that follows from the fact that the alternating sum of C(n, k), k going from 0 to n, is zero: you are just removing the k = 0, k = 1, and k = 2 terms from that sum, and those terms are polynomials in n.

    My intuition is that the functions being summed in these cases (binomial coefficients, numbers of partitions into certain numbers of parts) are “smooth” (in some informal sense), and that this keeps the alternating sums from getting too out of hand. It might be possible to use the Euler–Maclaurin summation formula to show something in general here, although I haven’t tried.

    The idea that there should be a nice class of “smooth” functions in a suitable sense seems very appealing…

    This is what happens in many questions concerning “ordinary” oscillating sums (the case of algebraic sums over finite fields being the one where the theory is most striking).

    It sometimes happens that such a sum computes the Euler characteristic of something, and that is known without doing the sum…

  6. For an alternating sum, unimodality of the terms gives cancellation, but not the huge amount of cancellation that you are seeking, only that the size of the sum is bounded by the size of the largest term.

    An example would be the relation between the counting functions $\pi_k(x)$ of the squarefree $n \leq x$ that have exactly $k$ prime factors and the summatory function $M(x)$ of the Möbius function. Unimodality in $k$ of $\pi_k(x)$ for each fixed $x$ was conjectured by P. Erdős in 1948 and proved by M. Balazard around 1990. This easily yields $M(x) = O(x/\sqrt{\log\log x})$ by an old, elementary estimate of Hardy and Ramanujan. Unfortunately it is not a shortcut to the PNT, because Balazard’s proof is difficult.
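
    A minimal numerical sketch of these definitions in Python (it only illustrates the identity $M(x) = \sum_k (-1)^k \pi_k(x)$ and the unimodality for a small value of $x$, not the theorems; $x = 10^4$ is arbitrary):

    # Sketch: pi_k(x) counts squarefree n <= x with exactly k prime factors
    # (n = 1 counts for k = 0), and M(x) is the alternating sum over k.
    def smallest_prime_factors(x):
        """Sieve of smallest prime factors for 2..x."""
        spf = list(range(x + 1))
        for i in range(2, int(x ** 0.5) + 1):
            if spf[i] == i:                          # i is prime
                for j in range(i * i, x + 1, i):
                    if spf[j] == j:
                        spf[j] = i
        return spf

    def mu_and_omega(n, spf):
        """Return (mu(n), number of distinct prime factors of n)."""
        k = 0
        while n > 1:
            p = spf[n]
            n //= p
            if n % p == 0:                           # p^2 divides the original n
                return 0, k
            k += 1
        return (-1) ** k, k

    x = 10 ** 4
    spf = smallest_prime_factors(x)
    pi_k, M = {}, 0
    for n in range(1, x + 1):
        mu, k = mu_and_omega(n, spf)
        M += mu
        if mu != 0:                                  # n is squarefree
            pi_k[k] = pi_k.get(k, 0) + 1
    print(sorted(pi_k.items()))                      # should rise then fall in k
    print("M(x) =", M, "=", sum((-1) ** k * c for k, c in pi_k.items()))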

  7. I wouldn’t expect that sums like the one in Comment 6, which are related to the distribution of prime numbers, would behave as nicely as more elementary combinatorial expressions.
    In particular, the unimodality gives only a weak form of the PNT, compared both with what is known and with what is expected.

  8. I would phrase it as “the number of terms is the square of the value of the sum”, to make it more direct.

    1) Given a sum, cancellation may occur for two reasons:
    – because of the “meaning” of the terms (symmetries of the problem);
    – because the terms ARE THERE.

    Let us focus on the second reason as a rather general principle:
    1.a) Think of a magic square built up cyclically.
    Adding the same t terms t times and grouping cyclically gives t groups of the same value.
    Consider equal group values as a form of cancellation (the entropy goes down).
    This example involves only positive values.
    1.b) So, up to reorganizing, every term meets every term and the process is generic.

    LITTLE PRINCIPLE: adding everybody to everybody is generic and offers good cancellation.

    2) Other possible “square roots”:
    2.a) Brownian motion in the plane (a drunkard in the Manhattan metric) strays by about sqrt(time).
    This seems to be no more than the probabilistic assertion you made, yet I have tried, without success, to find a convincing heuristic (physical) argument along the lines of “the walk is painting the surface it covers”.
    2.b) Discrepancy could be a lead (I cannot comment on it); here is a reference:
    – The Discrepancy Method (Bernard Chazelle), Cambridge University Press
    http://books.google.fr/books?id=dmOPmEh6LdYC&pg=PA1&lpg=PA1&dq=combinatorial+Discrepancy+Chazelles&source=web&ots=CBIeM1Aty8&sig=XFMr-TmJLzU9EBGXhpF2PJHFf8s&hl=en&sa=X&oi=book_result&resnum=1&ct=result#PPA2,M1

    Hope it helps.
