As an analytic number theorist, I live among oscillating sums, from
$$\sum_{n\le N}\mu(n),$$
where μ is the Möbius function, to
$$S(a,b;p)=\sum_{1\le x\le p-1}e\Bigl(\frac{ax+b\bar{x}}{p}\Bigr),\qquad e(z)=e^{2i\pi z},\quad x\bar{x}\equiv 1\pmod p$$
(the famed Kloosterman sum, one of the rare mathematical objects to have given its name to an infectious disease). In these finite sums, we add a finite number of complex numbers (say N), each of modulus at most 1, and we are in situations where the arguments of the complex numbers vary “quite a lot”.
For many such sums, a very reliable rule of thumb (at least as far as guessing what should be true) is that the sum should be of order of magnitude roughly given by the square root of the number of terms. This is easily justified on probabilistic grounds, and it is further confirmed by a remarkable array of rigorous results with sums of many different origins. It is also often incredibly deep: for the first sum above, a result of this type is equivalent to the Riemann Hypothesis, and for the second, a bound of the right shape was only proved by A. Weil in the 1940s as a consequence of the Riemann Hypothesis for curves over finite fields. It is also quite easy to write down oscillating sums where this type of cancellation would lead to a proof of the Twin Prime Conjecture, for instance.
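For what it’s worth, the square-root heuristic is easy to probe numerically. The following Python sketch (my own illustration; the sieve, the cutoff N = 10^5, and the prime p = 1009 are arbitrary choices, not from this post) computes the Mertens sum ∑_{n≤N} μ(n) and a Kloosterman sum, and compares each with the square root of its number of terms:

```python
import cmath
import math

def mobius_sieve(limit):
    """Sieve the Möbius function mu(1), ..., mu(limit)."""
    mu = [1] * (limit + 1)
    is_prime = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if is_prime[p]:
            for m in range(2 * p, limit + 1, p):
                is_prime[m] = False
            for m in range(p, limit + 1, p):
                mu[m] *= -1          # one sign flip per prime divisor
            for m in range(p * p, limit + 1, p * p):
                mu[m] = 0            # not squarefree
    return mu

def kloosterman(a, b, p):
    """S(a, b; p): sum of e((a*x + b*xbar)/p) over invertible x mod p."""
    return sum(cmath.exp(2j * math.pi * ((a * x + b * pow(x, -1, p)) % p) / p)
               for x in range(1, p))

N = 10**5
mu = mobius_sieve(N)
M = sum(mu[1:])                   # the Mertens function M(N)
print(M, math.isqrt(N))           # |M(N)| is near sqrt(N), far below N

p = 1009                          # a prime
S = kloosterman(1, 1, p)
print(abs(S), 2 * math.sqrt(p))   # Weil's bound: |S(1,1;p)| <= 2*sqrt(p)
```

For the Kloosterman sum the bound is not just empirical: Weil’s theorem guarantees |S(a,b;p)| ≤ 2√p for any prime p not dividing ab, so the last comparison always comes out in the heuristic’s favour.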
In any case, if I am confronted with a sum as above, I don’t feel afraid, and I can often say something intelligent.
However, one can also find many oscillating, or at least alternating, sums coming from combinatorics where the sizes of the terms are by no means bounded, but grow as the length of the sum increases… and yet the total sum remains controlled in magnitude. The most trivial example is
$$\sum_{k=0}^{n}(-1)^k\binom{n}{k},$$
which is 0 (for n at least 1) by the binomial theorem, whereas the size of the largest summand is exponential in terms of n.
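The contrast can be checked in a couple of lines of Python (a throwaway illustration; n = 50 is an arbitrary choice):

```python
from math import comb

n = 50
terms = [(-1) ** k * comb(n, k) for k in range(n + 1)]
print(sum(terms))                    # exactly 0, by the binomial theorem
print(max(abs(t) for t in terms))    # comb(50, 25): about 1.26 * 10**14
```

The largest term is the central binomial coefficient, of size roughly 2^n/√n, yet the total collapses to zero exactly.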
Now, when I encounter such a sum, I do feel scared; if I suspect the sum is small, I am quite at a loss to know why that could be true, and even more so when it comes to trying to prove that this is the case.
But since I have reasons to suspect that some combinatorialists read this blog, perhaps some enlightenment will come: is there a general rule that can lead to a serious guess that a “combinatorial looking” alternating sum is much smaller than the largest term (say, of polynomial size, compared with possibly exponential terms)? If yes, what are the most efficient and systematic means of trying to confirm such a guess? And, out of curiosity: what are the most remarkable yet-unproved statements of this type?