# The fourth moment of Kloosterman sums

One of my favorite computations is that of the fourth moment of Kloosterman sums:

$M_4=\sum_{1\leq a,b\leq p-1}{|S(a,b;p)|^4}=(p-1)(2p^3-3p^2-3p-1),$

where

$S(a,b;p)=\sum_{1\leq x\leq p-1}{\exp(2i\pi (ax+b\bar{x})/p)},\quad\text{where}\quad x\bar{x}\equiv 1\text{ mod } p.$
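For what it's worth, the formula is easy to check numerically for small primes; here is a quick brute-force sketch (the function names are mine, chosen for this illustration):

```python
# Brute-force check of the fourth moment formula for small primes.
# S(a,b;p) is computed directly from its definition, using Python's
# pow(x, -1, p) (Python 3.8+) for the inverse of x modulo p.
import cmath

def kloosterman(a, b, p):
    """S(a,b;p) = sum over x in (Z/pZ)^* of exp(2*pi*i*(a*x + b*x^{-1})/p)."""
    return sum(cmath.exp(2j * cmath.pi * ((a * x + b * pow(x, -1, p)) % p) / p)
               for x in range(1, p))

def fourth_moment(p):
    """M4 = sum over 1 <= a, b <= p-1 of |S(a,b;p)|^4."""
    return sum(abs(kloosterman(a, b, p)) ** 4
               for a in range(1, p) for b in range(1, p))

for p in [3, 5, 7, 11]:
    predicted = (p - 1) * (2 * p**3 - 3 * p**2 - 3 * p - 1)
    print(p, round(fourth_moment(p)), predicted)
```

For $p=3$, for instance, both sides come out to $2\cdot 17 = 34$.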

This computation was (almost) first performed, with spectacular consequences, by H.D. Kloosterman himself in 1927, as a crucial step towards proving an upper bound for his sums that was good enough for his application to representations of integers by integral positive definite quadratic forms in four variables (I recommend reading at least the introduction of this paper: it is strikingly modern).

I say almost because, after checking his paper again, I realized that he only obtained the right order of magnitude for $M_4$, and not the exact formula.

The standard reference I had been using (including for the exam of a graduate course I taught a while ago…) was in Iwaniec’s delightful book on classical modular forms. (Kloosterman sums appear there because, as in fact already noticed by Poincaré in 1912, they occur in the formulas for Fourier coefficients of Poincaré series…) But while typing the result for my lecture notes of my new course on sums over finite fields, I worked out a different argument than the one in Iwaniec’s book (different but, it turns out, rather closer to Kloosterman’s own…).

Roughly, one first quickly reduces (using orthogonality of additive characters) to computing the number of solutions of the equations

$x_1+x_2=y_1+y_2,\quad 1/x_1+1/x_2=1/y_1+1/y_2,\quad x_i,\ y_i\in \mathbf{F}_p^{\times}.$
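This reduction can also be checked numerically. In the sketch below, the bookkeeping of the degenerate terms is my own: orthogonality over all $a,b$ modulo $p$ gives $p^2N$ where $N$ is the number of solutions, and one then subtracts the contributions of $a=0$ or $b=0$, where $S(a,0;p)=S(0,b;p)=-1$ and $S(0,0;p)=p-1$.

```python
import cmath

def kloosterman(a, b, p):
    """S(a,b;p) computed directly from its definition."""
    return sum(cmath.exp(2j * cmath.pi * ((a * x + b * pow(x, -1, p)) % p) / p)
               for x in range(1, p))

def count_solutions(p):
    """Number of (x1,x2,y1,y2) in ((Z/pZ)^*)^4 with
    x1+x2 = y1+y2 and 1/x1+1/x2 = 1/y1+1/y2 (mod p)."""
    inv = {x: pow(x, -1, p) for x in range(1, p)}
    count = 0
    for x1 in range(1, p):
        for x2 in range(1, p):
            for y1 in range(1, p):
                for y2 in range(1, p):
                    if ((x1 + x2 - y1 - y2) % p == 0 and
                            (inv[x1] + inv[x2] - inv[y1] - inv[y2]) % p == 0):
                        count += 1
    return count

# M4 should equal p^2 * N minus the degenerate terms with a = 0 or b = 0.
for p in [3, 5, 7]:
    N = count_solutions(p)
    m4 = sum(abs(kloosterman(a, b, p)) ** 4
             for a in range(1, p) for b in range(1, p))
    print(p, round(m4), p * p * N - (p - 1)**4 - 2 * (p - 1))
```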

This can be computed quite directly, as Iwaniec does, but one can also observe that there are obvious solutions

$(x_1,x_2,x_1,x_2),\quad (x_1,x_2,x_2,x_1),$

and wonder which others can exist. It is then fairly natural to try to see whether knowing

$(x_1+x_2,1/x_1+1/x_2),$

is enough to recover the pair $(x_1,x_2)$, up to permutation. This will be the case if we can compute the value of the product $x_1x_2$ from the two symmetric quantities above (this is the theory of symmetric functions, in a rather trivial case). Now, observe the identity

$x_1x_2=\frac{(x_1+x_2)}{(x_1^{-1}+x_2^{-1})},$

which gives what we want, provided the denominator is non-zero. And indeed it may vanish, and it does so precisely for the extra solutions

$(x_1,-x_1,y_1,-y_1)$

of the original equations… The argument therefore shows that there are no solutions other than those in these three families, and after figuring out their intersections, the formula for the fourth moment follows. (Details are in my ongoing notes already mentioned above…)

This computation may seem desperately low-brow; however, as I discuss briefly in Section 6 of my most recent survey on applications of the Riemann Hypothesis over finite fields (I tend to like writing about this, I must confess…), it can be interpreted, via the “Larsen alternative”, as the crucial step in proving the vertical (or average) Sato-Tate law for Kloosterman sums: if we write

$S(1,a;p)=2\sqrt{p}\cos \theta_{a,p},\quad\quad \theta_{a,p}\in [0,\pi],$

then the collection of angles

$\{\theta_{a,p}\}_{1\leq a\leq p-1},$

becomes equidistributed with respect to the Sato-Tate measure

$\mu=\frac{2}{\pi}\sin^2\theta d\theta,$

as p goes to infinity…
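One can get a feeling for this equidistribution already for a single moderately large prime. The sketch below (my own illustration, with a clamping step to guard against floating-point drift just outside the Weil bound $|S|\leq 2\sqrt{p}$) compares the empirical proportion of angles in a few intervals with the Sato-Tate mass of those intervals:

```python
# Numerical illustration of the vertical Sato-Tate law for one prime p:
# compare the empirical distribution of the angles theta_{a,p} with the
# Sato-Tate measure mu = (2/pi) sin^2(theta) d(theta) on test intervals.
import math

def angle(a, p):
    """theta_{a,p} in [0, pi] with S(1,a;p) = 2*sqrt(p)*cos(theta_{a,p})."""
    # S(1,a;p) is real (pairing x with -x conjugates each term).
    s = sum(math.cos(2 * math.pi * ((x + a * pow(x, -1, p)) % p) / p)
            for x in range(1, p))
    c = max(-1.0, min(1.0, s / (2 * math.sqrt(p))))  # Weil bound: |c| <= 1
    return math.acos(c)

def sato_tate_mass(alpha, beta):
    """mu([alpha, beta]), using the antiderivative (t - sin t cos t)/pi."""
    F = lambda t: (t - math.sin(t) * math.cos(t)) / math.pi
    return F(beta) - F(alpha)

p = 997
angles = [angle(a, p) for a in range(1, p)]
for (alpha, beta) in [(0, math.pi / 2), (math.pi / 4, 3 * math.pi / 4)]:
    empirical = sum(alpha <= t <= beta for t in angles) / len(angles)
    print(round(empirical, 3), round(sato_tate_mass(alpha, beta), 3))
```

For instance, $\mu([0,\pi/2])=1/2$, and the empirical proportion should be close to that already for $p$ around a thousand.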

[Update (27.2.2010): thanks to Ke Gong for sending some useful typographical corrections to the notes.]