# Three little things I learnt recently

In no particular order, and with no relevance whatsoever to the beginning of the year, here are three mathematical facts I learnt in recent months which might belong to the “I should have known this” category:

(1) Which finite fields $k$ have the property that there is a “square root” homomorphism
$s\ :\ (k^{\times})^2\rightarrow k^{\times},$
i.e., a group homomorphism such that $s(x)^2=x$ for all non-zero squares $x$ in $k$?

The answer is that such an $s$ exists if and only if either the characteristic $p$ of $k$ is $2$, or $-1$ is not a square in $k$ (so, for $k=\mathbf{Z}/p\mathbf{Z}$ with $p$ odd, this means that $p\equiv 3\pmod 4$).

The proof of this is an elementary exercise. In particular, the necessity of the condition, for $p$ odd, is just the same argument that works for the complex numbers: if $s$ exists and $-1$ is a square, then we have
$1=s(1)=s((-1)\times (-1))=s(-1)^2=-1,$
which is a contradiction (note that $s(-1)$ only exists because of the assumption that $-1$ is a square).
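Concretely, for $k=\mathbf{Z}/p\mathbf{Z}$ with $p\equiv 3\pmod 4$, one can take $s(x)=x^{(p+1)/4}$: this is a power map, hence a homomorphism, and for a nonzero square $x$, Euler's criterion gives $s(x)^2=x^{(p+1)/2}=x\cdot x^{(p-1)/2}=x$. A quick numerical sanity check (a sketch; the name `sqrt_hom` is mine):

```python
# For p = 3 (mod 4), the power map x -> x^((p+1)/4) mod p is a square-root
# homomorphism on the squares: it is multiplicative, and by Euler's criterion
# a nonzero square x satisfies s(x)^2 = x^((p+1)/2) = x * x^((p-1)/2) = x.
def sqrt_hom(p):
    assert p % 4 == 3
    return lambda x: pow(x, (p + 1) // 4, p)

p = 23  # 23 = 3 (mod 4)
s = sqrt_hom(p)
squares = {x * x % p for x in range(1, p)}
# s(x)^2 = x on the squares
assert all(s(x) * s(x) % p == x for x in squares)
# s is multiplicative on the squares
assert all(s(x * y % p) == s(x) * s(y) % p for x in squares for y in squares)
```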

The question, and the similarity with the real and complex cases, immediately suggests the question of determining (if possible) which other fields admit a square-root homomorphism. And, lo and behold, the first Google search reveals a nice 2012 paper by Waterhouse in the American Math. Monthly that shows that the answer is the same: if $K$ is a field of characteristic different from $2$, then $K$ admits a homomorphism
$s\ :\ (K^{\times})^2\rightarrow K^{\times},$
with $s(x)^2=x$, if and only if $-1$ is not a square in $K$.

(The argument for sufficiency is not very hard: one first checks that it is enough to find a subgroup $R$ of $K^{\times}$ such that the homomorphism
$t\, :\, R\times \{\pm 1\}\rightarrow K^{\times}$
given by $t(x,\varepsilon)=\varepsilon x$ is an isomorphism; viewing $K^{\times}/(K^{\times})^2$ as a vector space over $\mathbf{Z}/2\mathbf{Z}$, such a subgroup $R$ is obtained as the pre-image in $K^{\times}$ of a complementary subspace to the line generated by $(-1)(K^\times)^2$, which is a one-dimensional space because $-1$ is assumed to not be a square.)

It seems unlikely that such a basic fact would not have been stated before 2012, but Waterhouse gives no previous reference (and I don’t know of any myself!)

(2) While reviewing the Polymath8 paper, I learnt the following identity of Lommel for Bessel functions (see page 135 of Watson’s treatise):
$\int_0^u tJ_{\nu}(t)^2dt=\frac{1}{2}u^2\Bigl(J_{\nu}(u)^2-J_{\nu-1}(u)J_{\nu+1}(u)\Bigr)$
where $J_{\mu}$ is the Bessel function of the first kind. This is used to find the optimal weight in the original Goldston-Pintz-Yıldırım argument (a computation first done by B. Conrey, though it was apparently unpublished until a recent paper of Farkas, Pintz and Révész.)
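Lommel's identity is easy to test numerically; here is a small sanity check using the power series of $J_n$ for integer orders and a midpoint-rule quadrature (the function names `J`, `lhs`, `rhs` are mine, and for $\nu=0$ the check uses $J_{-1}=-J_1$):

```python
import math

def J(n, t, terms=30):
    # Bessel function of the first kind of integer order n >= 0,
    # via its power series (converges very fast for moderate t)
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + n))
               * (t / 2) ** (2 * k + n) for k in range(terms))

def lhs(nu, u, steps=4000):
    # midpoint-rule approximation of the integral of t*J_nu(t)^2 over [0, u]
    h = u / steps
    return h * sum((i + 0.5) * h * J(nu, (i + 0.5) * h) ** 2
                   for i in range(steps))

def rhs(nu, u):
    # Lommel: (u^2/2) * (J_nu(u)^2 - J_{nu-1}(u) * J_{nu+1}(u)),
    # with J_{-1} = -J_1 when nu = 0
    j_prev = -J(1, u) if nu == 0 else J(nu - 1, u)
    return 0.5 * u ** 2 * (J(nu, u) ** 2 - j_prev * J(nu + 1, u))

for nu in (0, 1, 2):
    assert abs(lhs(nu, 3.0) - rhs(nu, 3.0)) < 1e-5
```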

Rather few “exact” indefinite integrals of functions built from Bessel functions (or related functions) are known, and again I should probably have heard of this result before. What could an analogue for Kloosterman sums be?

(3) In my recent paper with G. Ricotta (extending to automorphic forms on all $GL(n)$ the type of central limit theorem found previously in a joint paper with É. Fouvry, S. Ganguly and Ph. Michel for Hecke eigenvalues of classical modular forms in arithmetic progressions), we use the identity
$\sum_{k\geq 0}\binom{N-1+k}{k}^2 T^k=\frac{P_N(T)}{(1-T)^{2N-1}}$
where $N\geq 1$ is a fixed integer and
$P_N(T)=\sum_{k=0}^{N-1}\binom{N-1}{k}^2T^k.$
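The identity can be checked coefficient by coefficient as an equality of formal power series, expanding the right-hand side with the binomial series $(1-T)^{-(2N-1)}=\sum_{j\geq 0}\binom{2N-2+j}{j}T^j$. A minimal Python sketch (function names are mine):

```python
from math import comb

def lhs_coeff(N, k):
    # coefficient of T^k in sum_{k>=0} binom(N-1+k, k)^2 T^k
    return comb(N - 1 + k, k) ** 2

def rhs_coeff(N, k):
    # coefficient of T^k in P_N(T) * (1-T)^{-(2N-1)}, where
    # P_N(T) = sum_{j=0}^{N-1} binom(N-1, j)^2 T^j and
    # (1-T)^{-(2N-1)} = sum_j binom(2N-2+j, j) T^j
    return sum(comb(N - 1, j) ** 2 * comb(2 * N - 2 + k - j, k - j)
               for j in range(min(N - 1, k) + 1))

# verify the first 25 coefficients for small N
for N in range(1, 8):
    assert all(lhs_coeff(N, k) == rhs_coeff(N, k) for k in range(25))
```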

This is probably well-known, but we didn’t know it before. Our process of finding and checking this formula is certainly rather typical: small values of $N$ were computed by hand (or using a computer algebra system), leading quickly to a general conjecture, namely the identity above. Mathematica can in fact check that this is correct (in the sense of evaluating the left-hand side to a form obviously equivalent to the right-hand side), but as usual it gives no clue as to why it is true (and in particular, how difficult or deep the result is!) However, a bit of looking around, and guessing that this had to do with hypergeometric functions (because $P_N$ is close to a Legendre polynomial, which is a special case of a hypergeometric function), reveals that we are in fact dealing with about the simplest identity for hypergeometric functions, going back to Euler: precisely, the formula is identical with the transformation
${}_2F_1(-(N-1),-(N-1);1;T)=(1-T)^{2N-1}{}_2F_1(N,N;1;T),$
where
${}_2F_1(\alpha,\beta;1;z)=\sum_{k\geq 0}\frac{\alpha (\alpha+1)\cdots (\alpha+k-1)\,\beta(\beta+1)\cdots (\beta+k-1)}{(k!)^2}z^k$
is (a special case of) the Gauss hypergeometric function.
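For the record, the general statement being specialized is Euler's transformation (a standard identity, recalled here for convenience):

```latex
{}_2F_1(a,b;c;z) = (1-z)^{c-a-b}\,{}_2F_1(c-a,\,c-b;\,c;\,z),
```

and taking $a=b=-(N-1)$ and $c=1$ gives $c-a-b=2N-1$ and $c-a=c-b=N$, which is exactly the transformation displayed above.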

1. Fact (2) is actually the Bessel function analogue of the Christoffel-Darboux formula for orthogonal polynomials (i.e., it’s an explicit formula for a reproducing kernel, with practically the same form/explanation as Christoffel-Darboux). More generally, you can integrate $t J_\nu(t) J_\nu(\alpha t)$ for any constant $\alpha$, and not just $\alpha=1$. It’s helpful to think about this in terms of the radial Fourier transform in $n$ dimensions with $\nu = n/2-1$. The zonal spherical functions are, up to scaling, $J_\nu(\alpha t)/t^\nu$ for different values of $\alpha$ and the radial measure is $t^{n-1} \, dt$. If you rewrite the integrand in these terms, you get the product of two zonal spherical functions and the radial measure. Of course the Bessel function formula is slightly more general, since it does not require $n$ to be an integer, but the orthogonality properties and proof remain the same.