While working on a recent paper, I’ve had occasion to browse a bit through Watson’s book on Bessel functions. This was first to look for references for various facts, and then, lazily and more out of curiosity, to be on the lookout for some of these “more recondite investigations” that English books of analysis of the time (1922) seem to enjoy.

I thus learnt that integral equations had a sulfurous reputation back then:

[…] On the other hand, researches based on the theory of integral equations are liable to give rise to uneasy feelings in the mind of the ultra-orthodox mathematician. (p. 578)

[Amusing typographical note: it’s not clear if Watson wants to spell “ultraorthodox” or “ultra-orthodox”: there is a hyphenated end-of-line just at the right place to make this ambiguous; or is there some accepted typographical rule to distinguish between the two possibilities?]

Although integral equations are presumably not going to create much unease nowadays, that does not mean that orthodoxy has relented. Somewhere in Godement’s opinionated course of analysis (the perfect gift to annoy your mathematically-minded neo-con nephew), there is a rather stern warning against exposing young minds to tables of integrals of Bessel functions. Presumably, all such functions must be understood only as matrix coefficients of representations of *SL(2)* (except when it is clearly better to see them as having to do with *D*-modules.)

There’s of course a lot of truth in thinking this way, but personally I feel that sometimes one should enjoy mathematics outside of the big picture. Here are two fun facts about Bessel functions that I didn’t know a few days ago. Precisely, they concern the *J* Bessel functions, which are defined either as integrals

$$J_\nu(x)=\frac{1}{\pi}\int_0^{\pi}\cos(\nu\theta-x\sin\theta)\,d\theta-\frac{\sin(\nu\pi)}{\pi}\int_0^{\infty}e^{-\nu t-x\sinh t}\,dt,$$

or as power series

$$J_\nu(x)=\sum_{m\geq 0}\frac{(-1)^m}{m!\,\Gamma(m+\nu+1)}\Big(\frac{x}{2}\Big)^{2m+\nu}.$$

For our purpose, let’s assume that *ν* is a non-negative real number and *x* also (of course these functions extend to much larger domains).
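
The two definitions can be compared numerically. Here is a small sketch in plain Python (the function names are mine; the trapezoid rule is very accurate here because the integrand extends to a smooth periodic function), checking the power series against Bessel’s integral for integer order:

```python
import math

def j_series(nu, x, terms=60):
    """J_nu(x) from the power series:
    sum over m >= 0 of (-1)^m / (m! * Gamma(m + nu + 1)) * (x/2)^(2m + nu)."""
    return sum((-1) ** m / (math.factorial(m) * math.gamma(m + nu + 1))
               * (x / 2) ** (2 * m + nu) for m in range(terms))

def j_integral(n, x, steps=4000):
    """Bessel's integral (1/pi) * int_0^pi cos(n*t - x*sin(t)) dt (integer n only;
    for non-integer order the definition has an extra, rapidly decaying term)."""
    h = math.pi / steps
    f = lambda t: math.cos(n * t - x * math.sin(t))
    s = 0.5 * (f(0.0) + f(math.pi))
    for k in range(1, steps):
        s += f(k * h)
    return s * h / math.pi

print(j_series(0, 1.0), j_integral(0, 1.0))  # both ≈ 0.7651976866
```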

(1) First fact: what is the size of

$$J_\nu(\nu)\,?$$

This is a type of uniformity question which is often quite important in analytic number theory. The answer (which can be found by the stationary phase method) is due to Cauchy:

$$J_\nu(\nu)\sim \frac{\Gamma(1/3)}{2^{2/3}\,3^{1/6}\,\pi}\,\nu^{-1/3}$$

as *ν* tends to infinity. (Page 231 in Watson’s book.)
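
One can watch this asymptotic kick in numerically. A sketch in plain Python (`j_bessel` is my own summation of the power series, with a recursive term update to avoid overflowing intermediate powers; Cauchy’s constant is about 0.4473):

```python
import math

def j_bessel(nu, x):
    """J_nu(x) by summing the power series, updating
    term_{m+1} = -term_m * (x/2)^2 / ((m+1)*(m+nu+1)) to avoid huge intermediates."""
    term = (x / 2) ** nu / math.gamma(nu + 1)
    s, m = term, 0
    while abs(term) > 1e-16 * (1 + abs(s)) and m < 10000:
        term *= -((x / 2) ** 2) / ((m + 1) * (m + nu + 1))
        s += term
        m += 1
    return s

# Cauchy: J_nu(nu) ~ Gamma(1/3) / (2^(2/3) * 3^(1/6) * pi) * nu^(-1/3)
c = math.gamma(1 / 3) / (2 ** (2 / 3) * 3 ** (1 / 6) * math.pi)

for nu in (5, 10, 20, 40):
    # scaled values should approach the constant c as nu grows
    print(nu, j_bessel(nu, nu) * nu ** (1 / 3), c)
```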

(2) Second fact: one can try to represent functions by series of Bessel functions in various ways. If we consider the Schlömilch series for *ν=0*, namely

$$f(x)=\frac{1}{2}a_0+\sum_{m\geq 1}a_m J_0(mx),$$

something funny happens: although every function *f* which is regular enough on the closed interval *[0,π]* can be represented in this manner for suitable coefficients $a_m$, and although one can even write formulas for these coefficients, namely

$$a_m=\frac{2}{\pi}\int_0^{\pi}\Big(f(0)+u\int_0^{\pi/2}f'(u\sin\theta)\,d\theta\Big)\cos(mu)\,du,\quad m\geq 0,$$

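These formulas can be sanity-checked numerically. The sketch below (all helper names and quadrature parameters are mine; plain trapezoid rule throughout) computes the coefficients for the sample function $f(x)=\cos x$ via the auxiliary function $g(u)=f(0)+u\int_0^{\pi/2}f'(u\sin\theta)\,d\theta$, of which the $a_m$ are the Fourier cosine coefficients, and then compares the truncated series $\frac{1}{2}a_0+\sum a_m J_0(mx)$ with $f$:

```python
import math

def trap(fn, a, b, n):
    """Composite trapezoid rule for the integral of fn over [a, b]."""
    h = (b - a) / n
    s = 0.5 * (fn(a) + fn(b))
    for k in range(1, n):
        s += fn(a + k * h)
    return s * h

def j0(z, n=2000):
    """J_0(z) = (2/pi) * int_0^{pi/2} cos(z sin t) dt."""
    return (2 / math.pi) * trap(lambda t: math.cos(z * math.sin(t)),
                                0.0, math.pi / 2, n)

def schlomilch_coeffs(f, fprime, M, n=800):
    """a_m = (2/pi) * int_0^pi g(u) cos(m u) du for m = 0..M, where
    g(u) = f(0) + u * int_0^{pi/2} f'(u sin t) dt."""
    h = math.pi / n
    # cache g on the quadrature grid
    g = [f(0.0) + (k * h) * trap(lambda t: fprime(k * h * math.sin(t)),
                                 0.0, math.pi / 2, 400)
         for k in range(n + 1)]
    coeffs = []
    for m in range(M + 1):
        s = 0.5 * (g[0] + g[n] * math.cos(m * math.pi))
        for k in range(1, n):
            s += g[k] * math.cos(m * k * h)
        coeffs.append((2 / math.pi) * s * h)
    return coeffs

def schlomilch_sum(coeffs, x):
    """Truncated Schlomilch series (1/2) a_0 + sum_{m>=1} a_m J_0(m x)."""
    return 0.5 * coeffs[0] + sum(a * j0(m * x)
                                 for m, a in enumerate(coeffs[1:], start=1))

a = schlomilch_coeffs(math.cos, lambda u: -math.sin(u), M=150)
for x in (0.5, 1.5, 2.5):
    print(x, schlomilch_sum(a, x), math.cos(x))  # pairs agree to a few decimals
```

(The unified formula for $a_m$ above is equivalent to Watson’s: for $m\geq 1$ the constant $f(0)$ integrates against $\cos(mu)$ to zero.)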
*there is no unicity*! Precisely, the formula

$$\frac{1}{2}+\sum_{m\geq 1}(-1)^m J_0(mx)=0,$$

which is valid for all $0<x<\pi$ (Watson, page 634), gives a non-trivial expression of the zero function as a Schlömilch series on this open interval. However, Watson goes on to prove that this is, up to scaling, the *only* representation of the zero function. In other words, there is some kind of canonically split exact sequence associated with the problem of representing (smooth enough) functions as Schlömilch series.
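
The null series $\frac{1}{2}+\sum_{m\geq 1}(-1)^m J_0(mx)$ converges only non-absolutely, and slowly, but one can watch it vanish numerically; in the sketch below (the smoothing choice is mine) the oscillating partial sums are averaged over a window, a Cesàro-type trick:

```python
import math

def j0(z, n=2000):
    """J_0(z) = (1/pi) * int_0^pi cos(z sin t) dt, by the trapezoid rule
    (very accurate here: the integrand is smooth and pi-periodic)."""
    return sum(math.cos(z * math.sin(k * math.pi / n)) for k in range(n)) / n

def null_partial_sums(x, M=300):
    """Partial sums S_M = 1/2 + sum_{m=1}^{M} (-1)^m J_0(m x) of the null series."""
    out, s = [], 0.5
    for m in range(1, M + 1):
        s += (-1) ** m * j0(m * x)
        out.append(s)
    return out

for x in (1.0, 2.0):  # any 0 < x < pi should do
    tail = null_partial_sums(x)[-100:]
    print(x, sum(tail) / len(tail))  # averaged partial sums: close to 0
```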

The existence of the series representations above (both for general *f*, and the exceptional one for the zero function) is not very difficult to establish, using the integral representation of $J_0$. In the former case, one starts by showing that

$$g(x)=f(0)+x\int_0^{\pi/2}f'(x\sin\theta)\,d\theta$$

is a solution of the integral equation

$$f(x)=\frac{2}{\pi}\int_0^{\pi/2}g(x\sin\theta)\,d\theta,$$

and then expands *g* into a Fourier (cosine) series (which is possible if *f* is smooth enough, e.g., $C^2$).
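
This step is easy to test numerically. In the sketch below (helper names are mine; the exact integrals are replaced by trapezoid quadrature), $f(x)=x^2$ is a convenient sample because the recipe $g(x)=f(0)+x\int_0^{\pi/2}f'(x\sin\theta)\,d\theta$ then gives $g(x)=2x^2$ in closed form; any other smooth $f$ can be checked the same way:

```python
import math

def trap(fn, a, b, n=500):
    """Composite trapezoid rule."""
    h = (b - a) / n
    s = 0.5 * (fn(a) + fn(b))
    for k in range(1, n):
        s += fn(a + k * h)
    return s * h

def g_of(f, fprime, x):
    """Candidate solution g(x) = f(0) + x * int_0^{pi/2} f'(x sin t) dt."""
    return f(0.0) + x * trap(lambda t: fprime(x * math.sin(t)), 0.0, math.pi / 2)

def recovered_f(f, fprime, x):
    """(2/pi) * int_0^{pi/2} g(x sin t) dt, which should give back f(x)."""
    return (2 / math.pi) * trap(lambda t: g_of(f, fprime, x * math.sin(t)),
                                0.0, math.pi / 2)

sq, dsq = (lambda u: u * u), (lambda u: 2 * u)
print(g_of(sq, dsq, 1.3), 2 * 1.3 ** 2)    # both ≈ 3.38
print(recovered_f(sq, dsq, 1.3), 1.3 ** 2)  # both ≈ 1.69
print(recovered_f(math.cos, lambda u: -math.sin(u), 2.0), math.cos(2.0))
```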

Watson’s determination of the Schlömilch series representing zero is also interesting: it is more or less similar to Riemann’s approach to trigonometric series. I think the latter is not as well-known today as it was some decades ago; I only remember reading something about it in Zygmund’s classical book on trigonometric series (another very — in fact, even more — delightful book; I’ve heard a very well-known mathematician say during a lecture that it would be on his short-list of books to bring to a desert island). Very roughly speaking, one of the issues is, given a trigonometric series

$$\frac{1}{2}a_0+\sum_{n\geq 1}(a_n\cos nx+b_n\sin nx)$$

(in old-fashioned writing…), with the only assumption that it converges pointwise at all points in *[0,2π]* and that the sum is zero, to show that every coefficient is zero. The basic idea of Riemann is to consider the series obtained by integrating formally twice; the latter is absolutely convergent everywhere (having gained a factor $1/n^2$), and represents a continuous function *F*; one then shows that some kind of generalized second derivative of *F* is everywhere zero, and one can deduce from this that *F* is a linear function. The linear term having been shown to vanish, one is left with a uniformly convergent trigonometric series vanishing identically, and multiplying by cosines and sines and integrating, one gets the fact that every coefficient is zero. What is nice and surprising is how little of the property of orthogonality is used. This probably explains why the method can be adapted to Schlömilch series, since the corresponding functions are not orthogonal!
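
Riemann’s device is easy to see in action on a toy example (everything below is my own choice of truncated series, so it only illustrates the mechanism, not the delicate convergence questions): take $a_0=1$, $a_n=1/n^2$, $b_n=0$; integrating formally twice gives $F(x)=\frac{x^2}{4}-\sum_n \cos(nx)/n^4$, and the generalized (Schwarz) second derivative $(F(x+h)+F(x-h)-2F(x))/h^2$ recovers the sum of the original series:

```python
import math

N = 200  # common truncation for F and for the original series

def F(x):
    """Riemann's function: the trigonometric series formally integrated twice."""
    return x * x / 4 - sum(math.cos(n * x) / n ** 4 for n in range(1, N + 1))

def schwarz_d2(x, h=1e-3):
    """Generalized (Schwarz) second derivative of F."""
    return (F(x + h) + F(x - h) - 2 * F(x)) / (h * h)

def original_series(x):
    """(1/2) a_0 + sum a_n cos(nx), with a_0 = 1 and a_n = 1/n^2."""
    return 0.5 + sum(math.cos(n * x) / n ** 2 for n in range(1, N + 1))

for x in (0.8, 2.0):
    print(x, schwarz_d2(x), original_series(x))  # the two columns agree
```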

Of course, notwithstanding my disparaging comments on “big pictures”, I’d be delighted to learn that these two facts are related to matrix coefficients or to *D*-modules. And at least I am pretty certain to incorporate them one day in a course.

I couldn’t think of a more trivial question to ask, but here goes: why “J” for Bessel functions? I’ve heard many people speculate, but does anyone know the true etymology of using “J” ? Seems like a fairly arbitrary letter to use…