Given a unitary matrix $U=(u_{i,j})$ of finite size $n$, it is a tautology that the column vectors of $U$ are orthonormal, and in particular that

$$\sum_{1\le i\le n} u_{i,j}\overline{u_{i,k}}=0$$

for any $j\not=k$. This has an immediate analogue for a unitary operator $U\,:\, H\rightarrow H$, if $H$ is a separable Hilbert space: given any orthonormal basis $(e_i)_{i\ge 1}$ of $H$, we can define the “matrix” $(a_{i,j})$ representing $U$ by

$$a_{i,j}=\langle Ue_j,e_i\rangle,$$

and the “column vectors” $(a_{i,j})_{i\ge 1}$, for distinct indices $j$, are orthogonal in the $\ell^2$-sense: we have

$$\sum_{i\ge 1} a_{i,j}\overline{a_{i,k}}=0$$

if $j\not=k$.
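As a trivial sanity check, this can be verified numerically; here is a short Python sketch (entirely my own, with made-up helper names) using the unitary matrix of the discrete Fourier transform:

```python
import cmath

def dft_matrix(n):
    # The discrete Fourier transform matrix, normalized so that it is unitary:
    # u_{j,k} = exp(-2*pi*i*j*k/n) / sqrt(n)
    return [[cmath.exp(-2j * cmath.pi * j * k / n) / n ** 0.5
             for k in range(n)] for j in range(n)]

def column_inner_product(U, j, k):
    # sum over rows i of u_{i,j} * conjugate(u_{i,k})
    return sum(row[j] * row[k].conjugate() for row in U)

U = dft_matrix(8)
print(abs(column_inner_product(U, 2, 5)))  # ~ 0: distinct columns are orthogonal
print(abs(column_inner_product(U, 3, 3)))  # ~ 1: each column is a unit vector
```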

Now assume that $H$ is some $L^2$ space, say $H=L^2(X,\mu)$, and $U$ is an integral operator on $H$ given by a kernel $k(x,y)$, so that

$$Uf(x)=\int_X k(x,y)f(y)\,d\mu(y)$$

for $f\in H$ and (almost) all $x\in X$.

Intuitively, the values of the kernel form a kind of “continuous matrix” representing $U$. The question is: are its columns orthogonal? In other words, given $y_1\not=y_2$ in $X$, do we have

$$\int_X k(x,y_1)\overline{k(x,y_2)}\,d\mu(x)=0?$$
If one remembers the fact that “nice” kernels define trace class integral operators in such a way that the trace can be recovered as the integral

$$\mathrm{Tr}(U)=\int_X k(x,x)\,d\mu(x)$$

over the diagonal (the basis of the *trace formula* for automorphic forms…), this sounds rather reasonable. There is however a difficulty: it is not so easy to write down kernels which both define a *unitary* operator, and are such that the integrals

$$\int_X k(x,y_1)\overline{k(x,y_2)}\,d\mu(x)$$

are well-defined in the usual sense! For instance, the most important unitary integral operator is certainly the Fourier transform, defined on $L^2(\mathbb{R})$, and its kernel is

$$k(x,y)=e^{-2i\pi xy},$$

for which the integrals above are all undefined in the Lebesgue sense. This is natural: if the kernel were square-integrable on $X\times X$, for instance, the corresponding integral operator on $L^2(X,\mu)$ would be compact, and its spectrum could not be contained in the unit circle (excluding the degenerate case of a finite-dimensional $L^2$-space.)
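One can see concretely how regularization restores the orthogonality for the Fourier kernel. The following Python sketch (my own illustration, not anything from the post) damps the columns by a Gaussian factor $e^{-\epsilon x^2}$; the regularized inner product is then the convergent integral $\sqrt{\pi/\epsilon}\,e^{-\pi^2(y_1-y_2)^2/\epsilon}$, which tends to $0$ as $\epsilon\to 0$ whenever $y_1\not=y_2$:

```python
import cmath, math

def regularized_column_inner(y1, y2, eps, L=12.0, n=24000):
    # Midpoint-rule approximation of
    #   integral over [-L, L] of e^{-2 pi i x y1} * conj(e^{-2 pi i x y2}) * e^{-eps x^2} dx,
    # i.e. a Gaussian-regularized inner product of two "columns" of the Fourier kernel.
    h = 2 * L / n
    total = 0j
    for i in range(n):
        x = -L + (i + 0.5) * h
        total += cmath.exp(-2j * math.pi * x * (y1 - y2)) * math.exp(-eps * x * x)
    return total * h

# Exact value is sqrt(pi/eps) * exp(-pi^2 (y1 - y2)^2 / eps): it vanishes as eps -> 0.
for eps in (1.0, 0.5, 0.25):
    print(eps, abs(regularized_column_inner(0.0, 1.0, eps)))
```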

This probably explains why this question of orthogonality of column vectors is not to be found in standard textbooks. There are some examples, however, where things do work.

We consider the space $H=L^2(\mathbb{R},dx)$ and, as in the previous post, we look at the unitary operator $T$

where $\pi$ is the principal series representation with eigenvalue $1/4$ of $PGL_2(\mathbb{R})$. The result of Cogdell and Piatetski-Shapiro already mentioned there shows that $T$ is, indeed, a unitary operator given by a smooth kernel $k(x,y)=j(xy)$ for some function $j$ on $\mathbb{R}^{\times}$. This function is explicit, and (as expected) not very integrable: we have

Since it is classical that these Bessel functions decay only like $x^{-1/2}$ as $x\rightarrow +\infty$, this function is neither integrable nor square-integrable. *But*, the function $j$ on $]-\infty,0[$ decays exponentially at infinity! This means that the integrals $\int_{\mathbb{R}}k(x,y_1)\overline{k(x,y_2)}\,dx$, which are given by

*make perfect sense* when $y_1$ and $y_2$ have opposite sign (this requires also knowing that there is no problem at $0$, but that is indeed the case, because the Bessel functions here have just a logarithmic singularity there, and the factors in the kernel eliminate the singularity in the integral.)
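For readers who want to see this exponential decay numerically: in such situations the decay on the negative side comes from a Bessel function of $K$-type, and the classical integral representation $K_0(x)=\int_0^{+\infty}e^{-x\cosh t}\,dt$ makes it easy to check the asymptotic $K_0(x)\sim\sqrt{\pi/(2x)}\,e^{-x}$ in pure Python (this sketch is mine; it does not reproduce the actual kernel of $T$):

```python
import math

def k0(x, T=12.0, n=20000):
    # Midpoint-rule approximation of the classical integral representation
    #   K_0(x) = integral over [0, +inf) of exp(-x * cosh(t)) dt,
    # truncated at t = T, where the integrand is exp(-x*cosh(12)), i.e. negligible.
    h = T / n
    return sum(math.exp(-x * math.cosh((i + 0.5) * h)) for i in range(n)) * h

# K_0 decays exponentially: K_0(x) ~ sqrt(pi/(2x)) * e^{-x} as x -> +infinity,
# so the last column below should be close to 1.
for x in (5.0, 10.0):
    print(x, k0(x), k0(x) * math.exp(x) / math.sqrt(math.pi / (2 * x)))
```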

It should not be a surprise then that we have

$$\int_{\mathbb{R}}k(x,y_1)\overline{k(x,y_2)}\,dx=0$$

for $y_1y_2<0$. This boils down to an identity for integrals of Bessel functions that can be found in (combinations of) standard tables, or it can be proved more conceptually by viewing

$$\int_{\mathbb{R}}k(x,y_1)\overline{k(x,y_2)}\,dx$$

as limit of

$$\langle T\varphi_1,T\varphi_2\rangle$$

for the function $\varphi_1$ which is the normalized characteristic function of the interval of radius $\epsilon>0$ around $y_1$, and similarly for $\varphi_2$. Since

$$\langle \varphi_1,\varphi_2\rangle=0$$

when $\epsilon$ is small enough, the unitarity gives

$$\langle T\varphi_1,T\varphi_2\rangle=\langle \varphi_1,\varphi_2\rangle=0,$$

and one must take the limit $\epsilon\rightarrow 0$, which is made relatively easy by the exponential decay of $j$ at infinity…
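For the Fourier transform, where unitarity is classical, the same limiting argument can be carried out completely explicitly: the transform of the normalized characteristic function of an interval is a sinc-type function, and Plancherel forces the transforms of two disjointly supported such functions to be orthogonal in $L^2$. Here is a Python sketch of this (my own stand-in illustration, since the kernel of $T$ is not reproduced in this post):

```python
import cmath, math

def hat_indicator(y, eps, xi):
    # Fourier transform (kernel e^{-2 pi i x xi}) of the normalized
    # characteristic function of [y - eps, y + eps].
    if xi == 0.0:
        return 2 * eps / math.sqrt(2 * eps) + 0j
    return cmath.exp(-2j * math.pi * y * xi) * math.sin(2 * math.pi * eps * xi) \
        / (math.pi * xi * math.sqrt(2 * eps))

def inner_of_transforms(y1, y2, eps, M=300.0, n=120000):
    # Midpoint rule for the integral over [-M, M] of
    #   hat_indicator(y1, eps, xi) * conj(hat_indicator(y2, eps, xi)) d(xi);
    # by Plancherel this equals <phi_1, phi_2>.
    h = 2 * M / n
    total = 0j
    for i in range(n):
        xi = -M + (i + 0.5) * h
        total += hat_indicator(y1, eps, xi) * hat_indicator(y2, eps, xi).conjugate()
    return total * h

print(abs(inner_of_transforms(0.0, 1.0, 0.25)))  # ~ 0: disjoint supports
print(abs(inner_of_transforms(0.0, 0.0, 0.25)))  # ~ 1: unit norm
```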

This is nice, but here comes a challenge: if one spells out this identity in terms of Bessel functions, what needs to be done is equivalent to showing that the function

defined for positive values of its argument, is *antisymmetric*: we have

Now, this fact is an “elementary” property of classical functions. **Can one prove it directly?** (By which I mean, without using the operator interpretation, but also without using an explicit formula for the integral…) For the moment, I have not succeeded…

I’ll conclude by correcting a mistake in my previous post (it should not be a surprise to anyone that if I attempt to be as clever as Euler, I may stumble rather badly, and the correction is in some sense rather small compared with what one might expect)… There I claimed that the integral transform appearing in the Voronoi formula for the divisor function is given by

But this is not the case: the proper formula is

where if , but if . This affects the final formula: we have

instead of the claimed

(the "proof" using the Fourier transform has the same mistake of using instead of , so there is no contradiction between the informal argument and the rigorous one.)