Orthogonality of columns of integral unitary operators: a challenge

Given a unitary matrix A=(a_{i,j}) of finite size, it is a tautology that the column vectors of A are orthonormal, and in particular that
\sum_{i} a_{i,j} \overline{a_{i,k}} =0
for any $j\not=k$. This has an immediate analogue for a unitary operator U\,:\, H\rightarrow H, if H is a separable Hilbert space: given any orthonormal basis (e_n)_{n\geq 1} of H, we can define the “matrix” (a_{i,j})_{i,j\geq 1} representing U by
U(e_j)=\sum_{i\geq 1}a_{i,j}e_i,
and the “column vectors” (a_{i,j})_{i\geq 1}, for distinct indices j, are orthogonal in the \ell_2-sense: we have
0=\langle e_j,e_k\rangle = \langle U(e_j),U(e_k)\rangle=\sum_{i}a_{i,j}\overline{a_{i,k}}
if j\not=k.
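In the finite-dimensional case, this is easy to verify numerically. Here is a small sketch (my own addition, not part of the argument), which builds a random unitary matrix and checks the orthonormality of its columns:

```python
import numpy as np

# A sketch of the finite-dimensional statement: build a random unitary
# matrix as the Q factor of a QR decomposition of a complex Gaussian
# matrix, then check that its columns are orthonormal.
rng = np.random.default_rng(0)
n = 6
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A, _ = np.linalg.qr(M)

# Column orthonormality: sum_i a_{i,j} conj(a_{i,k}) = delta_{j,k}.
gram = A.conj().T @ A
assert np.allclose(gram, np.eye(n))

# One pair of distinct columns, written as the sum in the text.
s = np.sum(A[:, 0] * np.conj(A[:, 1]))
assert abs(s) < 1e-12
```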

Now assume that H is some L^2 space, say H=L^2(X,\mu), and U is an integral operator on H given by a kernel k\,:\, X\times X\rightarrow \mathbf{C}, so that
U\varphi(x)=\int_{X}k(x,y)\varphi(y)d\mu(y)
for \varphi \in L^2(X,\mu).
Intuitively, the values k(x,y) of the kernel form a kind of “continuous matrix” representing U. The question is: are its columns orthogonal? In other words, given y\not=z in X, do we have
\int_{X}k(x,y)\overline{k(x,z)}d\mu(x)=0\,?

If one remembers the fact that “nice” kernels define trace class integral operators in such a way that the trace can be recovered as the integral
\int_{X}k(x,x)d\mu(x)
over the diagonal (the basis of the trace formula for automorphic forms…), this sounds rather reasonable. There is however a difficulty: it is not so easy to write kernels k(x,y) which both define a unitary operator, and are such that the integrals
(\star)\quad\quad\quad\quad \int_{X}k(x,y)\overline{k(x,z)}d\mu(x)
are well-defined in the usual sense! For instance, the most important unitary integral operator is certainly the Fourier transform, defined on L^2(\mathbf{R},dx), and its kernel is
k(x,y)=e^{2i\pi xy},
for which the integrals above are all undefined in the Lebesgue sense. This is natural: if the kernel k(x,y) were square integrable on X\times X, for instance, the corresponding integral operator on L^2(X,\mu) would be compact, and its spectrum could not be contained in the unit circle (excluding the degenerate case of a finite-dimensional L^2-space.)

This probably explains why this question of orthogonality of column vectors is not to be found in standard textbooks. There are some examples however where things do work.

We consider the space H=L^2(\mathbf{R}^*,|x|^{-1}dx), and as in the previous post, we look at the unitary operator
T=\rho\begin{pmatrix}0&-1\\1&0\end{pmatrix},
where \rho is the principal series representation with eigenvalue 1/4 of \mathrm{PGL}_2(\mathbf{R}). The result of Cogdell and Piatetski-Shapiro already mentioned there shows that T is, indeed, a unitary operator given by a smooth kernel k(x,y)=j(xy) for some function j on \mathbf{R}^*. This function is explicit, and (as expected) not very integrable: we have
j(x)=\begin{cases}-2\pi \sqrt{x}\,Y_0(4\pi\sqrt{x})&\text{ for } x>0,\\4\sqrt{|x|}\,K_0(4\pi\sqrt{|x|})&\text{ for } x<0.\end{cases}

Since it is classical that Y_0(x)\approx x^{-1/2} for x\rightarrow +\infty, this function is neither integrable nor square-integrable. But the function K_0 on [0,+\infty[ decays exponentially at infinity! This means that the integrals (\star), which are given here by
\int_{\mathbf{R}^*}j(xy)\overline{j(xz)}\frac{dx}{|x|},
make perfect sense when y and z have opposite signs (this also requires knowing that there is no problem at 0, but that is indeed the case, because the Bessel functions here have just a logarithmic singularity there, and the factors \sqrt{|x|} eliminate the |x|^{-1} in the integral.)

It should not be a surprise then that we have
\int_{\mathbf{R}^*}j(xy)\overline{j(xz)}\frac{dx}{|x|}=0
for yz<0. This boils down to an identity for integrals of Bessel functions that can be found in (combinations of) standard tables, or it can be proved more conceptually by viewing
k(x,y)
as the limit, as \epsilon\rightarrow 0, of
\frac{1}{2\epsilon}\int_{|u-y|<\epsilon} k(x,u)du,
which is T(f_{y,\epsilon}) for the function f_{y,\epsilon} which is the normalized characteristic function of the interval of radius \epsilon around y, and similarly for z. Since
\langle f_{y,\epsilon},f_{z,\epsilon}\rangle =0
when \epsilon is small enough, the unitarity gives
\int_{\mathbf{R}^*} Tf_{y,\epsilon}(x)\overline{Tf_{z,\epsilon}(x)}\frac{dx}{|x|}=0,
and one must take the limit \epsilon\rightarrow 0, which is made relatively easy by the exponential decay of K_0 at infinity…
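For readers who enjoy checking such statements numerically, here is a small sketch (my own addition, not part of the argument) that computes the column inner product of the kernel k(x,y)=j(xy) for one pair of points of opposite signs; since j is real-valued, the conjugation can be dropped:

```python
import numpy as np
from scipy import integrate, special

# Numerical sanity check: for y = 1 and z = -2, the "column inner product"
#   int_{R^*} j(xy) j(xz) dx/|x|
# of the kernel k(x,y) = j(xy) converges (thanks to the exponential decay
# of K_0) and should be essentially zero.
def j(x):
    if x == 0.0:
        return 0.0
    if x > 0:
        return -2 * np.pi * np.sqrt(x) * special.y0(4 * np.pi * np.sqrt(x))
    return 4 * np.sqrt(-x) * special.k0(4 * np.pi * np.sqrt(-x))

y, z = 1.0, -2.0  # opposite signs, so a K_0 factor always appears

def integrand(x):
    return j(x * y) * j(x * z) / abs(x)

# The K_0 factors are negligible beyond |x| = 30, so truncation is harmless.
pos, _ = integrate.quad(integrand, 0, 30, limit=400)
neg, _ = integrate.quad(integrand, -30, 0, limit=400)
total = pos + neg
```

The two half-line contributions are each of appreciable size, and cancel to within the quadrature error.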

This is nice, but here comes a challenge: if one spells out this identity in terms of Bessel functions, what needs to be done is equivalent to showing that the function
K(a,b)=\int_{0}^{+\infty}Y_0(ax)K_0(bx)\,x\,dx,
defined for a,b>0, is antisymmetric: we have
K(a,b)=-K(b,a).
Now, this fact is an “elementary” property of classical functions. Can one prove it directly? (By which I mean: without using the operator interpretation, but also without using an explicit formula for the integral…) For the moment, I have not succeeded…
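At least the antisymmetry is easy to test numerically; here is a quick check (not, of course, a proof):

```python
import numpy as np
from scipy import integrate, special

# Numerical check of the claimed antisymmetry K(a, b) = -K(b, a).
# The factor K_0(bx) decays exponentially, so truncating the integral
# at x = 40 is harmless.
def K(a, b):
    val, _ = integrate.quad(
        lambda x: special.y0(a * x) * special.k0(b * x) * x,
        0, 40, limit=400,
    )
    return val

assert abs(K(2.0, 3.0) + K(3.0, 2.0)) < 1e-6
```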

I’ll conclude by correcting a mistake in my previous post (it should not be a surprise to anyone that if I attempt to be as clever as Euler, I may stumble rather badly, and the correction is in some sense rather small compared with what one might expect)… There I claimed that the integral transform w\mapsto W appearing in the Voronoi formula for the divisor function is given by
W=T(w(|\cdot|)).
But this is not the case: the proper formula is
W=T(\tilde{w}),
where \tilde{w}(x)=w(x) if x>0, but \tilde{w}(x)=0 if x<0. This changes the final formula accordingly (the “proof” using the Fourier transform has the same mistake of using w(|xy|) instead of \tilde{w}(xy), so there is no contradiction between the informal argument and the rigorous one.)

Published by


I am a professor of mathematics at ETH Zürich since 2008.
