In my previous post, I said I would write some account of what Kolmogorov and Barzdin did in their paper, where unnamed expanders first appear. I had not really started writing this second post, which turns out to be a good thing, since the arXiv carried this morning a new paper of M. Gromov and L. Guth where the Kolmogorov-Barzdin paper is discussed and expanded, all certainly done with much greater insight than I would have brought…
Orthogonality of matrix coefficients in positive characteristic
The course I am teaching on representation theory is essentially the first time I’ve taught a topic which I myself never had the occasion to learn from a regular class. Because of this, some things are already coming up which I did not really know, or did not appreciate properly, before. Here is one I find amusing: one of the basic uses of representation theory, for a finite group $G$, is to analyze in a convenient way the complex-valued functions on the group. Indeed, suitable “matrix coefficients”
$$\varphi_{v,w}(g)=\langle \varrho(g)v,w\rangle$$
of irreducible unitary representations $\varrho$ of $G$ turn out to form an orthonormal basis of this space of functions, for the inner product associated to the uniform probability measure on $G$:
$$\langle f_1,f_2\rangle=\frac{1}{|G|}\sum_{g\in G}f_1(g)\overline{f_2(g)}.$$
This fact is obtained from the inner-product relations
$$\langle \varphi_{v,w},\varphi_{v',w'}\rangle=\frac{1}{\dim(\varrho)}\langle v,v'\rangle\overline{\langle w,w'\rangle},$$
where $v$, $w$, $v'$, $w'$ are vectors in the space of a fixed irreducible unitary representation $\varrho$ (the representations are the same here because, if one uses non-isomorphic ones, the inner products of their respective matrix coefficients are all zero).
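These relations are easy to check numerically. Here is a small sanity check (my own illustration, not part of the post), using the standard real orthogonal model of the two-dimensional irreducible representation of $S_3$, realized as the symmetry group of an equilateral triangle:

```python
import itertools
import numpy as np

# Orthogonal model of the 2-dimensional irreducible representation
# of S_3: generated by a rotation of 120 degrees and a reflection.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -s], [s, c]])          # rotation by 120 degrees
f = np.array([[1.0, 0.0], [0.0, -1.0]])  # a reflection

# the six group elements: r^k and r^k * f, k = 0, 1, 2
G = [np.linalg.matrix_power(r, k) @ m
     for k in range(3) for m in (np.eye(2), f)]

dim = 2
# Schur orthogonality: <rho_ij, rho_kl> = delta_ik delta_jl / dim,
# for the inner product (1/|G|) sum_g f1(g) conj(f2(g)).
for i, j, k, l in itertools.product(range(2), repeat=4):
    inner = sum(g[i, j] * np.conj(g[k, l]) for g in G) / len(G)
    expected = (1 / dim) if (i == k and j == l) else 0.0
    assert abs(inner - expected) < 1e-12

# Matrix coefficients of non-isomorphic irreducibles are orthogonal:
# here, tested against the sign representation g -> det(g).
for i, j in itertools.product(range(2), repeat=2):
    inner = sum(g[i, j] * np.linalg.det(g) for g in G) / len(G)
    assert abs(inner) < 1e-12
print("Schur orthogonality verified for S_3")
```

The last loop illustrates the parenthetical remark above: coefficients coming from two non-isomorphic irreducible representations are orthogonal.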
Now suppose one considers the irreducible representations of $G$ over an algebraically-closed field $k$ of possibly positive characteristic, coprime to $|G|$ however. Then things work out quite similarly as in the complex case, and the analogue of the inner product for $k$-valued functions is
$$[f_1,f_2]=\frac{1}{|G|}\sum_{g\in G}f_1(g)f_2(g^{-1}),$$
which makes sense since one can divide by the order of $G$ in $k$. The matrix coefficients are defined now by
$$\varphi_{\lambda,v}(g)=\langle \lambda,\varrho(g)v\rangle$$
for some $k$-linear form $\lambda$ on the space of $\varrho$, and some vector $v$ in that space, using the duality-bracket notation.
The inner product formula — with obvious notation — is now
$$[\varphi_{\lambda,v},\varphi_{\lambda',v'}]=\frac{1}{\dim(\varrho)}\langle \lambda,v'\rangle\langle \lambda',v\rangle.$$
The cute thing is that this makes sense because the dimensions of irreducible representations of $G$ over $k$ are also invertible in $k$. (The proof really proceeds by showing the identity
$$\frac{1}{|G|}\sum_{g\in G}\varrho(g)A\varrho(g^{-1})=\frac{\mathrm{Tr}(A)}{\dim(\varrho)}\,\mathrm{Id}$$
for any endomorphism $A$ of the space of $\varrho$, where the dimension of $\varrho$ arises as the trace of the identity matrix.) And these dimensions are indeed invertible in $k$ because it is known that they divide the order of $G$, and so our assumption on the field gives the result.
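For a toy example of the positive-characteristic statement (again my own illustration), take $G=\mathbf{Z}/3\mathbf{Z}$ and $k=\mathbf{F}_7$: the order $3$ is invertible modulo $7$, and $\mathbf{F}_7$ contains the cube roots of unity, so the three irreducible representations are the three characters, and the orthogonality relations can be checked directly:

```python
# Orthogonality of characters of G = Z/3Z over the field k = F_7:
# |G| = 3 is invertible mod 7, and F_7 contains cube roots of unity.
p, n = 7, 3
omega = 2              # 2^3 = 8 = 1 mod 7: a primitive cube root of unity
inv_n = pow(n, -1, p)  # 1/|G| computed in F_7

# the three irreducible (1-dimensional) representations of Z/3Z over F_7
def chi(j, x):
    return pow(omega, (j * x) % n, p)

# bilinear form replacing the Hermitian inner product:
# [f1, f2] = (1/|G|) * sum_x f1(x) * f2(-x), computed in F_7
def pairing(i, j):
    return inv_n * sum(chi(i, x) * chi(j, (-x) % n) for x in range(n)) % p

for i in range(n):
    for j in range(n):
        assert pairing(i, j) % p == (1 if i == j else 0)
print("orthonormality of the characters of Z/3Z holds over F_7")
```

The pairing $[f_1,f_2]=|G|^{-1}\sum_x f_1(x)f_2(-x)$ used here is the bilinear analogue of the inner product described above, with $g^{-1}$ written additively.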
Now for the query: is there a simpler way to see that $\dim(\varrho)$ is non-zero in $k$? My quick look at the books on my shelf shows how to derive this from (the easy part of) the theory of Brauer characters (which shows in particular that the (multi)sets of dimensions over $\mathbf{C}$ and over $k$ are the same) and the corresponding fact for the complex numbers (where the divisibility is proved using simple facts about algebraic integers), but it seems to me that a more straightforward argument might be possible…
[Update (18.03.2011): as many readers certainly realized, this is indeed the case: the start of the proof of the divisibility of $|G|$ by the dimension of irreducible representations is to show something like
$$\dim(\varrho)\,c=|G|$$
for some $c\in k$, by taking the trace of a suitable average of the $\varrho(g)$. This shows that if the order of the group is invertible in the base field, so is the dimension.]
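The averaging mechanism behind this can be seen numerically (over the complex numbers, where there is no invertibility issue); the following sketch, again for the two-dimensional representation of $S_3$, is only an illustration of the identity, not the characteristic-$p$ proof itself:

```python
import numpy as np

# Orthogonal model of the 2-dimensional irreducible representation of S_3
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -s], [s, c]])
f = np.array([[1.0, 0.0], [0.0, -1.0]])
G = [np.linalg.matrix_power(r, k) @ m
     for k in range(3) for m in (np.eye(2), f)]

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))  # an arbitrary endomorphism

# Schur averaging: (1/|G|) sum_g rho(g) A rho(g)^{-1} is a scalar matrix,
# equal to (Tr A / dim) * Id.  Taking traces gives Tr A = lambda * dim,
# with dim arising as the trace of the identity matrix; solving for
# lambda is exactly where invertibility of dim in the base field is used.
P = sum(g @ A @ np.linalg.inv(g) for g in G) / len(G)
assert np.allclose(P, (np.trace(A) / 2) * np.eye(2))
print("averaged matrix is scalar with trace Tr(A)")
```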
Representation theory course
The Spring semester at ETH is starting next week, and I will be teaching an introductory course on representation theory (for third-year and later students). I am looking forward to it, since this is the first time I teach a class identified as “algebra” (except for linear algebra, of course).
My lecture notes will be available as I prepare them (the first chapter will be ready soon, at least in draft form) and it will be seen that (partly because of my own bias) I think of a representation of a group as a homomorphism $G\rightarrow GL(V)$, and not as a module over the group algebra. I certainly intend to mention the latter approach at some point (indeed, I have in preparation a long “Variants” chapter, where I mention also, e.g., topological representations, unitary representations, permutation representations, projective representations, Lie algebra representations, etc., with basic examples), but I find it useful to go through the elementary theory of representations completely “by hand”, for a course at this level. In some sense, this is because this is what I can do best, with my own knowledge; but also, going through these basic facts purely with the tools and language of representations-as-homomorphisms does provide excellent opportunities to start seeing various ways of using representation theory (I will for instance prove Burnside’s irreducibility criterion and the linear independence of matrix coefficients using my understanding of Burnside’s original arguments). And I do intend to use this course to introduce some functorial language to the students, and I feel that abstract nonsense will be quite appealing after trying to make sense of the confusion that may arise from proving the transitivity formula for induction strictly using the “subset of functions on the group” model for induced representations…
Here’s a question about the module-over-the-group-algebra approach: what is the first (or simplest, or most fundamental) argument where it is better? The one I can think of for the moment — after going through the first chapter of my notes — is that it seems to give the simplest way to argue that a semisimple representation always contains an irreducible subrepresentation. (And of course this is only relevant for infinite-dimensional representations, since otherwise one can argue using dimension.)
[Update (23.2.2011): the first chapters are now online.]
Numerical oddities…
If, like me, you consider yourself a “serious” ( ™ ) arithmetician, you may also have started spurning at an early age the type of numerical coincidences, sometimes found in dictionaries of remarkable numbers, that state that, for instance, is the only integral solution to the equation
However, maybe I shouldn’t be so dismissive. Today, after a discussion with P-O Dehaye, which was certainly serious enough since the Ramanujan $\tau$ function featured prominently, the question came up of understanding the solutions in
of the system
(everything modulo ).
This sounds rather innocuous, but something I find rather amusing happens: you can check for yourself that the solutions are either (i) diagonal (all are equal) or (ii) a permutation of
(i.e., no two
coincide). Why is that? Well, first both (i) and (ii) do describe solutions. Clearly (( ™ ) again) there are
solutions in total; there are
diagonal ones, and
permutations; since
there can be no other solution.
Is there a more direct proof?
Kolmogorov and expanders, I
As I already mentioned, A. Lubotzky pointed out, in his recent Colloquium Lectures, that expander graphs (which were usually said to go back to a paper of Pinsker in 1973) already appeared in a paper of Barzdin and Kolmogorov from 1967. This was naturally in Russian, but the paper is reproduced in English translation in the selected works of Kolmogorov.
The paper is very interesting. It contains, indeed, a definition of what are recognizably expanders; it contains, before Pinsker, and with a similar probabilistic argument, a proof that “most” graphs, in a certain sense, are expanders; and it contains, most interestingly for me, an application of expanders which is of great interest, and which I didn’t know before. (I’d be interested to know if other readers had already heard of it).
In a second post, I will try to explain what Barzdin and Kolmogorov did in some detail, as far as my understanding goes. For the moment, before a few general remarks, here is the problem that they solve. The question is roughly: “how much volume does one need to embed a graph in?” More precisely, they consider a finite oriented graph $\Gamma$; this may not be planar, but it is clear that $\Gamma$ can be represented (without edge-crossings) in $\mathbf{R}^3$. Of course, once this is done, scaling leads to a copy of the graph in an arbitrarily small volume, but what happens if the vertices are fattened-up a bit, and the edges are constrained not to come too close to each other? Then the question becomes interesting. The motivation mentioned in the paper, attributed to Kolmogorov, has to do with networks of neurons in the brain, but I think it has obvious mathematical interest. And the main results of Barzdin and Kolmogorov give the following answer: (1) any “fat” network with $n$ vertices can be embedded in a sphere of radius about $\sqrt{n}$ in space; (2) this can not be improved in general, and most strikingly, a sequence $(\Gamma_n)$ of expander graphs requires at least volume $n^{3/2}$, up to a fixed multiplicative constant, asymptotically as $n$ grows. To show that (2) is not an empty statement, Barzdin and Kolmogorov show, as already mentioned, that expanders exist, and even better, with a suitable model, that “most” graphs are expanders. This is the result that was (it turns out) re-discovered by Pinsker.
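To see the “most graphs are expanders” phenomenon concretely, here is a small brute-force experiment (my own illustration; this is not the model used in the paper): sample 3-regular multigraphs as unions of three random perfect matchings, and compute the edge-expansion constant $h(\Gamma)=\min_{1\le |S|\le n/2} e(S,S^c)/|S|$ exhaustively for small $n$:

```python
import itertools
import random

def random_3_regular(n, rng):
    """Union of three random perfect matchings on n vertices (n even):
    one standard model in which most graphs turn out to be expanders."""
    edges = []
    for _ in range(3):
        v = list(range(n))
        rng.shuffle(v)
        edges += [(v[i], v[i + 1]) for i in range(0, n, 2)]
    return edges  # multi-edges are possible and counted with multiplicity

def edge_expansion(n, edges):
    """h = min over subsets S with 1 <= |S| <= n/2 of e(S, S^c)/|S|,
    computed by exhausting all subsets (only feasible for small n)."""
    h = float('inf')
    for k in range(1, n // 2 + 1):
        for subset in itertools.combinations(range(n), k):
            S = set(subset)
            cut = sum((a in S) != (b in S) for a, b in edges)
            h = min(h, cut / k)
    return h

rng = random.Random(1)
hs = [edge_expansion(12, random_3_regular(12, rng)) for _ in range(5)]
print(hs)  # typically bounded away from 0
```

In this model, a fixed positive lower bound on $h$ holds with probability tending to $1$ as $n$ grows; a statement of this type is what the probabilistic argument of Barzdin and Kolmogorov (and later Pinsker) establishes.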
Amusingly, 1967 is also the year when D. Kazhdan published his paper introducing Property (T). As is well-known, and explained for instance in the recent book of Bekka, de la Harpe and Valette, the first explicit construction of expanders, due to Margulis, relied on Property (T); in hindsight, it is rephrased as saying that if $G$ is an infinite discrete group with Property (T), for instance $SL_3(\mathbf{Z})$, then the Cayley graphs of all finite quotients of $G$, constructed with respect to (the image of) an arbitrary finite generating set $S$, form an expander family of graphs. (Note that it is a basic property — indeed, one of the motivating ones — that a discrete group with Property (T) is finitely generated.)
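For a concrete feeling of what such Cayley-graph expanders look like, here is a short computation (an illustration only, and for $SL_2$ rather than $SL_3$: since $SL_2(\mathbf{Z})$ does not have Property (T), the expansion of these quotients follows instead from Selberg-type bounds) building the Cayley graph of $SL_2(\mathbf{F}_5)$ with respect to the elementary unipotent matrices and their inverses, and computing its spectral gap:

```python
import numpy as np
from collections import deque

# Cayley graph of SL_2(F_p) with respect to the elementary unipotent
# generators and their inverses; for p = 5 the group has order 120.
p = 5

def mul(M, N):
    """Multiply 2x2 matrices over F_p, stored as tuples (a, b, c, d)."""
    a, b, c, d = M
    e, f, g, h = N
    return ((a * e + b * g) % p, (a * f + b * h) % p,
            (c * e + d * g) % p, (c * f + d * h) % p)

S = [(1, 1, 0, 1), (1, p - 1, 0, 1),   # A and A^{-1}
     (1, 0, 1, 1), (1, 0, p - 1, 1)]   # B and B^{-1}

# enumerate the group by breadth-first search from the identity
I = (1, 0, 0, 1)
index, queue = {I: 0}, deque([I])
while queue:
    M = queue.popleft()
    for s in S:
        N = mul(M, s)
        if N not in index:
            index[N] = len(index)
            queue.append(N)
n = len(index)
assert n == p * (p * p - 1)  # |SL_2(F_5)| = 120

# adjacency matrix of the (4-regular) Cayley graph
A = np.zeros((n, n))
for M, i in index.items():
    for s in S:
        A[i, index[mul(M, s)]] += 1

ev = np.sort(np.linalg.eigvalsh(A))[::-1]
print(ev[0], ev[1])  # ev[0] = 4 (the degree); ev[1] < 4 since connected
```

One standard formulation of the expander property for such a family is that the spectral gap, degree minus second adjacency eigenvalue, stays bounded below by a fixed positive constant as $p$ grows.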
Now here comes an idle discussion of priority and the like, which platonists might want to avoid. The paper of Barzdin and Kolmogorov contains a footnote saying that the details and sharpest results are due to Barzdin; the selected works of Kolmogorov contain a comment by Barzdin saying that the basic ideas are entirely due to Kolmogorov. So it’s not completely clear (to a random reader like me) which of the two authors first came up with the idea of expanders. One is tempted to suspect that it was Kolmogorov; if that is the case, he must have been past 60 when the idea came up (the paper appeared when he was 64), and that would be a very striking counterexample to the common perception that mathematics is a young person’s game… (Rigorous chronologists might like to point out that Selberg’s “3/16” theorem predates the Kolmogorov-Barzdin paper as well as Kazhdan’s Property (T) paper, so that an explicit construction of expanders was, in some sense, known before they were defined; unless, that is, it is discovered that Gauss, Euler or Kepler, in a particularly abstruse private letter or notebook, had anticipated the discovery…)
[Query: the proof of existence of expanders in the paper of Barzdin and Kolmogorov goes by computations with binomial coefficients and the like; now, I don’t want to be more of a probabilist than Kolmogorov, but does anyone know if someone has written somewhere a “clean” proof, using instead the basic properties of binomial random variables and standard analytic/probabilistic inequalities? I would be surprised if this wasn’t possible, and I’ll certainly look for something like that before teaching — probably next semester — about expanders.]
Final remark: for a fairly wide-ranging discussion of the mathematical contributions of Kolmogorov, there is a nice book, available either in French or in English translation (I was one of the translators, which was very interesting since I had no idea previously that Kolmogorov had been influential in mathematical logic or in algebraic topology…) The Barzdin-Kolmogorov paper is mentioned in the bibliography of one chapter, but is not discussed in detail.