This is a blog post about the thesis of my student Zoey Guo, who is graduating this year. (For a blog post on the thesis of my other student graduating this year, see this.)

Let $\Gamma$ be a finite graph. Associated to $\Gamma$ is an adjacency matrix whose largest eigenvalue $\lambda$ is a totally real algebraic integer. Let us call $\Gamma$ *abelian* if the extension $\mathbf{Q}(\lambda)/\mathbf{Q}$ is abelian. For example, all the Dynkin diagrams are abelian.
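Concretely, one can check this numerically for the first infinite family. The sketch below (my own illustration, not from the thesis) computes the largest adjacency eigenvalue of the path graph $A_n$ and compares it with $2\cos(\pi/(n+1))$, a sum of two roots of unity and hence a generator of an abelian extension:

```python
import numpy as np

def path_adjacency(n):
    """Adjacency matrix of the Dynkin diagram A_n (a path on n vertices)."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1
    return A

n = 7
lam = max(np.linalg.eigvalsh(path_adjacency(n)))
# The largest eigenvalue is 2cos(pi/(n+1)) = zeta + 1/zeta for a root of
# unity zeta, so Q(lam) sits inside a cyclotomic field: A_n is abelian.
print(lam, 2 * np.cos(np.pi / (n + 1)))
```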

Several years ago, Scott Morrison and Noah Snyder (of Secret Blogging Seminar fame) asked the following question. Given a finite graph $\Gamma$, let $\Gamma_n$ be the graph obtained by adjoining a 2-valent tree of length $n$ to some fixed vertex of $\Gamma$. Can one then classify all $n$ for which $\Gamma_n$ is abelian? It turns out $\Gamma_n$ can be abelian for only finitely many $n$ unless the graphs happen to be one of the two infinite families of Dynkin diagrams. The argument was effective, although not effectively effective. (We did, however, prove a slightly weaker theorem, sufficient for the intended application, which was effectively effective.)
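To see a Dynkin diagram's abelianness from the cyclotomic angle, here is a sketch in sympy (my own illustration; the vertex numbering is an arbitrary choice): substituting $x = z + z^{-1}$ into the characteristic polynomial of $E_8$ and clearing denominators yields exactly the 60th cyclotomic polynomial, so every eigenvalue of $E_8$ has the form $\zeta + \zeta^{-1}$ for a 60th root of unity $\zeta$.

```python
import sympy as sp

z, x = sp.symbols('z x')

# Adjacency matrix of E_8, viewed as the tree with arms of lengths 1, 2, 4
# meeting at vertex 0.
edges = [(0, 1), (0, 2), (2, 3), (0, 4), (4, 5), (5, 6), (6, 7)]
A = sp.zeros(8, 8)
for i, j in edges:
    A[i, j] = A[j, i] = 1

p = A.charpoly(x).as_expr()  # characteristic polynomial, degree 8
# Substitute x = z + 1/z and clear denominators: the eigenvalues of E_8 are
# 2cos(m*pi/30) for m in {1,7,11,13,17,19,23,29}, so the result is Phi_60(z).
q = sp.expand(sp.cancel(z**8 * p.subs(x, z + 1/z)))
print(sp.factor(q))
```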

What Zoey does in her thesis is consider the following generalization. Let $\Gamma$ be a finite graph, and choose vertices $v_1, \ldots, v_k$ of $\Gamma$. Now adjoin two-valent graphs of varying lengths to the $v_i$. Call the resulting graph a *$\Gamma$-spider graph*. The main result of her thesis is the following:
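For spiders on a point (the case that recurs below), the construction is easy to code. A minimal numpy sketch (the function name `spider` is mine): the $(1,1,1)$-spider is the Dynkin diagram $D_4$, with largest eigenvalue $\sqrt{3} = 2\cos(\pi/6)$, and the $(1,2,4)$-spider is $E_8$.

```python
import numpy as np

def spider(*legs):
    """Adjacency matrix of a spider on a point: a central vertex (index 0)
    with 2-valent legs of the given lengths attached."""
    n = 1 + sum(legs)
    A = np.zeros((n, n))
    i = 1
    for L in legs:
        prev = 0                      # each leg starts at the centre
        for _ in range(L):
            A[prev, i] = A[i, prev] = 1
            prev = i
            i += 1
    return A

# The (1,1,1)-spider is D_4; its largest eigenvalue is sqrt(3) = 2cos(pi/6),
# again a cyclotomic (abelian) number.
lam = max(np.linalg.eigvalsh(spider(1, 1, 1)))
print(lam)
```

Similarly, `spider(1, 2, 4)` has largest eigenvalue $2\cos(\pi/30)$, as $E_8$ should.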

**Theorem:** For any $\Gamma$ and any fixed choice of vertices $v_1, \ldots, v_k$, only finitely many of the corresponding spiders are both abelian and not Dynkin diagrams.

What is more, the theorem is effectively effective. One key ingredient in the finiteness results for $\Gamma_n$ was the fact that, for the characteristic polynomials $p(x)$ of the graphs $\Gamma_n$, one can control the factors corresponding to Chebyshev polynomials. Or, if one writes $x = z + z^{-1}$, one can control the cyclotomic factors of $z^{\deg p} \, p(z + z^{-1})$. This follows from a theorem of Gross–Hironaka–McMullen, who exploit results by Mann on vanishing sums of roots of unity. However, for spiders with more than one leg, this breaks down completely. In fact, in many examples, the corresponding polynomials will be divisible by cyclotomic polynomials of arbitrarily large degree. Here's an example which Zoey pointed out to me. Consider the disconnected graph consisting of two copies of the Dynkin diagram $A_{n-1}$ and a third component $A_m$ for any integer $m$ (in fact, the construction is much more general, but we will be very explicit here). Since $2\cos(\pi/n)$ is an eigenvalue of each connected component corresponding to $A_{n-1}$, it will be an eigenvalue of multiplicity (at least) two of their union. Now join the three graphs by adding a vertex which is connected to the end of all three graphs; this will be the spider on a point corresponding to the triple $(n-1, n-1, m)$. By the interlacing lemma for graph eigenvalues, $2\cos(\pi/n)$ will be an eigenvalue of the resulting connected graph. So finding all the cyclotomic factors even for the explicit polynomials coming from spiders on a point seems like a real pain, and so one needs a new argument to deal with roots of unity.
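Zoey's observation is easy to confirm numerically. The following sketch (my own parameters: two copies of $A_4$, so the shared eigenvalue is $2\cos(\pi/5)$, joined through a new central vertex to a third leg of length $m$) checks that the eigenvalue survives the join for every $m$, exactly as interlacing predicts:

```python
import numpy as np

def spider(*legs):
    """Spider on a point: a central vertex with paths of the given lengths."""
    n = 1 + sum(legs)
    A = np.zeros((n, n))
    i = 1
    for L in legs:
        prev = 0
        for _ in range(L):
            A[prev, i] = A[i, prev] = 1
            prev = i
            i += 1
    return A

# 2cos(pi/5) is an eigenvalue of the path A_4; with two copies of A_4 it has
# multiplicity two in the disjoint union, so by interlacing it survives when
# the components are joined through a new central vertex -- for EVERY m.
phi = 2 * np.cos(np.pi / 5)
for m in range(1, 30):
    eigs = np.linalg.eigvalsh(spider(4, 4, m))
    assert np.min(np.abs(eigs - phi)) < 1e-9
print("2cos(pi/5) is an eigenvalue of every (4,4,m)-spider tested")
```

Taking larger and larger pairs of equal legs forces cyclotomic factors of larger and larger degree in the same way.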

The proof of the theorem comes down to two key steps. First, for any sequence of spiders, all but a uniformly bounded number of eigenvalues will lie in the interval $[-2,2]$, and all the eigenvalues will lie in some uniform interval $[-B,B]$. Of course, this means that the squares of the eigenvalues always lie in $[0,B^2]$ for some $B$, and mostly in $[0,4]$. Now take the largest such number, $x = \lambda^2$. We know that it is algebraic, and we are assuming that it is abelian. The key quantity to control turns out to be the normalized trace of $x$. Work of Cassels shows that, if $x$ is cyclotomic, one can classify all cyclotomic integers for which the normalized trace is small. What kind of upper bounds do we have? Well, if the degree of $x$ is very large, then the bulk of the contribution has to come from conjugates in $[0,4]$, or equivalently from eigenvalues in $[-2,2]$ (this is the reason why $[-2,2]$ occurs above: such eigenvalues come from Chebyshev factors). The worst case scenario is that all of the conjugates of $x$ are near $4$, which will give an estimate on the normalized trace of $x$ of $4$ plus a quantity that goes to zero with the degree. However, it is hard for an algebraic number to have too many of its conjugates near any particular integer (in this case, $4$). To exploit this, one can note, for example, that

$$x \le 3 - \log|4 - x| \quad \text{for } x \in [0,4).$$

If we denote the conjugates of $x$ by $x_1, \ldots, x_d$ and suppose that $x$ has degree $d$, then we deduce that

$$\frac{1}{d} \sum_{i=1}^{d} x_i \le 3 - \frac{1}{d} \log |N(x - 4)| + \frac{C}{d}.$$

The term $C/d$ (which is completely explicit) comes from the fact that finitely many of the $x_i$ are outside $[0,4]$, and so one cannot use the previous inequality for them. Let us consider the inequality. As long as $x \ne 4$, the logarithmic term is the logarithm of the absolute norm of the nonzero algebraic integer $x - 4$, and hence non-negative. So we get an upper bound for the normalized trace which is now $3$ plus some explicit error term which tends to zero as the degree goes to infinity. This type of idea was first used by Chris Smyth when studying the trace problem of Siegel (see this post). Now $3$ is still too big to apply the results of Cassels. So one has to also exploit that $x$ is not too close to $3$ either (this is where the inequality above is an equality). In fact, one ends up using not just $x - 4$ and $x - 3$ but several different algebraic integers of the same general shape, where one uses logarithms weighted by various real constants. The precise constants and algebraic integers were optimized by simulated annealing. In the end, one gets an upper bound small enough, with a very explicit error term, for the Cassels machine. This gives a complete answer as long as the degree of $x$ is big enough.
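As a sanity check, the basic inequality here can be verified numerically. The following sketch (my own illustration, not from the thesis) confirms that $f(x) = 3 - \log(4 - x) - x$ is non-negative on $[0,4)$ and vanishes only at $x = 3$, which is why $3$ appears as the bound and why conjugates clustering near $3$ become the next obstacle:

```python
import numpy as np

# f(x) = 3 - log(4 - x) - x is >= 0 on [0, 4) with a unique zero at x = 3:
# averaging x <= 3 - log(4 - x) over the conjugates of an algebraic integer
# bounds its normalized trace by 3 up to a non-negative norm contribution.
xs = np.linspace(0, 3.999, 100000)
f = 3 - np.log(4 - xs) - xs
print("minimum of f on [0,4):", f.min(), "attained near x =", xs[f.argmin()])
```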

If $x$ has small degree, then one is also in good shape: given a bound for the conjugates of $x$ and a bound for the degree, there are only finitely many such algebraic integers, which is enough to prove the theorem. However, this last step, whilst effective, is not at all effectively effective. So another argument is required to make everything work in practice. Note that the degree of the polynomial defining $\lambda$ certainly goes to infinity, but it may be reducible, and in particular divisible by many cyclotomic (Chebyshev) factors all of whose roots are in $[-2,2]$. Let's explain how to overcome this issue in the case of spiders coming from the trivial graph (which is typical of the general case). If you take the spider whose legs all have length $n$, then, for $n$ big enough, one finds that the largest eigenvalue converges to an explicit algebraic number. (The limit of the largest eigenvalues of a sequence of spiders tending to an infinite spider will always be an algebraic number.) Importantly, this convergence is exponential. However, it's easy to see that any algebraic integer all of whose conjugates are uniformly bounded cannot be extremely close to any fixed algebraic number, and it is easy to give effective bounds to this effect. So one wins in high degree by Cassels-type arguments and in low degree by the fact that the eigenvalues converge rapidly to computable algebraic numbers. One annoying issue is that the convergence requires all of the leg lengths to tend to infinity, so one has to inductively reduce to the case of spiders with some finite list of possible leg lengths, which entails a certain amount of combinatorial explosion. However, as a complete worked example, one has the following:
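To see the exponential convergence in action, here is a numerical sketch (my own example, assuming the simplest case of a spider on a point with three legs of equal length $n$). A standard decay-rate computation for the eigenvector on the infinite three-legged spider gives its norm as $3/\sqrt{2}$, and the largest eigenvalues of the finite spiders increase to it with geometrically shrinking error:

```python
import numpy as np

def spider(*legs):
    """Spider on a point: a central vertex with paths of the given lengths."""
    n = 1 + sum(legs)
    A = np.zeros((n, n))
    i = 1
    for L in legs:
        prev = 0
        for _ in range(L):
            A[prev, i] = A[i, prev] = 1
            prev = i
            i += 1
    return A

# Largest eigenvalue of the (n,n,n)-spider versus the norm 3/sqrt(2) of the
# infinite spider with three infinite legs; the error decays geometrically.
limit = 3 / np.sqrt(2)
errs = [limit - max(np.linalg.eigvalsh(spider(n, n, n))) for n in (4, 8, 12)]
print(errs)
```

Note that $(3/\sqrt{2})^2 = 9/2$ generates the abelian field $\mathbf{Q}(\sqrt{2})$, consistent with the parenthetical remark that such limits are algebraic.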

**Theorem:** The complete list of abelian spiders on a point is given by:

- The Dynkin diagrams, whose largest eigenvalues are of the form $2\cos(\pi/n)$.
- The -spiders and with

- The -spiders and with and

- The -spiders and with

Now all of this is quite amusing, but you may complain that it doesn't really have any practical application. However, as it happens, Scott Morrison asked me whether it was possible to find all abelian $\Gamma$-spiders for some very explicit graph $\Gamma$ (omitted here), in order to further the classification of finite index subfactors, because all the current non-number-theoretic obstructions could not rule out this family of examples as coming from subfactors. Zoey's method could be applied to show that every spider in the corresponding family was not abelian. So Zoey's results have *already* been used outside her field (see forthcoming work of Morrison and his collaborators) to complete the classification of subfactors in a certain range of index. All this, of course, while having the thesis with the best title.

Happily, we’ve found yet another application of Zoey’s results in the classification of subfactors, and indeed our almost finished paper on the classification up to index will make two independent, and rather different uses, of this work.

The first is the example you talk about above, which Zoey used as an illustration of the method.

The second is even more exciting, and we haven't yet had a chance to see whether this method could help push the classification even further.

The basic idea is that we have a family of graphs as follows: fix some finite graph $\Gamma$, mark one vertex $x$, and name all the vertices at the maximal radius from $x$ collectively $Y$. We now consider adding a chain of $n$ edges to $x$, and gluing some arbitrary finite graph onto $Y$.

For the we’re interested in (and possibly ‘often’) we can show by the theory of ‘connections’ that any graph in this family which is the principal graph of a subfactor must have graph norm satisfying some particular polynomial depending just on n (and not on what we glue to Y).

Now we don’t know that this polynomial has anything to do with any particular finite graph (in fact, it’s a multiple of the minimal polynomial for the norm of a certain infinite graph) so we have to work a bit harder in places to apply Zoey’s method, but happily it all works out, and one can show that the graph norm cannot possibly be cyclotomic, thereby ruling out all graphs in the family as principal graphs of a subfactor.

Sounds great! Make sure to post a link here when the paper is done.

In the para. beginning “If has small degree, then one is also in good shape … “, I think “may be irreducible” should read “may be reducible”.

Corrected!