The results which generate the most buzz in mathematics are usually those which can be expressed in an elementary (or at least pithy) way to a general mathematical audience. It is certainly true that such results may be profound (see Wiles, Andrew), but this is not always the case. An indirect consequence of this phenomenon is that there are mathematicians who are considered absolute titans of their own field, but who are less well known to the broader mathematical community. Fontaine, who died this year, might be considered one of these people. Fontaine will forever be associated with p-adic Hodge theory, a subject which is absolutely central to algebraic number theory today. While the initial seed of this subject came from Tate's paper on p-divisible groups, a huge part of its development was due to Fontaine over a period of 30 years (both in his solo papers and in his joint work). The usual audience for my posts is experts, but on the off chance that someone who knows less p-adic Hodge theory than me reads this post, let me give the briefest hint of an introduction to the subject.

For a smooth manifold M, de Rham's Theorem gives an isomorphism

$$H^n_{\mathrm{dR}}(M) \cong H^n_{\mathrm{sing}}(M, \mathbf{R}),$$
which can more naturally be phrased as saying that the natural pairing between (classes of) closed forms and (classes of) paths given by

$$\langle \omega, \gamma \rangle = \int_{\gamma} \omega$$
induces a perfect pairing on the corresponding (co-)homology groups. The class of paths in homology has a very natural integral basis coming from the paths themselves. For a general M, the de Rham cohomology has no such basis. On the other hand, if M is (say) the complex points of an algebraic variety over the rational numbers, then there are more algebraic ways to normalize the various flavours of differential forms. To take an example which doesn't quite fit into the world of compact manifolds, take X to be the projective line minus two points, so M is the complex plane minus the origin. There is a particularly nice closed form on this space, namely $dz/z$, which generates the holomorphic differentials. But now if one pairs the rational multiples of this class with the rational multiples of the loop around zero, the pairing does *not* land in the rational numbers, since

$$\oint_{|z|=1} \frac{dz}{z} = 2 \pi i \notin \mathbf{Q}.$$
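Since the claim is just a contour integral, one can sanity-check it numerically; the following is an illustrative sketch only (the discretization of the unit circle is my own, not anything from the post):

```python
# Numerically approximate the period ∮ dz/z around the unit circle,
# which should come out close to 2*pi*i: a purely imaginary number,
# in particular not rational.
import cmath

N = 100_000  # number of arcs in the discretized circle
total = 0j
for k in range(N):
    z0 = cmath.exp(2j * cmath.pi * k / N)
    z1 = cmath.exp(2j * cmath.pi * (k + 1) / N)
    total += (z1 - z0) / z0  # left-endpoint approximation of ∫ dz/z

print(total)  # approximately 6.28318...j, i.e. 2*pi*i
```

The real part of the approximation is a small discretization error that shrinks as N grows; the imaginary part converges to 2π.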
In particular, to compare de Rham cohomology over the rationals with the usual Betti cohomology over the rationals, one first has to tensor with a bigger ring such as $\mathbf{C}$, or at least with a ring big enough to see all the integrals which arise in this way. Such integrals are usually called periods, so in order to have a comparison theorem between de Rham cohomology and Betti cohomology over $\mathbf{Q}$, one first has to tensor with a ring of periods.

It is too simplistic to say that p-adic Hodge theory (at least rationally) is a p-adic version of this story, but that is not the worst cartoon picture to keep in your mind. Returning to the example above, note that the period $2 \pi i$ is a purely imaginary number. This is a reflection of the fact that some arithmetic information is still retained, namely, the action of complex conjugation on the complex points of a variety over the rationals is compatible (with a suitable twist) with the de Rham pairing. A fundamental point is that, in the local story, something similar occurs, where now the group generated by complex conjugation is replaced by the much bigger and more interesting group $\mathrm{Gal}(\overline{\mathbf{Q}}_p/\mathbf{Q}_p)$. Very (very) loosely, this is related to the fact that p-adic analysis behaves much better with respect to the Galois group: for example, the conjugate of an infinite (convergent) sum of p-adic numbers is the sum of the conjugates. In particular, there is a Galois action on the ring of all p-adic periods, so now there is a much richer group of symmetries acting on the entire picture. Moreover, the structure of the p-adic differentials can be related to what the variety X looks like when reduced modulo p, because smoothness in algebraic geometry can naturally be interpreted in terms of differential forms.

So now if one wants to make a p-adic comparison conjecture between (algebraic) de Rham cohomology on the one side and etale cohomology (the algebraic version of Betti cohomology) on the other, one (optimally) wants the comparison theorem to do two things: first, to respect (as much as possible) all the extra structures that exist in the p-adic world, in particular the action of the local Galois group on etale cohomology and the algebraic structures which exist on de Rham cohomology (the Hodge filtration and a Frobenius operator); and second, to involve tensoring with a ring of periods $B$ which is “as small as possible”.

Identifying the correct mechanisms to pass between de Rham cohomology and etale cohomology in a way that is compatible with all of this extra structure is very subtle, and one of the fundamental achievements of Fontaine was to identify the correct framework in which to phrase the optimal comparison (both in this and in many related contexts, such as crystalline cohomology). (Of course, his work was also instrumental in proving many of these comparison theorems.) I think it is fair to say that often the most profound contributions to mathematics come from revealing the underlying structure of what is going on, even if only conjecturally. (To take another random example, take Thurston's insight into the geometry of 3-manifolds.) Moreover, the reliance of modern arithmetic geometry on these tools cannot be overstated: studying global Galois representations without p-adic Hodge theory would be like studying abelian extensions of $\mathbf{Q}_p$ without using ramification groups.

Two further points I would be remiss in not mentioning. One sense in which the ring $B$ is “as small as possible” is the amazing conjecture of Fontaine-Mazur, which captures which *global* Galois representations should come from motives. Secondly, there is Fontaine's classification of *all* local Galois representations in terms of $(\varphi,\Gamma)$-modules, which is crucial even for understanding motivic Galois representations through p-adic deformations; the theory of fields of norms (with his student Wintenberger, who also sadly died recently); the proof that weak admissibility implies admissibility (with Colmez, another former student, who, surprisingly to me, wrote only this one joint paper with Fontaine); and the Fargues-Fontaine curve. (I guess this is more than two.)

Probably the first time I talked with Fontaine was at a conference in Brittany (Roscoff) in 2009. That was the first time I ever gave a talk on my work on even Galois representations and the Fontaine-Mazur conjecture, about which Fontaine had some very kind words to say. (One of the most rewarding parts of academia is getting the respect of people you admire.) I never got to know him too well, due (in equal parts) to my ignorance of the French language and p-adic Hodge theory. But he was always a regular presence at conferences at Luminy with his distinct sense of humour. Over a long career, his work continued to be original and deep. He will be greatly missed.

According to reports, there may be some inaccuracies on math genealogy. Matt reminded me the other day that Fontaine’s advisor was

~~Poitou~~ (Pisot, my error) rather than Serre, and now I am confused as to whether PC's official advisor was Fontaine or Coates or both.

Only Coates.

None of the above…. and Fontaine’s advisor was definitely not Poitou, but in some sense Pisot.

https://webusers.imj-prg.fr/~pierre.colmez/cv.html claims Coates+Fontaine…

That is the difference between “officiel” (official) and “officieux” (unofficial)… Not clear who the official advisor was: maybe Luna, maybe Fontaine…

Can you give an example of a theorem that generated big buzz but, according to you, is “undeep”? Can you also articulate what makes it “undeep”? Thanks

I am, of course, not so foolish as to answer the first question. The second question is certainly interesting but also quite complex, and starts to overlap with a number of other threads that are too complicated to fit in a comment box. I might come back to it in a later post, but not today.

I’ll take a stab, though this is just my opinion: the four-color theorem. What makes it not deep is that the theorem itself has not seen a wide variety of applications to other areas of math, and the techniques used to prove it are ad hoc and narrowly tailored to the theorem itself.

If you want some further reading on the subject of “deepness”, I recommend Gowers’s “Two Cultures” essay (https://www.dpmms.cam.ac.uk/~wtg10/2cultures.pdf) and a rebuttal from this very blog (https://galoisrepresentations.wordpress.com/2012/12/12/the-two-cultures-of-mathematics-a-rebuttal/).

Disclaimer: The views and opinions expressed in the comments are those of the commenters and do not necessarily reflect the official policy or position of this blog.

As someone working in graph theory…

Right, the four colour theorem is a cute puzzle, but at least the explanation we currently have is (somewhat intuitively, and not entirely accurately): a counterexample, if it exists, should not be incredibly large; and in fact there turns out not to exist any small counterexample, by computer check. If the second part failed, we’d probably just observe that, apart from graphs containing one of a short list of graphs requiring five colours, the theorem holds; I think this counterfactual world would have very few serious consequences, as far as we currently understand.

Broadly, the larger you try to make a minimal hard-to-colour graph, the more you expect (_not_ always, as there are counterexamples!) its average degree to go up – but planar graphs have average degree a little below six. This is only a heuristic justification for ‘no large counterexamples’; it can be made rigorous (with significant work) by discharging, which, by the way, has seen a lot of use in other graph theory problems. And this also explains why it’s much easier to deal with graphs on more complicated surfaces: the average degree can only get much above six for very small graphs, so (since you can embed big cliques) it’s rather trivial to argue that the hardest-to-colour graphs have to be either cliques or very close.
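For what it's worth, the average-degree bound invoked above follows from Euler's formula: a simple planar graph on $v \geq 3$ vertices has at most $3v - 6$ edges. A quick numerical sketch of the resulting bound (my own illustration, not the commenter's):

```python
# Euler's formula gives e <= 3v - 6 edges for a simple planar graph on
# v >= 3 vertices, so the average degree 2e/v is at most 6 - 12/v:
# strictly below six, approaching six only as v grows.
for v in [3, 4, 12, 100, 10**6]:
    max_avg_degree = 2 * (3 * v - 6) / v
    print(v, max_avg_degree)  # always strictly less than 6
```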

If one is looking for deep results in combinatorics, better people than me have already explained why the graph minors theorem should count. One I’d add is Szemerédi’s theorem, for which all the known proofs need to do something essentially non-trivial, either in a limit setting or a discrete setting, which, when developed, gives you higher-order Fourier analysis. (Szemerédi’s original proof, in this sense, is a bit of a hack, and one does not easily see where the magic comes in!)

$$33 = 8\,866\,128\,975\,287\,528^3 + (-8\,778\,405\,442\,862\,239)^3 + (-2\,736\,111\,468\,807\,040)^3.$$

The total computation used approximately 15 core-years over three weeks of real time.
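Finding the three values took the heavy computation; checking the identity is easy with exact integer arithmetic (the three values below are copied from the comment above):

```python
# Verify the sum-of-three-cubes representation of 33 exactly.
x = 8866128975287528
y = -8778405442862239
z = -2736111468807040
print(x**3 + y**3 + z**3)  # 33
```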

Pingback: Some Quick Items | Not Even Wrong