The ABC conjecture has (still) not been proved.

Five years ago, Cathy O’Neil laid out a perfectly cogent case for why the (at that point recent) claims by Shinichi Mochizuki should not (yet) be regarded as constituting a proof of the ABC conjecture. I have nothing further to add on the sociological aspects of mathematics discussed in that post, but I just wanted to report on how the situation looks to professional number theorists today. The answer? It is a complete disaster.
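For readers outside number theory, it may help to recall the statement at stake. The conjecture (due to Masser and Oesterlé) says, roughly, that a + b = c cannot have many solutions in which abc is divisible only by small primes to high powers:

```latex
% The abc conjecture (Masser--Oesterl\'e), standard formulation:
% for every \epsilon > 0, there are only finitely many triples (a, b, c)
% of coprime positive integers with a + b = c satisfying
\[
  c > \operatorname{rad}(abc)^{1+\epsilon},
\]
% where \operatorname{rad}(n) denotes the product of the distinct
% primes dividing n.
```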

This post is not about making epistemological claims about the truth or otherwise of Mochizuki’s arguments. To take an extreme example, if Mochizuki had carved his argument on slate in Linear A and then dropped it into the Mariana Trench, then there would be little doubt that asking about the veracity of the argument would be beside the point. The reality, however, is that this description is not so far from the truth.

Each time I hear of an analysis of Mochizuki’s papers by an expert (off the record), the report is disturbingly familiar: vast fields of trivialities followed by an enormous cliff of unjustified conclusions. The defense of Mochizuki usually rests on the following point: the mathematics coming out of the Grothendieck school followed a similar pattern, and it has proved to be a cornerstone of modern mathematics. There is an anecdote that goes as follows:

Once Grothendieck said that there were two ways of cracking a nut. One way was to crack it in one stroke with a nutcracker. The other was to soak it in a large amount of water, to soak, and soak, and soak, until it cracked by itself. Grothendieck’s mathematics is of the latter kind.

While rhetorically expedient, the comparison between Mochizuki and Grothendieck is a poor one. Yes, the Grothendieck revolution upended mathematics during the 1960’s “from the ground up.” But the ideas coming out of IHES immediately spread around the world, to the seminars of Paris, Princeton, Moscow, Harvard/MIT, Bonn, the Netherlands, etc. Ultimately, the success of the Grothendieck school is not measured in the theorems coming out of IHES in the ’60s but in how the ideas completely changed how everyone in the subject (and surrounding subjects) thought about algebraic geometry.

This is not a complaint about idiosyncrasy or about failing to play by the rules of the “system.” Perelman more directly repudiated the conventions of academia by simply posting his papers to the arXiv and then walking away. (**Edit:** Perelman **did** go on an extensive lecture tour and made himself available to other experts, although he never submitted his papers.) But in the end, in mathematics, ideas always win. People were able to read Perelman’s papers and find that the ideas were all there (and multiple groups of people released complete accounts of all the details, which were published within five years). Usually when there is a breakthrough in mathematics, there is an explosion of new activity as other mathematicians exploit the new ideas to prove new theorems, usually in directions not anticipated by the original discoverer(s). This has manifestly not been the case for ABC, and this fact alone is one of the most compelling reasons why people are suspicious.

The fact that these papers have apparently now been accepted by the Publications of the RIMS (a journal where Mochizuki himself is the managing editor; not necessarily a red flag in itself, but poor optics nonetheless) really doesn’t change the situation as far as giving anyone a reason to accept the proof. The value of the referee process is not merely in providing reasonable confidence in the correctness of a paper (not absolute certainty; errors do occur in published papers, usually of a minor sort that can be instantly fixed by any knowledgeable reader, sometimes requiring an erratum, and more rarely a retraction). Just as importantly, it forces the author(s) to bring the clarity of the writing up to a reasonable standard, so that professionals can read it without needing the same amount of time the referees required. This latter aspect has been a complete failure, calling into question both the quality of the refereeing that was done and the judgement of the editorial board at PRIMS in permitting papers in such an unacceptable and widely recognized state of opacity to be published. We now have the ridiculous situation where ABC is a theorem in Kyoto but a conjecture everywhere else. (**edit:** a Japanese reader has clarified to me that the newspaper articles do not definitively say that the papers have been accepted; rather, the wording is something along the lines of “it is planned that PRIMS will accept the paper,” whatever that means. This makes no change to the substance of this post, except that, while there is still a chance the papers will not be accepted in their current form, I retract my criticism of the PRIMS editorial board.)

So why has this state persisted so long? I think I can identify three basic reasons. The first is that mathematicians are often very careful (cue the joke about a sheep *at least one side of which is black*). Mathematicians are very loath to claim that there is a problem with Mochizuki’s argument because they can’t point to any definitive error. So they tend to be very circumspect (reasonably enough) about making any claims to the contrary. We are usually trained as mathematicians to consider an inability to understand an argument as a failure on our part. Second, whenever extraordinary claims are made in mathematics, the initial reaction takes into account the past work of the author. In this case, Shinichi Mochizuki was someone who commanded significant respect and was considered by many who knew him to be very smart. It’s true (as in the recent case of Yitang Zhang) that an unknown person can claim to have proved an important result and be taken seriously, but if a similarly obscure mathematician had released 1000 pages of mathematics written in the style of Mochizuki’s papers, they would have been immediately dismissed. Finally, in contrast to the first two points, there are people willing to come out publicly and proclaim that all is well, and that the doubters just haven’t put in the necessary work to understand the foundations of inter-universal geometry. I’m not interested in speculating about the reasons they might be doing so. But the idea that several hundred hours at least would be required even to scratch the beginnings of the theory is either utter rubbish, or so far beyond the usual experience of how things work that it would be unique not only in mathematics, but in all of science itself.

So where to from here? There are a number of possibilities. One is that someone who examines the papers in depth is able to grasp a key idea, come up with a major simplification, and transform the subject by making it accessible. This was the dream scenario after the release of the paper, but it becomes less and less likely by the day (and year). But it is still possible that this could happen. The flip side of this is that someone could find a serious error, which would also resolve the situation in the opposite way. A third possibility is that we have (roughly) the status quo: no *coup de grâce* is found to kill off the approach, but at the same time the consensus remains that people can’t understand the key ideas. (I should say that whether the papers are accepted or not in a journal is pretty much irrelevant here; it’s not good enough for people to attest that they have read the argument and it is fine, someone has to be able to explain it.) In this case, the mathematical community moves on and then, whether it be a year, a decade, or a century, when someone ultimately does prove ABC, one can go back and compare to see if (in the end) the ideas were really there after all.

Thanks for posting this, Frank.

Thanks for posting this, Frank! A fourth possibility is that the fairly strong form of ABC that Mochizuki claims to have proved turns out not to be true…

I haven’t been following this closely — can one extract from Mochizuki’s arguments an explicit effective inequality (not an asymptotic) that is falsifiable?

I believe that this paper (https://arxiv.org/pdf/1601.03572.pdf) by Vesselin Dimitrov shows that one can formally extract such a quantity, although completely explicit bounds are not found in that paper. I asked about this over at Peter Woit’s blog, but no one followed up. I find it highly suspicious that none of the experts claiming a thorough understanding of IUT has (it seems) even commented on Dimitrov’s work or gone on to actually derive explicit bounds for the Mordell conjecture; something that would have a good deal of interest and provide a much needed cross-check on the validity of Mochizuki’s proof (this is all the more important since Dimitrov earlier found an incorrect inequality in the first version of Mochizuki’s papers).

In IUTT-IV it is actually stated [Remark 1.10.1] that the original motivation for IUT was to construct a geometrical framework in which to carry out computations that Mochizuki already knew about in 2000, so one wonders why explicit computations suddenly seem to have fallen out of interest with the experts on IUT now. Earlier it was claimed that the Belyi map reduction step made the result non-effective, but if Dimitrov’s work is correct, we now have the case of a claimed proof of Szpiro’s conjecture in which no explicit bound is obtained and no explanation is given of where in the proof this non-effective step occurs.

If the proof is correct but non-effective, it would be expected that the experts who claim to understand it could point to a definite issue that prevents a computation of explicit bounds (as in the case of the infamous Siegel zeros of analytic number theory).
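To make concrete what a falsifiable explicit inequality would look like: an effective abc/Szpiro bound with explicit constants could be checked numerically against tables of known high-quality triples. The sketch below is my own illustration (not code from any of the papers under discussion); it computes the standard “quality” q = log c / log rad(abc) of an abc triple, the quantity any explicit bound would have to control.

```python
from math import gcd, log

def radical(n: int) -> int:
    """Product of the distinct prime factors of n (simple trial division)."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        r *= n
    return r

def quality(a: int, b: int, c: int) -> float:
    """Quality q = log c / log rad(abc) of an abc triple a + b = c, gcd(a, b) = 1.

    Since a, b, c are pairwise coprime, rad(abc) = rad(a) * rad(b) * rad(c).
    The abc conjecture predicts q < 1 + epsilon for all but finitely many triples.
    """
    assert a + b == c and gcd(a, b) == 1
    return log(c) / log(radical(a) * radical(b) * radical(c))

# Reyssat's famous high-quality triple: 2 + 3^10 * 109 = 23^5
print(quality(2, 6436341, 6436343))  # ≈ 1.6299
```

An explicit bound of the form q ≤ 1 + ε for all c above an explicitly computable threshold would be directly testable against such triples; this is the kind of concrete output that, as noted above, has not been extracted from the claimed proof.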

Bravo!

Great post! It does indeed do an excellent job of summarizing the situation from the perspective of professional number theorists.

Thank you very much for posting this!

Well done, Frank!

I couldn’t agree more.

Pingback: Latest on abc | Not Even Wrong

Amen.

Yamashita posted a survey back in August (http://www.kurims.kyoto-u.ac.jp/~gokun/DOCUMENTS/abc2017Dec18.pdf). I have been wondering: did it affect the state of abc at all? Someone asked this at MO as well, but unfortunately that isn’t quite a suitable venue…

Thanks for this. I do not have the expertise to have an informed first-hand opinion on Mochizuki’s work, but comparing this story with the work of Perelman and Yitang Zhang that you mentioned, which I am much more familiar with, one striking difference to me has been the presence of short “proof of concept” statements in the latter but not in the former, by which I mean ways in which the methods in the papers in question can be used relatively quickly to obtain new non-trivial results of interest (or even a new proof of an existing non-trivial result) in an existing field. In the case of Perelman’s work, already by the fifth page of the first paper Perelman had a novel interpretation of Ricci flow as a gradient flow which looked very promising, and by the seventh page he had used this interpretation to establish a “no breathers” theorem for the Ricci flow that, while far short of what was needed to finish off the Poincaré conjecture, was already a new and interesting result, and I think was one of the reasons why experts in the field were immediately convinced that there was lots of good stuff in these papers. Yitang Zhang’s 54-page paper spends more time on material that is standard to the experts (in particular, following the tradition common in analytic number theory of putting all the routine lemmas needed later in the paper in a rather lengthy but straightforward early section), but about six pages after all the lemmas are presented, Yitang has made a non-trivial observation, which is that bounded gaps between primes would follow if one could make any improvement to the Bombieri-Vinogradov theorem for smooth moduli. (This particular observation was also previously made independently by Motohashi and Pintz, though not quite in a form that was amenable to Yitang’s arguments in the remaining 30 pages of the paper.)
This is not the deepest part of Yitang’s paper, but it definitely reduces the problem to a more tractable-looking one, in contrast to the countless papers attacking some major problem such as the Riemann hypothesis in which one keeps on transforming the problem to one that becomes more and more difficult looking, until a miracle (i.e. error) occurs to dramatically simplify the problem.

From what I have read and heard, I gather that currently, the shortest “proof of concept” of a non-trivial result in an existing (i.e. non-IUTT) field in Mochizuki’s work is the 300+ page argument needed to establish the abc conjecture. It seems to me that having a shorter proof of concept (e.g. <100 pages) would help dispel scepticism about the argument. It seems bizarre to me that there would be an entire self-contained theory whose only external application is to prove the abc conjecture after 300+ pages of set up, with no smaller fragment of this setup having any non-trivial external consequence whatsoever.

Thank you so much for weighing in so clearly and unambiguously on the situation. The mathematical community needs to speak up more clearly about it.

(As an aside to those who leave comments, possibly relevant only for this post: the comments are moderated, and generally comments from cranks will be ignored. Similarly, as a general rule, I do not allow anonymous posts.)

Pingback: El estado actual de la prueba de Mochizuki de la conjetura abc | Ciencia | La Ciencia de la Mula Francis

Pingback: ABC 猜想仍然没有被证明-时讯快报

Pingback: New top story on Hacker News: The ABC conjecture has still not been proved – ÇlusterAssets Inc.,

This is an excellent post. Terry’s comment (from the outside of number theory) is particularly telling. For those of us inside of it, the situation is infuriating. Shortly after Faltings announced his proof of Tate’s isogeny conjecture and the Mordell conjecture, he lectured on it at the Arbeitstagung, explaining the new tools he had introduced. Everyone in the audience who had thought about the problem was immediately convinced. Instead of producing 300+ pages of manuscript, Mochizuki needs to give one or two lectures (in Bonn, or Paris, or Boston, or..) clearly explaining the new ideas in his argument and showing how they lead to a proof of ABC. This shouldn’t be difficult — I have no idea why he refuses to do so.

Thanks for the wonderful post! I agree with everything that was said.

One small thing I would like to add is that most accounts indicate that no experts have been able to point to a place where the proof would fail. This is in fact not the case; since shortly after the papers came out, I have been pointing out that I am entirely unable to follow the logic after Figure 3.8 in the proof of Corollary 3.12 of Inter-universal Teichmüller theory part III: “If one interprets the above discussion in terms of the notation introduced in the statement of Corollary 3.12, one concludes [the main inequality].” Note that this is in fact the *only* proof in parts II and III that is longer than a few lines; the others essentially say “This follows from the definitions.” Those proofs, by the way, are completely sound; very little seems to happen in those two papers (to me). Since then, I have kept asking other experts about this step, and so far did not get any helpful explanation. In fact, over the years more people came to the same conclusion; from everybody outside the immediate vicinity of Mochizuki, I heard that they did not understand that step either. The ones who do claim to understand the proof are unwilling to acknowledge that more must be said there; in particular, no more details are given in any survey, including Yamashita’s, or any lectures given on the subject (as far as they are publicly documented). [I did hear that in fact all of parts II and III should be regarded as an explanation of this step, and so if I am unable to follow it, I should read them more carefully… For this reason I waited for several years for someone to give a better (or any) explanation before speaking out publicly.]

One final point: I get very annoyed by all references to computer-verification (that came up not on this blog, but elsewhere on the internet in discussions of Mochizuki’s work). The computer will not be able to make sense of this step either. The comparison to the Kepler conjecture, say, is entirely misguided: In that case, the general strategy was clear, but it was unclear whether every single case had been taken care of. Here, there is no case at all, just the claim “And now the result follows”.

I agree entirely with your remark on computer verification, which is completely irrelevant to the question at hand. Any proof that can be spelled out at a level of detail sufficient to be analyzed by a computer is necessarily going to consist entirely of steps that are each completely comprehensible to a mathematician. Indeed, one cannot even begin to undertake such a verifiction until the details of the proof are thoroughly understood by humans.

The point of computer verification is to catch missing steps or special cases that might have been overlooked in a long and complicated proof. In the case of the Kepler conjecture, there was never a question of experts understanding any particular detail, the concern was simply that there were so many details to check. This bears no resemblance to the situation with IUT.

I love the word ‘verifiction’!

“I get very annoyed…” I agree. It’s an insult to computer-generated/verified proofs, right? 😉

PS, thank you so much for writing in such specificity about your experience. In the spirit of stating things in public that have been known among some experts in arithmetic geometry for quite a while, I’d like to now share something in public (I think for the first time) concerning Corollary 3.12 in IUT3 that I have been bringing to the attention of many mathematicians in private during the past 2 years. Soon after I posted my essay on Cathy O’Neil’s blog summarizing my impressions about the Oxford IUT workshop in December 2015, I received unsolicited emails from people whom I knew in quite distant parts of the world (one in Europe, one in Asia, and one in North America). Each of them told me that they had worked through the IUT papers on their own and were able to more-or-less understand things up to a specific proof where they had become rather stumped. For each of these people, the proof that had stumped them was for 3.12 in IUT3. It was striking to get three independent unsolicited emails in a matter of days which all zeroed in on that same proof as a point of confusion.

A focus of concern on the proof of 3.12 in IUT3 never came up in discussions during the Oxford IUT workshop; my first awareness about it was from those three unsolicited emails. Since that time, the number of people whom I know that have invested tremendous effort reading the IUT papers (some giving talks at IUT workshops) and became stumped by that proof has grown further. (I will not reveal the identities of any of these people, since they communicated their concerns to me in private. It is also entirely unnecessary, since PS’s comment addresses the matter quite well.)

One reason that I have never before discussed this experience in “public” (= Internet posting) is that I assumed the referee process would ultimately lead to a revision that completely clarified the proof of 3.12 in IUT3 and thereby made the matter disappear (so the earlier concerns would be rendered moot). I know from much experience as an editor at various journals that it is very common for papers submitted to math journals to have errors that are caught during the refereeing process and then fixed by the author(s) before acceptance. Thus, there is generally no purpose in publicizing such matters; we are all human, after all, and (as Frank notes) part of the referee’s job is to make a reasonable attempt to ferret out mistakes.

I was therefore very surprised when I heard recently (incorrectly, as it turns out) that the IUT papers had been accepted, since the public version of IUT3 still did not have a revision to the proof of 3.12 that cleared up the matter (as I immediately confirmed with several who have invested a lot of time in the IUT papers). Of course, referees are human too and may sometimes overlook something; this is why authors sometimes publish an erratum afterwards, and it is also why it is imperative for papers to be written with a degree of clarity about the techniques so that others can explore the ideas further. Clarity of communication is an essential part of progress in mathematical research. I sincerely hope that wider awareness of the genuine concern about the proof of 3.12 in IUT3 will finally lead to greater collective understanding about what is going on there.

*Thank you*, BCnrd and PS. Finally, people willing to discuss specifics! I pretty much retract all previous comments on this topic. And will point people here.

Pingback: Friday links: holiday caRd, final exam via Twitter, and more | Dynamic Ecology

Could a specialist indicate for us the importance of the missing Corollary? The FLT proof as initially developed in public had a gap: a missing lemma concerning the finiteness of the Selmer group involved, which at least temporarily required abandoning the use of Euler systems and finding a substitute. Assume for discussion that the necessary Corollary is not available. Is what remains more like a damaged proof or no proof at all?

I’m not entirely sure how to distinguish epistemologically between a damaged proof and no proof at all. But to the extent that I have a sense of what you are trying to ask: without any proof of the relevant statement, no, there is no proof at all. It is a critical step.

I shared this post on Twitter and brought attention to the comments mentioning Corollary 3.12. Twitter user @math_jin, who makes a lot of tweets related to Mochizuki’s work, replied with, “The answer to that point is Remark 3.12.2 (ii) of newest version of IUT-III”. This newest version was updated on December 14, 2017.

My knowledge about Mochizuki’s work on the abc conjecture is extremely limited (I’m still currently reading Mochizuki’s survey “The Mathematics of Mutually Alien Copies” in hopes of understanding some of the concepts), but maybe others might find these remarks useful, or otherwise comment on it.

This is the type of clarification which has the chance to move the process several epsilons forward, for some value of epsilon.

I am very glad that someone of note has put in writing, and rather articulately at that, what many have long said or suspected.

The comment section of this post has now served its purpose. I believe we all share the wish that the situation will be clarified one way or the other.