The LMFDB has gone live!

I previously expressed on this blog a somewhat muted opinion about certain aspects of the website’s functionality, and it seems that my complaints have mainly been addressed in the latest version. On the other hand, this live version has been released with a certain amount of fanfare, not to mention press releases. (Is AIM involved? Yes it is.) First of all, there’s something about press releases in science (or mathematics) which I find deeply troubling. What is the point of such a press release? To drum up future funding? To generate mainstream press articles and thus communicate a sense of wonder and amazement to the public, who rarely get to glimpse the excitement of modern mathematics in action? Hopefully it’s the latter which is true. But if so, does it really require that we stretch the truth about what we are doing in order to generate such excitement? (To see that the answer to the latter question is no, one need only look back on Scientific American articles on mathematics and physics from the ’60s and ’70s.) To be fair, I should also link to a more modest description of the project here. But then again, I dare you to click on the following website.

But back to the topic at hand. I have asked the opinion of at least three mathematicians about either the LMFDB or on computational aspects of the Langlands Programme more generally. For reasons of anonymity, I will not mention their names here. The first comment addresses a widely circulated quote from John Voight in the press release: “*Our project is akin to the first periodic table of elements*.” One source offered the following take on this (literally copied from my email and modified only by adjusting the spelling of the Langlands Programme to the preferred Canadian spelling):

*The periodic table was a fantastic synthesis of decades, maybe even centuries, of empirical investigation, that led to profound new theoretical insights into chemistry and the physical world. The LMFDB is a rag-tag assortment of empirical facts in the Langlands Programme which lag far behind the theoretical advances of the past decade.*

A second source, speaking more broadly on the question of computations in the Langlands programme, wondered if there had been any fundamental discoveries made via numerical computations since the BSD conjecture. While these are certainly pretty strong statements, I feel comfortable agreeing at least that, in our field, the theory is way in advance of the computations. We can prove potential modularity theorems for self-dual representations of any dimension, and yet it’s barely possible to compute even weight zero forms for U(3) (David Loeffler did some computations once). In part, I think this actually *provides* justification for putting effort into understanding how to actually compute these objects. After all, a really nice aspect of number theory is having a collection of beautiful concrete examples, from X_0(11) to Q(sqrt(-23)) (take that, Geometric Langlands!). Yet these statements also provide an alternative framework with which to view the LMFDB, which is less glorious than the press releases suggest. To continue with the periodic table analogy, the LMFDB is less a construction of a periodic table than a collection of sample elements from the periodic table neatly contained in small glass vials. Let me make the following point clear: some of the samples took a great deal of effort and ingenuity to extract. But it’s not entirely obvious to me how easy it will be to actually use the data to do fundamental new mathematics. This was the main point of my third commentator: the problem is that, if one actually wants to undertake a fairly serious computation, the complete set of data one has to compute will either not be available on the LMFDB (however big the LMFDB grows), or not available in a format which is at all practical to extract for actually doing computations. So, in the end, if you need some serious data, you are probably going to have to compute it again yourself. In some cases you may be able to do this, and in other cases not.

This leads to probably the most frustrating thing about the website from my perspective. I thought quite a bit about what the most useful format for some of the data might be (for me). Here is a typical thing that I might want to do: find a Hecke eigenform of some particular weight and level, and then compute information about the mod-p representations for various primes p (as well as congruences between forms). This is a little tricky to do at the moment, in part because the data for the coefficients is given in terms of a primitive element in the Hecke field (often the eigenvalue of T_2), and then the order generated by this element (almost always) has huge index (divisible by many small primes) inside the ring of integers, which makes computing the reductions slightly painful. There are certainly ways to address this specific problem and incorporate such functionality into the website. But it ultimately would be silly to customize the LMFDB for my particular needs. Instead, in the end, what I think would actually be most useful would be if the webpages on modular forms included enough pari/gp/sage/magma code to allow me to go away and compute the q-expansion myself. This is why I think that, even within computational number theory, the impact of programs like pari/sage/magma will be far greater than that of the LMFDB.
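To illustrate what I mean, here is a toy sketch of my own (not anything the LMFDB provides, and the eigenform data below is made up): if the eigenvalues a_n are stored as polynomials in a primitive element a with minimal polynomial f, then for a prime p *not* dividing the index [O_K : Z[a]], the mod-p reductions with residue field F_p correspond to the roots of f mod p, and reducing is just evaluation. The painful cases mentioned above are exactly the primes dividing the index, where this naive approach breaks down.

```python
def poly_eval_mod(coeffs, x, p):
    """Evaluate a polynomial (list of ints, constant term first) at x, mod p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def roots_mod_p(minpoly, p):
    """Brute-force roots of the minimal polynomial of a modulo p."""
    return [x for x in range(p) if poly_eval_mod(minpoly, x, p) == 0]

def reduce_eigenvalues(minpoly, a_n, p):
    """For each root r of minpoly mod p (i.e. a prime of Z[a] above p with
    residue field F_p), reduce the eigenvalues a_n, each given as a
    polynomial in the primitive element a.  Caveat: at primes dividing the
    index [O_K : Z[a]] this can miss or conflate reductions."""
    return {r: [poly_eval_mod(an, r, p) for an in a_n]
            for r in roots_mod_p(minpoly, p)}

# Toy data (not a real eigenform): Hecke field Q(sqrt(5)), with a
# satisfying x^2 - x - 1, and a_2 = a, a_3 = 2a - 1 as polynomials in a.
print(reduce_eigenvalues([-1, -1, 1], [[0, 1], [-1, 2]], 11))
# -> {4: [4, 7], 8: [8, 4]}
```

In sage or magma one would of course work with the maximal order directly; the point of the sketch is only that the data format (polynomials in a named primitive element) determines how easy this is.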

If I think of the three most serious computations I have been involved with recently, they include partial weight one Hilbert modular forms, non-liftable classes of low weight Siegel modular forms, and Artin L-functions of S_5 extensions. The first required customized programs in magma (some written in part by John Voight), and the tables of higher weight HMFs in the LMFDB would not have been of any use. The computations of low weight Siegel modular forms in finite characteristic, which are absolutely terrifying, require completely custom computations (which, by the way, are completely beyond my capability of doing and involve getting Cris Poor and David Yuen to do them). Finally, the computation which would be the closest to an off-the-shelf computation was proving that the Artin L-function of some extension provably had no zeros in a certain region. However, the LMFDB tables only go up to degree four representations. Fortunately I knew that Andy Booker was an expert in this sort of thing and he did it for me. (Then again, even if the data in this case were included, it’s totally unclear to me the extent to which the data in the tables has been “proved.”) And my point here is not to complain about the lack of certain computations being in the LMFDB, just to caution against the idea that its existence will be particularly useful for future computations.

The bigger point, however, is surely this: Is hype in science or mathematics a necessary evil to generate public enthusiasm, or is it an ultimately corrosive influence?

Some of the claims being made for LMFDB are perhaps somewhat overdone, but perhaps one should see that in context: there’s a bit of a “cultural bias” among parts of the wider number theory community towards regarding original computational work as being less valuable than original theoretical work. If you hang around at computational number theory conferences, you’ll hear stories about people working for months or years computing examples or testing conjectures at the request of other mathematicians, and this eventually being rewarded with no more than a footnote in the resulting papers. I’m not personally involved in the LMFDB project so this is no more than a guess, but perhaps the hype around LMFDB is partially intended to redress the balance, by giving some publicity to computational number theory in its own right.

I am on some committee which oversees the large EPSRC (= UK science funding agency) grant which Cremona et al obtained to run (amongst other things) the LMFDB. I also have some mathematical data about weight one modular forms (currently available at http://people.maths.ox.ac.uk/lauder/weight1/) which one could argue would be a really good fit for the LMFDB. Let me make some comments which are not addressed by Frank’s post and also some comments which reinforce some of the things he’s said. After that I’ll explain why we have a separate website for our weight 1 modular form data.

First let me talk about LMFDB, and the first thing I want to say is that whether or not it’s doing the job Frank wants it to do, at the end of the day they have some EPSRC money to run this database and they have done it. There is information up there. Frank raises the issue as to whether the information is easily accessible in sage/magma/pari, but there might be a misunderstanding here — the LMFDB people have gone out of their way to make every object in their database (as far as I know) easily downloadable into sage or magma or pari. If you click on an object in the database then in the top right you should see a link “Show commands for: sagemath, pari/gp, magma”, and this should enable the end user to download the object into any of these maths packages. On the other hand Frank might be more concerned with whether entire lists of data can be simultaneously downloaded, and I don’t know whether this sort of thing is possible.

Next let me make the simple point that just because Frank is concerned that some calculations that he wants to do can’t be done easily via the site or via downloading (e.g. “list all eigenforms of level <= 1000 and congruent mod 2 to this one”), this doesn’t mean that other people will have the same problems — maybe some other people only want to do things that can be done already. I don’t really know. I’ve not used the site myself yet for anything serious. Frank’s point remains though — if you want to look through a mass of data for some phenomenon then surely it is better to have all that data on your own computer than to have it all on some remote database (so you’ll be restricted by the search facilities that the database has to offer rather than the typically more powerful options available to you if you have it all in a maths package locally).

Finally let me explain that my impression is that LMFDB is not as easily administered as one might imagine. This is a flaw in the funding model and there’s very little that one can do about it. Behind the scenes of LMFDB is a database managing the queries and it’s running some standard database software. This raises three problems, which I’ll go through individually.

The first is that any new object that goes into the database needs a unique name, so there is a general problem of how to name objects, and this is harder than it might sound. For example, how would you even find a good naming system for Dirichlet characters? You want something fairly concise, but a Dirichlet character of level N can either be named by a big list of where it sends every i with 1<=i<=N (this does not scale well) or by where it sends only some of these i (this scales well, but now you have the issue of figuring out the name of a Dirichlet character that you have, because you don’t know which i they chose).
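To make the trade-off concrete, here is a toy sketch of the two schemes (illustrative only — this is not the naming system the LMFDB actually uses), for the quadratic character mod 8 with chi(3) = -1 and chi(5) = 1:

```python
from math import gcd

# N = 8; chi is determined by its values on the units (Z/8)^* = {1, 3, 5, 7},
# which are generated multiplicatively by 3 and 5.
N = 8
units = [i for i in range(1, N) if gcd(i, N) == 1]
chi = {1: 1, 3: -1, 5: 1, 7: -1}  # chi(7) = chi(3) * chi(5) = -1

# Scheme 1: record chi(i) for every unit i.  Unambiguous, but the name
# has length phi(N), which does not scale as N grows.
name_full = tuple(chi[i] for i in units)

# Scheme 2: record chi only on a chosen generating set.  Concise, but
# reconstructing the name requires knowing which generators were chosen.
generators = (3, 5)
name_short = tuple(chi[g] for g in generators)

print(name_full, name_short)  # (1, -1, 1, -1) (-1, 1)
```

Either way you then need everyone (and every computer algebra package) to agree on the convention, which is the real difficulty.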

The second is that standard database software is not designed for storing complex algebraic objects. Standard databases store things like strings, integers which are at most 2^31 in absolute value, or reals to 10 or so decimal places. They do not store elliptic curves, number fields etc, and it is not remotely uncommon for the discriminant of a number field to be greater than 2^31 (in fact I guess the probability is zero that it’s any less :P). This introduces all kinds of problems, and once I’d had some insight from Cremona into the problems involved, and saw that they had to a certain extent been overcome, I realised that actually the very existence of LMFDB was far more of a feat than one might imagine. One might ask why they didn’t write their own database program! Well, they were not in a position to. Why not? That brings me to the third point.
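To see the sort of workaround involved, here is a quick sketch (the table and label are invented for illustration) of the standard trick: store the arbitrarily large integer as a decimal string in an ordinary column, here with sqlite3 from the Python standard library.

```python
import sqlite3

# Invented schema for illustration; not the LMFDB's actual backend.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE number_fields (label TEXT, disc TEXT)")

D = -(2**200 + 3)  # far outside a 32-bit (or even 64-bit) integer column
con.execute("INSERT INTO number_fields VALUES (?, ?)", ("example.1", str(D)))

(stored,) = con.execute(
    "SELECT disc FROM number_fields WHERE label = ?", ("example.1",)
).fetchone()

# Round-trips exactly, unlike a float or a bounded integer column.
print(int(stored) == D)
```

The price is that SQL comparisons and sorting on a TEXT column are lexicographic, so even a simple range query on discriminants needs extra care (zero-padding, or a separate sign/size column) — exactly the kind of behind-the-scenes problem that makes the feat bigger than it looks.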

The third and final thing I wanted to say about the problems LMFDB has had to overcome is that, because of the funding model, they cannot hire a computer scientist to run the hardware and software and deal with database issues. The money for running LMFDB comes out of the Maths part of the EPSRC budget so can *only* be spent on mathematicians; in particular they cannot hire a software engineer on their grant, so they are having to muddle through themselves. Of course this is just one of those things which is daft but which we in the UK have to live with.

***

Now let me go on to talk about how I tried to interact with LMFDB as someone willing to offer them data rather than to use the data they already have. Years ago now I wrote some computer code to compute weight 1 eigenforms, and Alan Lauder in Oxford has recently run it for all levels N<=1500 (this was a non-trivial endeavour — it took many months). So now if you want to compute a space of modular forms of level <= 1500 and any character at all, there is an O(1) algorithm — you just look it up on Alan’s website http://people.maths.ox.ac.uk/lauder/weight1/ . We offered this data to LMFDB and they said “sure, we’re interested — the first thing we need to do is to come up with a naming system”. I was not really interested in this type of question. So Alan and I just came up with our own system — sage- and magma-readable files not just of each eigenform but of **the entire database**, all up on the website that I mentioned above. Now if Frank wants to find congruences between weight 1 modular forms he’ll still have to write the code himself, but at least he has access to all the eigenforms at once — he doesn’t have to search through a database and be limited by the database’s capabilities — he can download all of our data into one sage or magma session and then search through the information himself.

I stopped chasing Cremona about putting the information into LMFDB. I honestly felt that our website served the community better. Furthermore, if LMFDB want that information they can just **take it themselves** from Alan's website and then they can worry about naming it themselves.

I am hoping that our weight 1 modular forms website addresses some of the issues that Frank raises about accessing raw mathematical data. Alan and I would quite happily hear from people who would be interested in presenting the data in different ways.

Some other brief comments: (1) I see now that e.g. the Delta function page does _not_ seem to include a pari/magma/sage link. (2) (to your second source) — I thought that Serre says in his 1987 paper where he formulates the strong form of his conjecture that if it weren’t for Mestre doing all the computer calculations which convinced him (e.g. the totally real A_5 example coming from a mod 2 modular form), he would have been reluctant to make the conjecture. Isn’t this a reasonable example?

Indeed, Serre states, in his acknowledgements in his Duke paper:

“enfin, et tout spécialement : Jean-François Mestre, qui a réussi à programmer et vérifier un nombre d’exemples suffisant pour me convaincre que la conjecture méritait d’être prise au sérieux.”

or

“and, especially: Jean-François Mestre, who succeeded in programming and checking sufficiently many examples to convince me that the conjecture deserved to be taken seriously.”

There are computations which reveal new phenomena, computations which confirm our suspicions and help indicate that we are on the right track, and computations which confirm conjectures everyone believes. I think the sweet spot that computational number theory should aim for is (to a certain extent) the second, and I think that is where your example lies. (I should stress again that I do love a good computation, and this was a very nice one.) But it’s a little different from (say) BSD. The weak form of Serre’s conjecture dates back to the ’70s, and it was only after the link to Fermat became clear that Serre formulated a strong form of the conjecture. This is not a case where the conjecture sprang forth from the computation.

Here is another example which I think is interesting. Say you are Goldfeld and find that you can prove the class number problem if you have an elliptic curve of analytic rank provably at least three, and then Gross and Zagier tell you all you need to do is find an elliptic curve of geometric rank three and prove that the Heegner point vanishes. Then it would be nice if you could look up some table and find such a curve without having to find it yourself. (I wonder who first found the N = 5077 curve; I guess y^2 = 4x^3 – 28x + 25 does not exactly have huge Faltings height, so it’s probably pretty old.) That’s the type of example where a “simple lookup” table like the LMFDB could prove useful.

I agree with most of the comments, that the LMFDB is OK for browsing if that’s what you want, but for en masse extraction it is not very good (maybe deliberately). And hype is my pet philosophical peeve. If you go for funding, ultimately you play their game. See Mac Lane responding to Koblitz in the Mathematical Intelligencer about the NSF oversight of the Harvard number theory seminar, circa the ’80s. He recalls that back in the Sputnik era he was already warning about this (and didn’t Eisenhower have a commentary?).

On a math note, I don’t think you quite get class number going to infinity by 5077, as the argument fails for d divisible by 5077. Instead, they worked with the -139 twist of 37b, which is also nicely automatically modular, and the root number mechanics work out here to give a theorem even when 37 or 139 divides d.

And if I recall, it was Mestre who did the computation to show that the 5077 curve is Weil (as they called modular in those days).

For 5077, one might look at the letters from Serre to Tate, 05/10/84 and 26/10/84.

The Serre-Tate correspondence is what I want for Christmas.

Frank,

You raise some interesting points, and I fully embrace collegial discussion about our endeavors: in particular, I wanted to say thank you for your comments and critical feedback. By any standard, the LMFDB has both succeeded and failed; and yes, press releases are tricky, as too much or too little promotion is usually bad, and finding a sweet spot in between can be hard to achieve.

I can’t speak for all of the people involved with the LMFDB, but since I’m mentioned twice in your post, I wanted to try to explain at least how I view it playing a role in the mathematical landscape.

First, you dare us to click on a link that is no longer live. The link was to an internal document, not intended for public dissemination. Please do not judge us by our early attempts at brainstorming.

Second, it was our intention with the press release to try to get nonzero traction in the public domain: to face the world and to try to capture some imaginations on the subject of computational number theory. Personally, I worry that public understanding and appreciation of mathematics research (especially in the US) is low, and this has a myriad of bad effects on society. At the same time, I find it very hard to communicate what I do in my research to the public. The LMFDB release provided what I thought was a unique opportunity to try my hand at getting some computational arithmetic geometry out there in the world. In all honesty, we struggled to find a way to actually do this: to communicate something of our excitement, but using language that can be immediately grasped. I’m not sure if any of the analogies we came up with was so great (“periodic table”, “atlas”, “mathematical objects as friends”, etc.), but I didn’t think they were terrible, either. Why do you think taking liberties with these kinds of descriptions is corrosive? They are approximations that attempted to capture the human imagination. It is my hope we will try this more, not less, so eventually we’ll get good at it! And then maybe we can rekindle the mathematical interest of the public (a la ’60s and ’70s Scientific American) in a way that will allow an increasingly substantive conversation. What exactly do you have in mind for this problem going forward? If it’s just “less hype”, I think I can be on board with that: but then we should be trying this more often, not less, with a better sense of what is faithful, what works, what has traction.

Third, here is my (really, our) quote in full. “Our project is akin to the first periodic table of elements. We have found enough of the building blocks that we can see the overall structure and begin to glimpse the underlying relationships. The LMFDB provides a coherent picture of a vast web of interconnections in clear, explicit and navigable terms. The worlds being explored are ones of particular interest: they cross a wide variety of domains, guided by a network of conjectures at the cutting edge of mathematical research.” I didn’t mean to imply that the LMFDB is equal to the periodic table. Rather, I was trying to explain how seeing the pieces put together, connecting them and navigating among them, helps to see more structure, like when you see the elements organized in the periodic table. Is the complaint that this is an imperfect analogy? OK, granted. Or did you mean to say that the actual point of the quote is wrong, and there is no point to navigating among examples to try to see relationships between objects? I hope you don’t mean that.

Fourth, I’m well aware of the prejudices against computational mathematics. I never pay them much mind: I hear mathematical prejudices of all sorts, pitting one area of research against another (sometimes one colleague against another), and they almost never serve to further the goal of advancing science. Every time I hear someone say “X is way ahead of Y” so Y is not needed, I cringe a little bit. How many times have statements like these been wrong? How do you know in advance how things will proceed, where new avenues will open up? The theoretical advances in the Langlands Program(me) in the past decade have been incredible and thrilling, yes. Algorithmic aspects (sometimes ahead of the theory, sometimes behind)–both theoretical and practical–should catch up, so we can see more deeply and explicitly. In my mathematical life, I care about all points along this domain between theory and computation: the theorems themselves, their more explicit versions down the road, the theorems about the existence of algorithms, the analysis of running time of these algorithms, efforts undertaken for their first implementations, and then the actual computation of large amounts of data and its dissemination. I think our mathematical life is made most rich when we have all of these available to us. The LMFDB is one point along this spectrum, and it requires a very narrow view of science to say that one point is only good if it affects one other point. (And then I would contend that they all still do, even if some effects are weak or indirect.) In other words, in a deeper sense, I am concerned about the epistemological assumptions behind these criticisms of the LMFDB: why is yours better?

Finally, I would encourage you to reconsider your comments about how easy it is to extract the data to do new mathematics. We have developed an api (http://lmfdb.warwick.ac.uk/api/, still in beta) so that you can make advanced and specific queries to the modular forms database–you’re not the only one who wants to carry out this kind of “serious data” gathering. Moreover, for the specific issue you raise, “mod p” modular forms are currently under development. But I’m not sure we will ever have you as a completely satisfied customer (nor would I be), as you note: I think we both share a love of finding interesting math problems, when relevant figuring out new (or modified) algorithms for testing them, and then feeding what we learn back into the theory. It’s not the only thing we do, but it’s one way to do research in mathematics. I don’t see the LMFDB playing a role in this kind of computational mathematics, and I don’t think it is less glorious because it does not. But let’s say you always need to do something new enough that you won’t ever be able to use something right out of the box. Let me turn this back on you: what, then, do you do with the data you compute? Keep it to yourself? Post it on your website (or a colleague’s)? But then how do researchers keep track of this data, who acts as a caretaker, and how would we ever link these databases together to see relational structure (the point I was trying to make above)? Or are you saying we shouldn’t bother with any of these steps, because at that point the paper is already written? Here I think the LMFDB has an important, if not glorious, role to play.
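To give a flavour of what the api allows, here is a minimal Python sketch of building a query URL against it. The collection path and parameter names below are illustrative assumptions on my part, not a documented contract — the api’s own help pages describe the actual schema — but the general shape (a collection path plus exact-match query parameters, returning JSON) is the idea.

```python
from urllib.parse import urlencode
# from urllib.request import urlopen  # uncomment to actually fetch

API_ROOT = "http://lmfdb.warwick.ac.uk/api/"

def api_query_url(collection, _format="json", **conditions):
    """Build a query URL for one collection; each keyword is matched exactly.
    (Collection names here are assumptions for illustration.)"""
    params = dict(conditions, _format=_format)
    return API_ROOT + collection + "/?" + urlencode(sorted(params.items()))

# e.g. a hypothetical query for elliptic curves of conductor 5077:
print(api_query_url("elliptic_curves/curves", conductor=5077))
# -> http://lmfdb.warwick.ac.uk/api/elliptic_curves/curves/?_format=json&conductor=5077
```

A real session would then do something like `json.load(urlopen(url))` and analyze the results in Python, which is exactly the “beyond browsing” use I have in mind.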

So in the end, even if you continue to say “the LMFDB is not useful to me”, I hope you will begin to see some contexts in which it can be useful for others to explore mathematics and love the Langlands program(me), especially once the hype diminishes.

JV

Dear JV, Thanks for commenting! You made some interesting points, but let me just remark that, in the end, our feelings on many of these matters are not too dissimilar. I like the LMFDB, I’m sure I will use it, I’m all in favor of dissemination, I love examples, and have computed enough myself to know that some computations can be extremely subtle, and I can appreciate (at least in some vague sense) the technical difficulties of running a database.

As for the possible dangers of hype, I could not do better than the well-timed episode of John Oliver, which someone pointed out to me after I wrote this post.

OMG, that’s genius–I love John Oliver. He makes a compelling case that overhyped scientific studies seriously misrepresent the findings, leading to dangerous, “a la carte”, sweeping, wrong conclusions. (I don’t think you meant to say that this applies to the LMFDB release, and indeed I don’t see how that applies to attempts to popularize mathematics using nontechnical language. We just need to find a way to get rid of the cringe factor.)

It wasn’t clear from your post that we come out more aligned than not, but I’m glad to hear it! And whatever we disagree on, I hope we’ll turn it into making the mathematics–and the computations–better and more useful.

Cheers, JV

Interesting discussion. I realize you’re not advocating the position “what did computation do for the Langlands program since BSD?” but I wanted to try to rebut this idea further anyway.

Here are some examples that come to mind in the Langlands program (broadly interpreted), perhaps somewhere between your first and second categories (just as for BSD):

1) As far as I can tell from the literature, the idea that algebraic K-theory should be related to values of L-functions (eventually the conjectures of Beilinson and Bloch-Kato) starts with hand computations of K_2 for rings of integers by Birch and Tate. (See Section 4 of Tate’s 1970 ICM talk.)

2) The Birch–Stephenson numerical computations of heights of Heegner points, and the Gross–Zagier formula: these numerical computations and their influence are described in the contributions of Birch and Gross to the volume “Heegner Points and Rankin L-Series.”

These are older examples, but there are plenty of more recent examples which appear to have a rich interaction of theory and numerics (the conjectures of Bertolini–Darmon come to mind at once — I’d also be interested in your thoughts about Gouvea–Mazur in this regard.)

I think computation also influences the development of the field in more subtle ways. I wonder if anybody would have bothered to think hard about attaching Galois representations to torsion classes without the various numerics showing that this was not a rare and pathological phenomenon.

I think there is a chicken-and-egg problem with computation versus databases. What are the odds that a 1982 database of elliptic curves would have the necessary data for Birch and Stephens? Indeed, likely it would be the other way around: such info would be appended to the DB after it was seen to be useful (and thus computed in many cases).

Choosly: Yes, what you say is reasonable. However, having tables at hand is useful, since when you want to test conjecture XYZ you like to try out the examples of low conductor, and who could guess the equation of elliptic curve 11C? The Jones–Roberts tables of number fields have proved useful for me many times for this reason, even though I would write my own PARI script to compute what I was actually interested in.

Above I am arguing only against the claim that the only substantial contribution of computational mathematics to the Langlands program was BSD.

Kevin,

I won’t lie to you: labels are a surprisingly controversial topic, but only among some people in the LMFDB. I’m pretty relaxed about this and I’m not alone, and it has not stopped (or really even slowed down) a massive amount of data from making its way into the database. Same for your data in weight 1 — don’t read too much into this.

The data you computed with Alan is amazing and we would be thrilled to be able to include it into the LMFDB. This requires a bit of effort, and someone has to put forth that effort: maybe, as you say, it’s up to one of us. But then if we haven’t gotten to it yet, it’s only because of time constraints, and it should not be read as a lack of interest (or because we’re squabbling about labels).

Have you really loaded the whole database into one magma session? The files are pretty big, especially if you want to compute L-functions and need many Hecke eigenvalues. Can you see how a “big data” take on this set would soon make this way of working with the data infeasible?

But more to the point, there are so many cool things that I would like to do with weight 1 forms: they have L-functions, explicit Galois representations, congruences, etc. The thing that the LMFDB could add would be the links. The advantage of separate databases is that you can do what you need to do, and actually I do that too (e.g. https://math.dartmouth.edu/~jvoight/hmf.html). But my Hilbert modular forms really started to come alive when I could search and browse among them, see their L-functions, match them with elliptic curves, etc. That is what I would hope for weight 1 as well.

(Apologies for the “downloadable” data for classical modular forms still not being live: it is a top priority, and there is something available on almost every other webpage. Just because we’re 1.0 doesn’t mean our work has ended; quite the contrary!)

I mentioned this in reply to Frank’s post, but just to be clear: the api would go beyond the download link and allow the LMFDB to be used in a way beyond browsing. The api makes the results of a query fully available for whatever analysis you like in Python. The api is in beta, and currently it requires a high level of technical know-how, but I think it could be really useful to researchers. “Find me all weight 1 modular forms with a_2 = 0 and …, and then in Sage I see if x happens by doing some extra computation”. We’re already thinking this way for the individual objects, but we could take that farther, too: click a button on a page or a search and it gives you what you need to play with the objects further (e.g., compute an L-value other than at a critical point or whatever). I don’t see this as a restriction, and in fact sometimes it is really helpful *not* to have an enormous database on your own computer (the HMFs are 250 GB compressed, the whole LMFDB is many TBs, and growing–you want someone else to be responsible for this).

Finally, Edgar Costa and I received a CompX grant from the Neukom Institute for Computer Science at Dartmouth College specifically to host the LMFDB from the cloud, and we have the help of our departmental system administrator Sarunas Burdulis, who is amazing. In other words, we have money to pay computer scientists in the future. So it’s true that we rely a lot on each other for technical know-how, but we also have people involved in the project who are not mathematicians and who are helping with the backend.

So in sum I’m very grateful to you and Alan for computing this data; I think direct access to data files is, and will continue to be, an enduringly useful step–but I also think we share an interest in displaying data in different ways, and in this respect the LMFDB has a role to play.

JV

As a computational number theorist who recently got involved with the LMFDB, let me begin by saying that I find much of the press coverage every bit as cringe-worthy as you do. With regards to the MIT announcement, which is the one I was most directly involved with, it was written solely with the aim of stimulating interest in the mathematics surrounding the LMFDB and to advertise the LMFDB as a resource to researchers. Since the announcement I have had students (and even colleagues) ask me about elliptic curves, modular forms, L-functions, and the Langlands programme, so from this perspective I consider it to have been worthwhile.

As to your larger question of whether hype is a necessary evil or a corrosive influence, I think the answer is a bit of both. I feel similarly about the ever-growing list of research prize announcements and the ever-larger cash awards that are attached to them in the interest of drawing attention. I’d love to live in a world where the public (and funding agencies) valued research in pure mathematics for its own sake, but having recently spent a week in Washington serving on a grant review panel, I don’t think we live in such a world (at least not in the US). I’m not certain things were all that different in the past; I expect cold war politics had more to do with funding priorities in the 60s and 70s than articles in Scientific American.

Regarding the discussion of theoretical versus computational progress, at least in the areas I work in, I’m not so sure I agree that the theory is necessarily ahead of the computations. As far as I know (and please correct me if I am wrong!) we are still far from proving modularity for abelian varieties in general, or even in dimension 2, and even further from being able to prove the Sato-Tate conjecture for abelian varieties (at least this was Richard Taylor’s view when I asked him about it).

To take a more concrete example, we currently don’t have a provably complete analog of even the first page of the Antwerp tables for abelian surfaces (even assuming modularity, it is not enough to just know the isogeny classes, which we do in a handful of cases, we also need a complete enumeration of the isomorphism classes in each isogeny class). These remarks are not meant to diminish the work of Brumer and Kramer, or the heroic efforts of Poor and Yuen, they just indicate how far we still have to go.

In the meantime, one can undertake computations in an attempt to obtain a complete list (at least of Jacobians) up to a given conductor bound. The LMFDB contains a far more complete list of genus 2 curves over Q of small discriminant (whose Jacobians necessarily have small conductor) than was previously available. As described here, these were obtained by searching through some 10^17 curves of bounded height. You can browse the results here, and at the bottom of the page you will find buttons that allow you to download every genus 2 curve in the database in Pari, Sage, or Magma format, along with invariants that are stored in the LMFDB but difficult or impossible to compute in Pari, Sage, or Magma (notably, this includes even the conductor). I agree with Kevin Buzzard that anyone interested in seriously analyzing the data is going to want to do this, and while there is not yet such a button on every page of the LMFDB (people are working on it), in the meantime you can directly access *all* the data in the LMFDB via the API. You can also access (and modify to suit your purposes) the source code of the LMFDB on GitHub.

Thank you for your interest and your comments.

Here is the github link: https://github.com/LMFDB

Looking forward to your contribution.

I find this a rather rude comment. Much of the purpose of the LMFDB is to make information available to the mather, not the coder. Admittedly FC diverged into this at the end of his spiel, but a data dump of a github link, with the snide “contribution” remark (can one have an opinion w/o being a “contributor”? — or are all users contributors and vice versa, as some CS philosophes like to muse?), is exactly the sort of attitude that unfortunately is too pervasive (and if experience in other cases is anything to go by, as time goes on, the LMFDB will likely become *more* in-circle and expert-oriented, not less). Returning to DL’s initial point, this comment is a clarion example of why computational NT is likely not to be funded as highly as it would prefer. Alienating non-boffins is not the way to go.

I’ll let this individual speak for him or herself, but you’re reading an awful lot into what I took to be a welcoming reply, with a link to get involved.

The LMFDB is open to everyone, and people are welcome to contribute in any way they can, mathematically or otherwise. The github link allows you as a “mather”, without any coding, to comment on the project, and make suggestions and follow issues.

The LMFDB takes great pains, in knowls and background matter, to have maximum accessibility for nonexperts. I don’t understand how what you’re saying relates to David Loeffler’s comment.

You seem to be taking an active interest in the project, which is great! Tell us your feedback. But I would also kindly request that we move at least the technical discussion off of this page (it is hard to track)–and that’s what the link above does.

On a different point, I think these sorts of projects become their own justification in the end. Back in Bristol 2012, Andy Pollington from NSF was there, wanting to have something he could show funders “on the hill”. Before that, it was under AIM from 2007 (Rubinstein’s grant?). They boast of however many mathematicians from however many different countries are involved, because that pulls in the big bucks. OTOH, one of the guys who was with it (and has become disillusioned since) opined that he and a couple of others, with the right incentives, could throw together the whole thing in 2 or 3 (maybe 6) months, at much less cost than the EPSRC 2.2 million pounds (which I realize is largely accounting shenanigans, to pay for 15-20 postdoc-years with exorbitant overhead). But that’s not the “real” priority, of course. As Buzzard’s Lauder example shows, mathers are already able to make their own data available, but the gluing together, maintenance, and presentation are the time and money sucks (and why it’s taken nearly 10 years from the first AIM meeting, and 3 years into the EPSRC grant, to get something marginally usable).

Dear Choosly,

I was at the Bristol meeting, I was privy to at least part of this opining, and I think there is some misquotation or other mistake here. My memory is that this anonymous individual was talking about one specific part of the LMFDB, and actually I think he or she was right at the time: it was in bad shape and needed to be redone at the cost of many months of intense work.

On the other hand, the idea that one person could recreate the content of the entire LMFDB in 2 or 3–or even 6–months is super duper false.

JV

Maybe I can be clearer. The remembered quote came later (in late 2013), and was not really talking about the maths end (content) of computing all the data, but about putting it into a presentable form. Which is at least half of what the meetings are about, at considerable expenditure of flying everyone in, etc. I think the person was annoyed that nothing seemed to happen outside meeting time, and that having a select team work more consistently (which would be an idea for the postdocs, but I think they instead largely preferred theoretical work, cf. the above EPSRC funding issue) would finish “the basics” sooner (like the arithmetic normalization), maybe liaising with the data providers, etc. Considering the cost of multiple confs versus 3 individs for 3-6 months (1 year total), I think it was a cost-benefit analysis, particularly with the time constraints. (Launch the BETA, now!)

The person(s) involved also said that the sloooowwww movement of LMFDB issues was one reason to be pessimistic about it. In short, have the groundwork done by say mid-late 2014, rather than mid 2016 (or later). Wasn’t there supposed to be a grand launch around Sarnak’s 60th birthday, IIRC (late 2014)? Well, I guess you can probably surmise who the anon-individ is (or are), and perhaps you just have to take it as a casualty of the process: one (or more) less contributor(s).

Here is a test case. Find the first modular form of weight 4 and analytic rank 2, using the LMFDB. Unfortunately, 127 exceeds the table limit. Not exactly deep, but not there. Lucky for them, in weight 6 there is a level below 100 (see paper of Dummigan, Stein, Watkins), which is there. But I don’t know how to search for it easily in LMFDB, without prior knowledge of its existence.

For extra credit, find the first Gamma1(N) form of weight 2 and analytic rank 1. If I already know what this is, I don’t need to look it up (Sage/Magma is easier). And if I don’t, I’d like to be able to find it in the first place. Otherwise it is, as was said, just a hodgepodge collection of varied interest. The “arithmetic normalization” button not being available is also a pain (why even have it??!).

Dear Choosly,

I’ll report these right away to our issue tracker:

(1) We should extend the scope of the weight 4 data,

(2) We should add the ability to search by analytic rank.

These are good ideas! (The classical modular forms need some very serious continued work, but I encourage you to visit other areas of the database to see the real scope of the project.)

Unfortunately, I couldn’t follow what you were trying to say in your second paragraph. Would you prefer that we hide the arithmetic normalization until we get it working?

JV

Given that such functionality was supposed to be done in 2013 (or even 2012), I guess I find the button annoying. Well, at least the HTML guy doesn’t have to add it when the other end is done (smiley). Didn’t the Dokchitser crowd implement Hodge structure somewhere in the code (2 years ago), to allow this to be “easy” to renormalize? Or is it a printing problem with sqrt’s and decimal approx’s? I know some “arithmetic” people have fled from (or eye-rolled over) the LMFDB due precisely to this, but I guess it is not a priority.

I might point out: two weeks ago, the real Dirichlet characters were broken (now fixed), and the modular form of level 61 with analytic rank 1 had no associated L-function (I guess those with coeffs in a quadratic field got computed/linked in the interim), but now it’s there (with indeed the lowest zero having 0 as its imaginary part). And I was griping as usual every time I tried to use the LMFDB (maybe this is just me; every time I use Pari, Sage, or Magma, I end up griping too…). So there is some progress.

Dear Choosly,

I am personally appreciative of your comments, and even more so of your patience! I also gripe a lot about databases and computer algebra systems, so I think we’re at least on the same page about that. 🙂

With respect to normalizations: I’m an “arithmetic” person, but I’m also not in charge. The exact way to display L-functions is something we’re trying to figure out–especially when the modular form has large Hecke field, so the coefficients would be unwieldy to express exactly. We were able to come to what I think is a good compromise for genus 2: check out http://www.lmfdb.org/L/Genus2Curve/Q/58492/b/ . There the arithmetic normalization button works, and we’re trying to display the information in a maximally useful form. Let us know what you think–does it unroll your eyes?

JV

Particularly with modular forms, the arithmetic normalization is *already* on the modular form page (admittedly often in hideous number field form), so it’s a bit odd that it’s *not* on the L-function page (smiley). I guess at the back end it’s all about how the data was computed in the first place: the genus 2 data has the arithmetic normalization, and the modular forms do not.

Pingback: Back | Not Even Wrong

Pingback: Central Extensions and Weight One Forms | Persiflage

Pingback: 月旦 IV | Fight with Infinity