Eclipse Day: 8 April 2024

The day of doom approaches, and the moon is cleft in half!

Ayah al-Qamar 54:1

Perhaps the most compelling astronomical phenomenon accessible to a naked-eye observer is a total eclipse of the sun. These rare events have always fascinated us, and often terrified us. It is abnormal and disturbing for the sun to be blotted from the sky!

A solar eclipse will occur on Monday, 8 April 2024. A partial eclipse will be visible from nearly every part of North America. The path of totality will sweep from Mexico through Texas, the Midwest, New England, and across the Maritime provinces of Canada. If you are anywhere this event is visible, go out, don a pair of eclipse glasses, and look up. This is especially true in the path of totality. Partial eclipses are cool. Total eclipses are so much more: they have inspired science, art, and literature, with descriptions frequently evincing the deep emotion of profound religious experience*.

The American Astronomical Society has posted lots of useful information, including a map of the path of totality and advice about proper eclipse glasses. These are super-cheap, but that doesn’t preclude bad actors from selling ineffective versions. Simple rule of thumb: don’t look straight at the sun. A proper pair of eclipse glasses enables you to comfortably do so. If it hurts, stop+: close your eyes and look away. Listen to the messages from your pain receptors.

If you can get to the path of totality, it is worth doing so. Expect crowds and plan accordingly. This is a draw of epic proportions, and for many will be the only practical opportunity of their lifetime. Totality is brief, only a few minutes, so be sure to be in the right place at the right time$.

The AAS provides a good list of the phenomena to expect. Most of the action is around and during totality. The partial eclipse is a long (hour-plus) build-up to the brief main show (a few minutes of totality). In addition to the corona, the diamond ring, and Baily’s Beads, this should be a good time to see solar prominences, as the sun is nearing the maximum of its eleven-year sunspot cycle. Exactly what we will see is unknown, as this is the solar analog of a weather phenomenon. The forecast calls for a high chance of prominences, but that doesn’t guarantee they’ll show.

One last thing I’ll note is that all the planets are relatively close to the sun on the sky at present, and some might be visible during the eclipse. Venus and Jupiter will be most prominent and easy to spot. Uranus and Neptune, not so much. The others, maybe. Also present is Comet 12P/Pons-Brooks (aka the devil comet) in the vicinity of Jupiter. It is quite a temporal coincidence for this comet, with its 71-year period, to be in the inner solar system during this eclipse. It is unlikely to put on much of a show: comets are notoriously fickle, and the odds are that it will be invisible to the naked eye. But it is there, so keep a weather eye out, just in case.

All the planets and even a comet will be in the sky during the eclipse.

Now go forth this Monday and witness one of nature’s greatest marvels.


*There are many myths and monsters associated with eclipses. Until the light pollution of recent times, the motions of the sky were very much in our faces. People cared deeply about these things. They were well aware of more than the daily rising and setting of the sun. The phases of the moon, the patterns in the stars, and the wanderings of the planets were obvious to everyone who looked up. People learned long ago to keep close track of these events, even those as rare as eclipses. Some of the earliest tablets unearthed from ancient Babylon are elaborate tables of eclipse seasons recognizing lengthy periods like the roughly 18-year Saros cycle. One doesn’t just up and write down this sort of knowledge on a whim one day; it requires centuries of careful observation and record keeping to recognize the recurrence of events with such long periods, especially for solar eclipses, which do not visit exactly the same spot every exeligmos cycle. I suspect there was a strong oral tradition of astronomical record keeping for long ages before we learned to write. Astronomy is the oldest science: this was important knowledge to acquire, preserve, and pass on.

The ancients managed to deduce cycles of eclipse seasons, so they could forecast the chance for eclipses, but only with the same precision as a weather forecast: there is a chance of rain, but we can’t be sure exactly when and where. Now we have measured planetary motions accurately enough and understand the geometry of what is going on so we can forecast exactly when and where eclipses will occur. This is a staggering achievement of human intellect and communal effort.

+There are a lot of misconceptions about the dangers of eclipse viewing. Looking straight at the sun is uncomfortable and dangerous at any time. The only thing special about a total eclipse is that it becomes truly dark for a few minutes, and your pupils start to expand to adapt to the darkness. Consequently, the most dangerous moment is at the end of totality, when your eyes have grown wide and the sun suddenly reappears. Be sure to don your eclipse glasses or look away right before the sun reappears; you don’t want to look straight into the sun at that moment.

Time and Date is a great resource for getting the timing of the eclipse for your specific location, accurate to within a few seconds.

$As with any astronomical observation, no guarantee is made that the skies will be clear of clouds. I have spent many a night at observatories wishing for the sky to clear and obsessively refreshing the satellite maps to discern when it might do so. It doesn’t help – it’s almost as if nature doesn’t care that we want to witness one of its greatest displays. So my advice is to go where you can and don’t sweat the weather forecast. Either the sky cooperates or it doesn’t.

I’ve agreed to serve on a discussion panel about the eclipse on campus, so I’ll be here in Cleveland. We are right in the path of totality, but the weather statistics here are… not good. To make matters worse for the superstitious, April 8 is also the home opener for the Cleveland Guardians. Opening day is always a joyous time with a packed stadium, but the weather is inevitably miserable. Nevertheless, all we need is a brief opening in the clouds at just the right time. At an observatory we would call that a sucker hole – a gap in the clouds big enough to get the inexperienced observer to run around prepping the instrument and the telescope (an intense amount of work), only to open up and observe just in time for the clouds to cover the sky again. Come Monday, I’ll happily accept a well-timed sucker hole.

Required dark matter properties

I was on vacation last week. As soon as I got back, the first thing I did was fall off my bike onto a tree stump, breaking my wrist. I’ll be okay, but I won’t be typing a lot. This post is being dictated to software; I hope I don’t have to do too much editing. I let the software generate the image above based on the prompt “dark matter properties illustrated” and I don’t think we should hold our breath for AI to help us out with this.

There were some good questions on the last post that I didn’t get to address. I went back and tried to answer some of them. Siriusactuary asked about the properties required of dark matter for galaxies vs. large scale structure. That’s a very deep question that requires a long answer with some historical perspective. Please bear with me as I attempt a quasi-coherent, off-the-cuff narrative that doesn’t invite a lot of editing, though it surely will.

I thought about this long and hard when I first encountered the problem, which was almost thirty years ago now. So it is probably worth a short refresher.

We have been assuming all along, I think reasonably, that cosmological dark matter and galaxy dark matter are the same stuff, just different manifestations of the same problem. Perhaps they’re not, but there is a huge range of systems that show acceleration discrepancies, and it isn’t always trivial to split them into one camp or another. It seems common to talk about large and small scale problems, but I don’t think size is the right way to think about it. It’s more a difference between gravitationally bound systems that are in equilibrium and the dynamics of the expanding universe as an evolving entity that contains structures that develop within it.

The problem in bound systems is not just galaxy dynamics. It’s also clusters of galaxies. It’s also star clusters that don’t show a discrepancy. The problem extends over a dynamic range of at least a billion in baryonic mass. It involves all sorts of dynamical questions where we do sometimes need to invoke dark matter or MOND or whatever. The evidence in bound systems is inevitably that when we apply the law of gravity as we know it to the stuff we can see, the visible baryons, then the dynamical mass doesn’t add up. We need something extra to explain the data.

The simple answer early on was that there was simply more mass there, i.e., dark matter. But that much is ambiguous. It could be that we infer the need for dark matter because the equations are inadequate and need to be generalized, i.e., something like MOND. But to start, at the beginning of the dark matter paradigm, there was no particular restriction on what the dark matter needed to be or what its properties needed to be. It could be baryonic, it could be non-baryonic. It could be black holes, brown dwarfs, all manner of things.

From a cosmological perspective, it became apparent in the early 1980s that we needed something extra – not just dark, but non-baryonic. By this time it was easy to believe, because people like Vera Rubin and Albert Bosma had already established that we needed more than meets the eye in galaxies. So dark matter was no longer a radical hypothesis, which it had been in 1970. The paradigm kinda snowballed – it had been around as a possibility since the 1930s, but it was only in the 1970s that it became firmly established dynamically. Even then it was like a factor of two, and could have been normal baryons that are hard to see, like brown dwarfs. By the early 1980s it was clear we needed more like a factor of ten, and it had to be something new: the cosmological constraint was that the gravitating mass density is greater than the baryon density allowed by big bang nucleosynthesis. That means that there is a requirement on the nature of dark matter beyond there just being more mass.

The cosmic dark matter has to be something non-baryonic. That is to say, it has to be some new kind of beast, presumably some kind of particle that is not already in the standard model of particle physics. This was received with eagerness by particle physicists, who felt that their standard model was complete and yet unsatisfactory, and that there should be something deeper and more to it. This was an indication in that direction. From a cosmological perspective, the key fact was that there was something more out there than met the eye. Gravitation gave a mass density that was higher than allowed for normal matter. Not only did you need dark matter, you needed some kind of novel particle that’s not in the standard model of particle physics to be that dark matter.

The other cosmological imperative was to grow large scale structure. The initial condition that we see in the early universe is very smooth: the microwave background on the sky shows temperature fluctuations of only one part in a hundred thousand. By redshift zero, structure has grown by a factor of a hundred thousand. But normal gravity will grow structure at a rate proportional to the expansion of the universe, which has only grown by a factor of a thousand since the microwave background was imprinted.

So we have another big discrepancy. We can only grow structure by a factor of a thousand, but we observe that it has grown by a factor of a hundred thousand. So we need something to goose the process. That something can be dark matter, provided that it does not interact with photons directly. It can be a form of particle that does not interact via the electromagnetic force. It can interact through gravity and perhaps through the weak nuclear force, but not through the electromagnetic force.
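Writing out the arithmetic of the preceding two paragraphs makes the gap explicit (a sketch of the standard linear-growth bookkeeping, not a rigorous calculation):

```latex
% Required growth: from CMB fluctuations to nonlinear structure today
\frac{\delta T}{T} \sim 10^{-5}
\quad\Rightarrow\quad \text{growth needed} \sim 10^{5}

% Available growth: with ordinary gravity, perturbations grow in
% proportion to the expansion factor since the CMB was imprinted
\delta \propto a, \qquad \frac{a_{\rm now}}{a_{\rm CMB}} \sim 10^{3}

% Shortfall: a factor of ~100, hence the need for something extra
```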

Those are properties that are required of dark matter by cosmology. It has to be non-baryonic and not interact through electromagnetism. These properties are not necessary for galaxies. And that’s basically the picture that persists today. One additional constraint that we need from a cosmological perspective is that the dark matter needs to be slow-moving – dynamically cold so that structure can form. If you make it dynamically hot, like neutrinos that are born moving at very nearly the speed of light, those are not going to clump up and form structure even if they have a little mass.

So that was the origin of the cold dark matter paradigm. We needed some form of completely novel particle that had the right relic density – this is where the WIMP miracle comes in. That worked fine for galaxies at the time. All you needed for galaxies early on was extra mass. It was cosmology that gave us these extra indications of what the dark matter needs to be.

We’ve learned a lot more about galaxies since then. I remember in the early nineties when I was still a staunch proponent of cold dark matter being approached at conferences by eminent dynamicists who confided in hushed tones so that the cosmologists wouldn’t hear that they thought the dark matter had to be baryonic, not non-baryonic.

I had come to this from the cosmological perspective that I just described above. The total mass density had to be a lot bigger than the baryonic mass density. Therefore the dark matter had to be non-baryonic. To say otherwise was crazy talk, which is why they were speaking about it in hushed tones. But here were these very eminent people who were very quietly suggesting to me that their work on galaxies indicated that the dark matter had to be made of baryons, not something non-baryonic. I asked why, and basically it boiled down to the fact that they could see clear connections between the dynamics and the baryons. It didn’t suffice just to have extra mass; the dark and luminous components seemed to know about each other*.

The data for galaxies showed that the stuff we could see, the distribution of stars and gas, was clearly and intimately related to the total distribution of mass, including the dark matter. This led to a number of ideas that do not sit well with the cold dark matter paradigm. One was HI scaling: basically, if you took the distribution of atomic gas and scaled it up by a factor of roughly 10, then that was a decent predictor of what the dark matter was doing. Given that, one could imagine that maybe the dark matter was some form of unseen baryons that follow the same distribution as the atomic gas. There was even an elaborate paradigm built up around very cold molecular gas to do this. That seemed problematic to me, because if you have cold molecular gas, it should clump up and form stars, and then you see it. Even if you didn’t see it in its cold form, you would need a lot of it. Interestingly, the amount needed just in galaxies does not violate the BBN baryon density; but it would on a cosmic scale, if that were the only form of dark matter. So then we need multiple forms of dark matter, which violates parsimony.

Another important and frequent point is the concept of maximum disk. This came up last time in the case of NGC 1277, where the dynamics of the inner regions of that galaxy are completely explained by the stars that you see. This is a very common occurrence in high surface brightness galaxies. In regions where the stars are dense, that’s all the mass that you need. It’s only when you get out to a much larger radius, where the accelerations become low, that you need something extra: the dark matter effect.

It was pretty clear and widely accepted that the inner regions of many bright galaxies were star dominated. You did not need much dark matter in the center, only at the edges. So you had this picture of a pseudoisothermal halo with a low density central core. But by the mid-nineties, simulations all showed that cold dark matter halos should have cusps: they predicted a lot of dark matter near the centers of galaxies.

This contradicted the picture that had been established. And so people got into big arguments as to whether or not high surface brightness galaxies were indeed maximal. The people who actually worked on galaxies said yes, we have established that they are maximal – we only need stars in the central regions; the dark matter only becomes necessary farther out. People who were coming at it from the cosmological perspective, without having worked on individual galaxies, saw the results of the simulations, saw that there’s always a little room to trade off between the stellar mass and the dark mass by adjusting the mass-to-light ratio of the stars, and said galaxies cannot be maximal.

I was perplexed by this contradiction. You had a strong line of evidence that galaxies were maximal in their centers. You had a completely different line of evidence, a top-down cosmological view of galaxies, that said galaxies should not and could not be maximal in their centers. Which of those interpretations you believed seemed to depend on which camp you came out of.

I came out of both camps. I was working on low surface brightness galaxies at the time and was hopeful that they would help to resolve the issue. Instead they made it worse, sticking us with a fine-tuning problem. I could not solve this fine-tuning problem. It caused me many headaches. It was only after I had suffered those headaches that I began to worry about the dark matter paradigm. And then by chance, I heard a talk by this guy Milgrom who, in a few lines on the board, derived as a prediction all of the things that I was finding problematic to interpret in terms of dark matter. Basically, a model with dark matter has to look like MOND to satisfy the data.

That’s just silly, isn’t it?

MOND made predictions. Those predictions came true. What am I supposed to report? That its predictions came true – therefore it’s wrong?

I had made my own prediction based on dark matter. It failed. Other people had different predictions based on dark matter. Those also did not come true. Milgrom was the only one to correctly predict ahead of time what low surface brightness galaxies would do.

If we insist on dark matter, what this means is that we need, for each and every galaxy, precisely the dark matter distribution that looks like MOND. I wrote the equation for the required effects of dark matter in all generality in McGaugh (2004). The improvements in the data over the subsequent decade enable this to be abbreviated to

g_DM = g_bar / [e^√(g_bar/a0) − 1].

This is in McGaugh et al. (2016), which is a well known paper (being in the top percentile of citation rates). So this should be well known, but the implication seems not to be, so let’s talk it through. g_DM is the force per unit mass provided by the dark matter halo of a galaxy. This is related to the mass distribution of the dark matter – its radial density profile – through the Poisson equation. The dark matter distribution is entirely stipulated by the mass distribution of the baryons, represented here by g_bar. That’s the only variable on the right-hand side, a0 being Milgrom’s acceleration constant. So the distribution of what you see specifies the distribution of what you can’t.
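To make this concrete, here is a minimal numerical sketch of the equation above. The function name, the SI units, and the input values are my own illustrative choices; a0 ≈ 1.2 × 10⁻¹⁰ m/s² is the commonly quoted value of Milgrom’s constant:

```python
import numpy as np

A0 = 1.2e-10  # Milgrom's acceleration constant in m/s^2 (commonly quoted value)

def g_dm(g_bar):
    """Dark matter force per unit mass implied by the baryons:
    g_DM = g_bar / (exp(sqrt(g_bar/a0)) - 1)."""
    g_bar = np.asarray(g_bar, dtype=float)
    return g_bar / np.expm1(np.sqrt(g_bar / A0))

# Outer disk, low acceleration (g_bar << a0): dark matter dominates,
# and g_DM approaches sqrt(g_bar * a0).
print(g_dm(1e-11))   # ~3e-11 m/s^2
# Inner disk, high acceleration (g_bar >> a0): g_DM -> 0.
print(g_dm(1e-8))    # tiny compared to g_bar
```

The high-acceleration limit is just the maximum disk result from above: where the baryons are dense, essentially no dark matter is required.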

This is not what we expect for dark matter. It’s not what naturally happens in any reasonable model; what naturally happens is an NFW halo. That comes from dark matter-only simulations; it has literally nothing to do with g_bar. So there is a big chasm to bridge right from the start: theory and observation are speaking different languages. Many dark matter models make no reference to g_bar, let alone satisfy this constraint. Those that do only do so crudely – the baryons are hard to model. Still, dark matter is flexible; we have the freedom to make it work out to whatever distribution we need. But in the end, the best a dark matter model can hope to do is crudely mimic what MOND predicted in advance. If it doesn’t do that, it can be excluded. Even if it does do that, should we be impressed by a theory that only survives by mimicking its competitor?
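For contrast, here is the same kind of sketch for the circular velocity of an NFW halo. Note that the baryons appear nowhere: the halo knows only its own parameters, a characteristic density and a scale radius. The parameter values below are arbitrary illustrations, not fits to any galaxy:

```python
import numpy as np

G = 4.30e-6  # Newton's constant in kpc (km/s)^2 / Msun

def v_nfw(r_kpc, rho_s, r_s):
    """Circular velocity of an NFW halo:
    M(<r) = 4*pi*rho_s*r_s^3 * [ln(1+x) - x/(1+x)], with x = r/r_s,
    V^2   = G*M(<r)/r.
    Nothing about g_bar enters anywhere."""
    r = np.asarray(r_kpc, dtype=float)
    x = r / r_s
    m_enc = 4 * np.pi * rho_s * r_s**3 * (np.log1p(x) - x / (1 + x))
    return np.sqrt(G * m_enc / r)

# Illustrative halo: rho_s in Msun/kpc^3, r_s in kpc.
print(v_nfw([2, 10, 30, 50], rho_s=1e7, r_s=15.0))  # km/s
```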

The observed MONDian behavior makes no sense whatsoever in terms of the cosmological constraints in which the dark matter has to be non-baryonic and not interact directly with the baryons. The equation above implies that any dark matter must interact very closely with the baryons – a fact that is very much in the spirit of what the earlier dynamicists had found, that the baryons and the dynamics are intimately connected. If you know the distribution of the baryons that you can see, you can predict what the distribution of the unseen stuff has to be.

And so that’s the property that galaxies require, one that is pretty much orthogonal to the cosmic requirements. There needs to be something about the nature of dark matter that always gives you MONDian behavior in galaxies. Being cold and non-interacting doesn’t do that. Instead, galaxy phenomenology suggests that there is a direct connection – some sort of direct interaction – between dark matter and baryons. That direct interaction is anathema to most ideas about dark matter, because if there’s a direct interaction between dark matter and baryons, it should be really easy to detect dark matter. They’re out there interacting all the time.

There have been a lot of half solutions. These include things like warm dark matter, self-interacting dark matter, and fuzzy dark matter. These are ideas that have been motivated by galaxy properties. But to my mind, they address the wrong properties. They are trying to create a central density core in the dark matter halo. That is at best a partial solution that ignores the detailed distribution that is written above. The inference of a core instead of a cusp in the dark matter profile is just a symptom. The underlying disease is that the data look like MOND.

MONDian phenomenology is a much higher standard for a dark matter model to match than a simple cored halo profile. We should be honest with ourselves that mimicking MOND is what we’re trying to achieve. Most workers do not acknowledge that, or even seem to be aware that this is the underlying issue.

There are some ideas that try to build in the required MONDian behavior while also satisfying the desires of cosmology. One is Blanchet’s dipolar dark matter. He imagined a polarizable dark medium that reacts to the distribution of baryons so as to give the distribution of dark matter that produces MOND-like dynamics. Similarly, Khoury’s idea of superfluid dark matter does something related. It has a superfluid core in which you get MOND-like behavior. At larger scales it transitions to a non-superfluid mode, where it is just particle dark matter that reproduces the required behavior on cosmic scales.

I don’t find any of these models completely satisfactory. It’s clearly a hard thing to do. You’re trying to mash up two very different sets of requirements. With these exceptions, the galaxy-motivated requirement that there is some physical aspect of dark matter that somehow knows about the distribution of baryons and organizes itself appropriately is not being used to inform the construction of dark matter models. The people who do that work seem to be very knowledgeable about cosmological constraints, but their knowledge of galaxy dynamics seems to begin and end with the statement that rotation curves are flat and therefore we need dark matter. That sufficed 40 years ago, but we’ve learned a lot since then. It’s not good enough just to have extra mass. That doesn’t cut it.

So in summary, we have two very different requirements on the dark matter. From a cosmological perspective, we need it to be dynamically cold: something non-baryonic that does not interact with photons, or easily with baryons.

From a galactic perspective, we need something that knows intimately about what the baryons are doing. And when one does one thing, the other does a corresponding thing that always adds up to looking like MOND. If it doesn’t add up to looking like MOND, then it’s wrong.

So that’s where we’re at right now. These two requirements are both imperative – and contradictory.


* There is a knee-jerk response to say “mass tells light where to go” that sounds wise but is actually stupid. This is a form of misdirection that gives the illusion of deep thought without the bother of actually engaging in it.

By the wayside

I noted last time that, in the rush to analyze the first of the JWST data, “some of these candidate high redshift galaxies will fall by the wayside.” As Maurice Aabe notes in the comments there, this has already happened.

I was concerned because of previous work with Jay Franck in which we found that photometric redshifts were simply not adequately precise to identify the clusters and protoclusters we were looking for. Consequently, we made it a selection criterion when constructing the CCPC to require spectroscopic redshifts. The issue then was that it wasn’t good enough to have a rough idea of the redshift, as the photometric method often provides (what exactly it provides depends in a complicated way on the redshift range, the stellar population modeling, and the wavelength range covered by the observational data that is available). To identify a candidate protocluster, you want to know that all the potential member galaxies are really at the same redshift.

This requirement is somewhat relaxed for the field population, in which a common approach is to ask broader questions of the data, like “how many galaxies are at z ~ 6? z ~ 7?” and so on. Photometric redshifts, when done properly, ought to suffice for this. However, I had noticed in Jay’s work that there were times when apparently reasonable photometric redshift estimates went badly wrong. So it made the ganglia twitch when I noticed that, in early JWST work – specifically Table 2 of the first version of a paper by Adams et al. – there were seven objects with candidate photometric redshifts, three of which already had a preexisting spectroscopic redshift. The photometric redshifts were mostly around z ~ 9.7, but the three spectroscopic redshifts were all smaller: two at z ~ 7.6, one at 8.5.

Three objects are not enough to infer a systematic bias, so I made a mental note and moved on. But given our previous experience, it did not inspire confidence that all the available cases disagreed, and that all the spectroscopic redshifts were lower than the photometric estimates. These things combined to give this observer a serious case of “the heebie-jeebies.”

Adams et al have now posted a revised analysis in which many (not all) redshifts change, and change by a lot. Here is their new Table 4:

Table 4 from Adams et al. (2022, version 2).

There are some cases here that appear to confirm and improve the initial estimate of a high redshift. For example, SMACS-z11e had a very uncertain initial redshift estimate. In the revised analysis, it is still at z~11, but with much higher confidence.

That said, it is hard to put a positive spin on these numbers. 23 of 31 redshifts change, and many change drastically. Those that change all become smaller. The highest surviving redshift estimate is z ~ 15 for SMACS-z16b. Among the objects with very high candidate redshifts, some are practically local (e.g., SMACS-z12a, F150DB-075, F150DA-058).

So… I had expected that this could go wrong, but I didn’t think it would go this wrong. I was concerned about the photometric redshift method – how well we can model stellar populations, especially at young ages dominated by short-lived stars that in the early universe are presumably lower metallicity than well-studied nearby examples; the degeneracies between galaxies at very different redshifts but presenting similar colors over a finite range of observed passbands; dust (the eternal scourge of observational astronomy, expected to be an especially severe affliction in the ultraviolet that gets redshifted into the near-IR for high-z objects, both because dust is very efficient at scattering UV photons and because this efficiency varies a lot with metallicity and the exact grain size distribution of the dust); when a dropout is really a dropout indicating the location of the Lyman break and when it is just a lousy upper limit of a shabby detection; etc. – I could go on, but I think I already have. It will take time to sort these things out, even in the best of worlds.

We do not live in the best of worlds.

It appears that a big part of the current uncertainty is a calibration error. There is a pipeline for handling JWST data that has an in-built calibration for how many counts in a JWST image correspond to what astronomical magnitude. The JWST instrument team warned us that the initial estimate of this calibration would “improve as we go deeper into Cycle 1” – see slide 13 of Jane Rigby’s AAS presentation.

I was not previously aware of this caveat, though I’m certainly not surprised by it. This is how these things work – one makes an initial estimate based on the available data, and one improves it as more data become available. Apparently, JWST is outperforming its specs, so it is seeing as much as 0.3 magnitudes deeper than anticipated. This means that people were inferring objects to be that much too bright, hence the appearance of lots of galaxies that seem to be brighter than expected, and an apparent systematic bias to high z for photometric redshift estimators.
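To put a number on that 0.3 magnitudes, recall that the magnitude scale is logarithmic in flux. A quick check using nothing beyond the definition of magnitudes (the function name is my own):

```python
def flux_ratio(delta_mag):
    """Magnitude difference to flux ratio:
    m1 - m2 = -2.5*log10(F1/F2)  =>  F1/F2 = 10**(0.4*(m2 - m1))."""
    return 10 ** (0.4 * delta_mag)

# A 0.3 mag zero-point error corresponds to ~32% in flux: plenty to skew
# inferred brightnesses, and with them photometric redshift estimates.
print(flux_ratio(0.3))  # ~1.318
```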

I was not at the AAS meeting, let alone Dr. Rigby’s presentation there. Even if I had been, I’m not sure I would have appreciated the potential impact of that last bullet point on nearly the last slide. So I’m not the least bit surprised that this error has propagated into the literature. This is unfortunate, but at least this time it didn’t lead to something as bad as the Challenger space shuttle disaster in which the relevant warning from the engineers was reputed to have been buried in an obscure bullet point list.

So now we need to take a deep breath and do things right. I understand the urgency to get the first exciting results out, and they are still exciting. There are still some interesting high z candidate galaxies, and lots of empirical evidence predating JWST indicating that galaxies may have become too big too soon. However, we can only begin to argue about the interpretation of this once we agree to what the facts are. At this juncture, it is more important to get the numbers right than to post early, potentially ill-advised takes on arXiv.

That said, I’d like to go back to writing my own ill-advised take to post on arXiv now.

New web domain

I happened to visit this blog from a computer not my own. Seeing it that way made me realize how obnoxious the ads had become. So WordPress’s extortion worked; I’ve agreed to send them a few $ every month to get rid of the ads. With it comes a new domain name: tritonstation.com. Bookmarks to the previous website (tritonstation.wordpress.com) should redirect here. Let me know if a problem arises, or the barrage of ads fails to let up. I may restructure the web page so there is more here than just this blog, but that will have to await my attention in my copious spare time.

As it happens, I depart soon to attend an IAU meeting on galaxy dynamics. This is being held in part to honor the career of Prof. Jerry Sellwood, with whom I had the pleasure to work while a postdoc at Rutgers. He hosted a similar meeting at Rutgers in 1998; I’m sure that some of the same issues discussed then will be debated again next week.

DTM’s Remembering Vera

I wrote my own recollection of Vera Rubin recently. Her long-time home institution, the Department of Terrestrial Magnetism (DTM) of the Carnegie Institution of Washington, has now held a lunch in her honor. Unfortunately my travel schedule precluded me from attending. However, they have put together a wonderful website that I recommend to everyone. The depth and variety of the materials published there – testimonials, photos, her list of published papers – is outstanding.

Of historical interest are a series of papers written in the mid-60s in collaboration with Margaret Burbidge. These show some early rotation curves. Many peter out around the turn-over of the rotation curve. With the benefit of hindsight, one can see what the data will do – extend more or less flat from the last measured points.

Here is an example from Burbidge et al. (1964). In this case, NGC 3521, they got a bit further than the turnover. You may judge for yourself how convincing the detection of flat rotation is.

[Figure: rotation curve of NGC 3521 from Burbidge et al. (1964).]

As it happens, NGC 3521 is a near kinematic twin to the Milky Way. Here is the modern rotation curve from THINGS compared with an estimate of the Milky Way rotation curve.

[Figure: the modern THINGS rotation curve of NGC 3521 compared with an estimate of the Milky Way rotation curve.]

Hopefully it is obvious why it helps to have extended data (usually from 21 cm data, as in the example from THINGS).

This reminds me of something Vera frequently said: Early Days. In many ways, we are far down the path of dark matter. But we still have no idea what it is, or even whether what we call dark matter now is merely a proxy for some more general concept.

Vera always appreciated this. In many ways, these are still Early Days.

Reckless disregard for the scientific method

There has been another attempt to explain away the radial acceleration relation as being fine in ΛCDM. That’s good; I’m glad people are finally starting to address this issue. But let’s be clear: this is a beginning, not a solution. Indeed, it seems more like a rush to create truth by assertion than an honest scientific investigation. I would be more impressed if these papers (i) were refereed rather than rushed onto the arXiv, and (ii) honestly addressed the requirements I laid out.

This latest paper complains about IC 2574 not falling on the radial acceleration relation. This is the galaxy that, as I just pointed out (at about the same time they must have been posting the preprint), does adhere to the relation. So, I guess post-factual reality has come to science.

Rather than consider the assertions piecemeal, let’s take a step back. We have established that galaxies obey a single effective force law. Federico Lelli has shown that this applies to pressure supported elliptical galaxies as well as rotating disks.

The radial acceleration relation, including pressure supported early type galaxies and dwarf Spheroidals.

Let’s start with what Newton said about the solar system: “Everything happens… as if the force between two bodies is directly proportional to the product of their masses and inversely proportional to the square of the distance between them.” Knowing how this story turns out, consider the following.

Suppose someone came to you and told you Newton was wrong. The solar system doesn’t operate on an inverse square law, it operates on an inverse cube law. It just looks like an inverse square law because there is dark matter arranged just so as to make this so. No matter whether we look at the motion of the planets around the sun, or moons around their planets, or any of the assorted miscellaneous asteroids and cometary debris. Everything happens as if there is an inverse square law, when really it is an inverse cube law plus dark matter arranged just so.

Would you believe this assertion?

I hope not. It is a gross violation of the rule of parsimony. Occam would spin in his grave.

Yet this is exactly what we’re doing with dark matter halos. There is one observed, effective force law in galaxies. The dark matter has to be arranged just so as to make this so.

Convenient that it is invisible.

Maybe dark matter will prove to be correct, but there is ample reason to worry. I worry that we have not yet detected it. We are well past the point that we should have. The supersymmetric sector in which WIMP dark matter is hypothesized to live flunked the “golden test” of the Bs meson decay, and looks more and more like a brilliant idea nature declined to implement. And I wonder why the radial acceleration relation hasn’t been predicted before if it is such a “natural” outcome of galaxy formation simulations. Are we doing fair science here? Or just trying to shove the cat back in the bag?


I really don’t know what the final answer will look like. But I’ve talked to a lot of scientists who seem pretty darn sure. If you are sure you know the final answer, then you are violating some basic principles of the scientific method: the principle of parsimony, the principle of doubt, and the principle of objectivity. Mind your confirmation bias!

That’ll do for now. What wonders await among tomorrow’s arXiv postings?

Natural Law
or why Vera Rubin and Albert Bosma deserve a Nobel Prize

Natural Law: a concise statement describing some aspect of Nature.

In the sciences, we teach about Natural Laws all the time. We take them for granted. But we rarely stop and think about what we mean by the term.

Usually Natural Laws are items of textbook knowledge: a shorthand that all in a particular field know and agree to. This also brings with it an air of ancient authority, which has a flip side. The implicit operating assumption is that there are no Natural Laws left to be discovered, which implies that it is dodgy to even discuss such a thing.

The definition offered above is adopted, in paraphrase, from a report of the National Academy which I can no longer track down. Links come and go. The one I have in mind focussed on biological evolution. To me, as a physical scientist, it seems a rather soft definition. One would like it to be quantitative, no?

Let’s consider a known example: Kepler’s Laws of planetary motion. Everyone who teaches introductory astronomy teaches these, and in most cases refers to them as Laws of Nature without a further thought. Which is to say, virtually everyone agrees that Kepler’s Laws are valid examples of Natural Law in a physical science. Indeed, this sells them rather short given their importance in the Scientific Revolution.

Kepler’s Three Laws of Planetary Motion:

  1. Planetary orbits are elliptical in shape with the sun at one focus.
  2. A line connecting a planet with the sun sweeps out equal areas in equal times.
  3. P² = a³

In the third law, P is the sidereal period of a planet’s orbit measured in years, and a is the semi-major axis of the ellipse measured in Astronomical Units. This is a natural system of units for an observer living on Earth. One does not need to know the precise dimensions of the solar system: the earth-sun separation provides the ruler.
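As a quick sanity check of the third law in these natural units, take Mars, whose semi-major axis is about 1.524 AU:

```python
def period_years(a_au):
    """Kepler's third law in natural units (P in years, a in AU):
    P^2 = a^3, so P = a**1.5."""
    return a_au ** 1.5

print(period_years(1.524))  # ~1.88 years, Mars's sidereal period
```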

To me, the third law is the most profound, leading as it does to Newton’s Universal Law of Gravity. At the time, however, the first law was the most profound. The philosophical prejudice/theoretical presumption (still embedded in the work of Copernicus) was that the heavens should be perfect. The circle was a perfect shape, ergo the motions of the heavenly bodies should be circular. Note the should be. We often get in trouble when we tell Nature how things Should Be.

By abandoning purely circular motion, Kepler was repudiating thousands of years of astronomical thought, tradition, and presumption. To imagine heavenly bodies following elliptical orbits that are almost but not quite circular must have seemed to sully the heavens themselves. In retrospect, we would say the opposite. The circle is merely a special case of a more general set of possibilities. From the aesthetics of modern physics, this is more beautiful than insisting that everything be perfectly round.

It is interesting what Kepler himself said about Tycho Brahe’s observations of the position of Mars that led him to his First Law. Mars was simply not in the right place for a circular orbit. It was close, which is why the accuracy of Tycho’s work was required to notice it. Even then, it was such a small effect that it must have been tempting to ignore.

If I had believed that we could ignore these eight minutes [of arc], I would have patched up my hypothesis accordingly. But, since it is not permissible to ignore, those eight minutes pointed the road to a complete reformation in astronomy.

This sort of thing happens all the time in astronomy, right up to and including the present day. Which are the important observations? What details can be ignored? Which are misleading and should be ignored? The latter can and does happen, and it is an important part of professional training to learn to judge which is which. (I mention this because this skill is palpably fading in the era of limited access to telescopes but easy access to archival data, accelerated by the influx of carpetbaggers who lack appropriate training entirely.)

Previous to Tycho’s work, the available data were reputedly not accurate enough to confidently distinguish positions to 8 arcminutes. But Tycho’s data were good to about ±1 arcminute. Hence it was “not permissible to ignore” – a remarkable standard of intellectual honesty that many modern theorists do not meet.

I also wonder about counting the Laws, which is a psychological issue. We like things in threes. The first Law could count as two: (i) the shape of the orbit, and (ii) the location of the sun with respect to that orbit. Obviously those are linked, so it seems fair to phrase it as 3 Laws instead of 4. But when I pose this as a problem on an exam, it is worth 4 points: students must know both (i) and (ii), and often leave out (ii).

The second Law sounds odd to modern ears. This is Kepler trying to come to grips with the conservation of angular momentum – a Conservation Law that wasn’t yet formally appreciated. Nowadays one might write J = VR = constant and be done with it.

The way the first two laws are phrased is qualitative. They satisfy the definition given at the outset. But this phrasing conceals a quantitative basis. One can write the equation for an ellipse, and must for any practical application of the first law. One could write the second law dA/dt = constant or rephrase it in terms of angular momentum. So these do meet the higher standard expected in physical science of being quantitative.
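In modern notation the second law is a one-line statement; sketching it in polar coordinates:

```latex
\frac{dA}{dt} = \tfrac{1}{2}\, r^2 \dot{\theta}
              = \frac{J}{2m} = \text{constant},
\qquad J \equiv m\, r^2 \dot{\theta}
```

Equal areas in equal times is the conservation of angular momentum in disguise, just as stated above.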

The third law is straight-up quantitative. Even the written version is just a long-winded way of saying the equation. So Kepler’s Laws are not just a qualitative description inherited in an awkward form from ancient times. They do in fact quantify an important aspect of Nature.

What about modern examples? Are there Laws of Nature still to be discovered?

I have worked on rotation curves for over two decades now. For most of that time, it never occurred to me to ask this question. But I did have the experience of asking for telescope time to pursue how far out rotation curves remained flat. This was, I thought, an exciting topic, especially for low surface brightness galaxies, which seemed to extend much further out into their dark matter halos than bright spirals. Perhaps we’d see evidence for the edge of the halo, which must presumably come sometime.

TACs (Telescope Allocation Committees) did not share my enthusiasm. Already by the mid-90s it was so well established that rotation curves were flat that it was deemed pointless to pursue further. We had never seen any credible hint of a downturn in V(R), no matter how far out we chased it, so why look still harder? As one reviewer put it, “Is this project just going to produce another boring rotation curve?”

Implicit in this statement is that we had established a new law of nature:

The rotation curves of disk galaxies become approximately flat at large radii, a condition that persists indefinitely.

This is quantitative: V(R) ≈ constant for R → ∞. Two caveats: (1) I do mean approximately – the slope dV/dR of the outer parts of rotation curves is not exactly zero point zero zero zero. (2) We of course do not trace rotation curves to infinity, which is why I say indefinitely. (Does anyone know a mathematical symbol for that?)

Note that it is not adequate to simply say that the rotation curves of galaxies are non-Keplerian (V ∼ 1/√R). They really do stay pretty nearly flat for a very long way. In SPARC we see that the outer rotation velocity remains constant to within 5% in almost all cases.
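To give a concrete sense of what “constant to within 5%” means, here is one simple way to quantify outer flatness. This is my own illustrative criterion, not the algorithm actually used for SPARC:

```python
import numpy as np

def outer_flatness(v, n_outer=5):
    """Maximum fractional deviation of the outermost measured
    rotation velocities from their mean."""
    v_out = np.asarray(v[-n_outer:], dtype=float)
    return np.max(np.abs(v_out - v_out.mean())) / v_out.mean()

# A rotation curve that rises, then flattens (velocities in km/s):
v = [60, 95, 115, 122, 124, 125, 124, 126, 125, 124]
print(outer_flatness(v))  # ~0.01, i.e., flat to within ~1%
```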

Never mind whether we interpret flat rotation curves to mean that there is dark matter or modified gravity or whatever other hypothesis we care to imagine. It had become conventional to refer to the asymptotic rotation velocity as Vflat well before I entered the field. So, as a matter of practice, we have already agreed that this is a natural law. We just haven’t paused to recognize it as such – largely because we no longer think in those terms.

Flat rotation curves have many parents. Mort Roberts was one of the first to point them out. People weren’t ready to hear it – or at least, to appreciate their importance. Vera Rubin was also early, and, importantly, persistent. Flat rotation curves are widely known in large part due to her efforts. Also important to the establishment of flat rotation curves was the radio work of Albert Bosma. He showed that flatness persisted indefinitely, which was essential to overcoming objections that optical-only data did not clearly show a discrepancy (see the comments of Kalnajs at IAU 100 (1983) and how they were received).

And that, my friends, is why Vera Rubin and Albert Bosma deserve a Nobel prize. It isn’t that they “just” discovered dark matter. They identified a new Law of Nature.

The Central Density Relation

I promised more results from SPARC. Here is one. The dynamical mass surface density of a disk galaxy scales with its central surface brightness.

This may sound trivial: surface density correlates with surface brightness. The denser the stars, the denser the mass. Makes sense, yes?

Turns out, this situation is neither simple nor obvious when dark matter is involved. The surface brightness traces stellar surface density while the dynamical mass surface density traces stars plus everything else, including dark matter. The latter need not care about the former when dark matter dominates.

Nevertheless, we’ve known there was some connection for some time. This first became clear to me in the mid-90s, when we discovered that low surface brightness galaxies did not shift off of the Tully-Fisher relation as expected. The only way to obtain this situation was to fine-tune the dynamical mass enclosed by the disk with the central surface brightness. Galaxies had to become systematically more dark matter dominated as surface brightness decreased (see Zwaan et al 1995). This was the genesis of the now common statement that low surface brightness galaxies are dark matter dominated (see also de Blok & McGaugh 1996; McGaugh & de Blok 1998).

This is an oversimplification. A more precise statement would be that dark matter dominates to progressively smaller radii in ever lower surface brightness galaxies. Even as dark matter comes to dominate, the dynamics “know” about the stellar distribution.

The rotation curve depends on the enclosed mass, dark as well as luminous. The rate of rise of the rotation curve from the origin correlates with surface brightness. Low surface brightness galaxies have slowly rising rotation curves while high surface brightness galaxies have steeply rising rotation curves. This is very systematic (e.g., Lelli et al 2013).


The rate of rise of rotation curves (dV/dR) as a function of central surface brightness (from Lelli 2014).

Recently, Agris Kalnajs pointed out to us that in a paper written before even I was born, Toomre (1963) had shown how to obtain the central mass surface density of a thin disk from the rotation curve. This has largely been forgotten because dark matter complicates matters. However, we were able to show that Toomre’s formula returns the correct dynamical surface density within a factor of two even in the extreme case of complete domination by a spherical halo component. This breakthrough was enabled by Kalnajs pointing out a straightforward way to include disk thickness (Toomre assumed a razor thin disk) and Lelli pursuing this to the extreme case of a “spherical” disk with a flat rotation curve.

A factor of two is not much to quibble about when one has the large dynamic range of a sample like SPARC. After all, the data cover four orders of magnitude in central surface brightness. Variations in disk thickness and halo domination will only contribute a bit to the scatter.

Without further ado, here is the result:


The central dynamical mass surface density as a function of the central stellar surface density (left) and stellar mass (right). From Lelli et al (2016). Points are color coded by morphological type. The dashed line shows the 1:1 relation expected in the absence of dark matter.

The data show a clear correlation between mass surface density and surface brightness. At high surface brightness, the data have a slope and normalization consistent with stars being the dominant form of mass present. This is the long-known result of “maximum disk” (van Albada & Sancisi 1986). The observed distribution of stars does a good job of matching the inner rotation curve. It is only as you go further out, where rotation curves flatten, or to lower surface brightness that dark matter becomes necessary.

Something interesting happens as surface brightness declines. The data gradually depart from the line of unity. The dynamical surface density begins to exceed the stellar surface density, so we begin to need some dark matter. Stars and Newton alone can no longer explain the data for low surface brightness galaxies.

Note, however, that the data depart from the line of unity with a considerable degree of order. It is not like things go haywire as dark matter comes to dominate, as one might reasonably expect. After all, why should the mass surface density depend on the stellar surface density at all in the limit where the former greatly outweighs the latter? But it does: the correlation persists to the point that the mass surface density is predictable from the surface brightness.

It is not only rotation curves that show this behavior. A similar result was found from the vertical velocity dispersions of disks in the DiskMass survey. Specifically, Swaters et al (2014) show essentially the same plot. However, the DiskMass sample is necessarily restricted to rather high surface brightness galaxies. Consequently, those data show only a hint of the systematic departure from the line of unity, and one could argue that it is linear. A large number of low surface brightness galaxies with good data is key to our result (see the discussion of the sample in SPARC).

This empirical result sheds some light on the debate about dark matter halo profiles. The rotation curves of low surface brightness galaxies rise slowly. This is not consistent with NFW halos, as shown long ago (e.g., de Blok et al. 2001; Kuzio de Naray 2008, 2009, and many others). It is frequently argued that everything is OK with NFW, and it is the data that are to blame. The basic idea is that somehow (usually due to inadequate resolution, though sometimes other effects are invoked) rotation curves fail to show the steep predicted rise. It is there! we are assured. We just can’t see it.

Beware of theorists blaming data that doesn’t do what they want. A recent example of this kind of argument is offered by Pineda et al. (2016). Their basic contention is that rotation curves cannot correctly recover the true mass distribution. They invoke a variety of effects to make it sound like one has no chance of recovering the central mass profile from rotation curves.

The figure above shows that this is manifestly nonsense. If the data were incapable of measuring the dynamical surface density, the figure would be a mess. In one galaxy we’d see one wrong thing, in another we’d see a different wrong thing. This would not correlate with surface brightness, or anything else: the y-axis would just be so much garbage. Instead, we see a strong correlation. Indeed, we understand the errors well enough to calculate that most of the scatter is observational. Yes, there are uncertainties. They do add scatter to the plot. A little. That means the true relation is even tighter.

There is a considerable literature in the same vein as Pineda et al. (2016). These concerns can be completely dismissed. Not only are they incorrect, they stem from a form of solution aversion: they don’t like the answer, so deny that it can be true. This attitude has no place in science.

SPARC

We have a new paper that introduces SPARC: Spitzer Photometry & Accurate Rotation Curves. SPARC is a database of 175 galaxies with measured HI rotation curves and homogeneous near-infrared [3.6 micron] surface photometry obtained with the Spitzer Space Telescope. It provides the largest cohesive dataset currently available of disk galaxy mass models.

SPARC represents all known types of rotating galaxies. It spans a broad range in morphologies (S0 to Irr), luminosities (L[3.6] ~ 10⁷ to ~10¹² L☉), effective radii (~0.3 to ~15 kpc), effective surface brightnesses (~5 to ~5000 L☉ pc⁻²), rotation velocities (~20 to ~300 km/s), and gas content (0.01 < M(HI)/L[3.6] < 10). This samples the full range of physical properties known for rotating disk galaxies. It is vastly superior to most “complete” samples in that it provides a much better representation of low mass and low surface brightness galaxies.

Let me emphasize that last point. Traditional galaxy surveys are great at finding bright objects. They are lousy at finding low luminosity and low surface brightness galaxies. For example, most studies based on the gold-standard Sloan Digital Sky Survey are restricted to massive galaxies with M* > 10⁹ M☉. SPARC extends two decades lower in mass. Sloan misses low surface brightness galaxies entirely. SPARC includes many such objects. Ideally, a sample like this would provide a thorough sampling of all possible disk galaxy properties. We come as close to that ideal as is currently possible, without the usual bias against the faint and the dim.

The rotation curves of SPARC galaxies have been collected from the literature. While we have obtained some of these ourselves, the vast majority come from the hard work of many others. All SPARC galaxies have been observed in the 21cm line of atomic hydrogen with radio interferometers like the VLA or WSRT. These data represent the fruits of the labors of a whole community of radio astronomers spanning decades.

The surface photometry we have done ourselves. This represents the cumulative results of a decade of work. The near-IR images from Spitzer have been analyzed with the ARCHANGEL software to determine the surface brightness profiles of all sample galaxies. These have been used to construct mass models representing the gravitational potential generated by the observed distribution of stellar mass. The 21cm data provide the same information for the gas.


Optical (BVI), near-IR (JHK), and 21 cm images of the spiral galaxy NGC 6946. The images are shown on the same scale. So yes, the gas extends that much further out. This is typical, and emphasizes the importance of combining multiwavelength observations.

We now have three measured properties for all SPARC galaxies that are hard to find simultaneously in the literature. These are the rotation curve V(R), the portion of the rotation due to stars V*(R), and that due to gas Vg(R). These are what you need to study the missing mass problem in galaxies, as

V²(R) = V*²(R) + Vg²(R) + VDM²(R)

The mysterious “other” represented by VDM(R) is dark matter (whatever that means). It is now completely specified by the observations.
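Given the three measured curves, VDM(R) follows by subtraction in quadrature. A minimal sketch with toy numbers of my own; a real application would first interpolate all the curves onto a common radial grid:

```python
import numpy as np

def v_dark(v_obs, v_star, v_gas):
    """Dark matter contribution implied by
    V^2 = V*^2 + Vg^2 + VDM^2, i.e.
    VDM = sqrt(V^2 - V*^2 - Vg^2) at each radius."""
    v2 = np.asarray(v_obs)**2 - np.asarray(v_star)**2 - np.asarray(v_gas)**2
    # Small negative values can occur within the uncertainties; clip them.
    return np.sqrt(np.clip(v2, 0.0, None))

# Toy values (km/s) at three radii:
print(v_dark(v_obs=[100, 120, 125], v_star=[90, 70, 50], v_gas=[20, 30, 35]))
# -> roughly [39, 93, 109]: the dark matter contribution grows outward.
```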

Of course, this has been true for a while, but with one important exception. Mass models for V*(R) have been constructed with the available data, which are usually in the optical. When we construct a mass model, we have to convert the observed light to a stellar mass by assuming some mass-to-light ratio for the stars, M*/L. Optical M*/L ratios vary with age and metallicity in a way that precludes clarity in the correct stellar mass model. Near-IR data (the 2.2 micron K-band or [3.6] of Spitzer) are much, much, much better for this.

I don’t think I emphasized that enough. The near-IR image of a galaxy is as close as we’re likely to ever get to a map of its stellar mass. It isn’t perfect of course – nothing in astronomy ever is – but it is a sufficient improvement that all the freedom and uncertainty we previously had in VDM(R) basically go away.

We’ll have a lot more to say about that. Look for big announcements, coming soon.

What is theory?


OK, I’m not even going to try to answer that one. But I am going to do some comparing and contrasting.

A complaint often leveled against MOND is that it is not a theory. Or not a complete theory. Or somehow not a proper one. Sometimes people confuse MOND with the empirical observations that display MONDian phenomenology.

I would say that MOND is a hypothesis, as is dark matter. We observe a discrepancy between the motions observed in extragalactic systems and what is predicted by application of the known law of gravity to the mass visible in ordinary baryonic matter. Either we need more mass (dark matter) or need to change the force law (modify dynamical laws, i.e., gravity). MOND is just one example of the latter type of hypothesis.

Put this way, dark matter is the more conservative hypothesis. It doesn’t require any change to well established, fundamental theory. There’s just more mass there than we see.

But what is it? Dark matter as so far stated is not a valid scientific hypothesis. It is a concept – there is unseen stuff out there. To turn it into science, we need to hypothesize a specific candidate.

An example of a dark matter candidate that most people would agree has been falsified at this point is brown dwarfs. These are very faint, sub-stellar objects – failed stars if you like, things not quite massive enough to ignite nuclear fusion in their cores to shine as stars. In the early days of dark matter, it was quite reasonable to believe there could be an enormous amount of mass in the sum of these objects. Indeed, the mass spectrum of stars as then known (via the Salpeter IMF) diverged when extrapolated to the low masses of brown dwarfs. It appeared that there had to be lots of them, and their integrated mass could easily add up to lots and lots – potentially enough to be the dark matter.
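To see why the extrapolation diverged: with the Salpeter slope, the mass integrated down to a lower cutoff grows without bound as the cutoff goes to zero. A quick worked version (the slope is the Salpeter value; the cutoffs are schematic):

$$\frac{dN}{dm} \propto m^{-2.35} \quad\Rightarrow\quad M_{\rm tot} \propto \int_{m_{\rm low}}^{m_{\rm up}} m\, m^{-2.35}\, dm \;\propto\; m_{\rm low}^{-0.35} - m_{\rm up}^{-0.35} \;\to\; \infty \;\text{ as } m_{\rm low} \to 0.$$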

The hypothesis of brown dwarf-like dark matter, dubbed MACHOs (MAssive Compact Halo Objects), was tested by a series of microlensing experiments. Remarkably, if you stare at the stars in the Large Magellanic Cloud long enough, you should occasionally witness a MACHO pass in front of one of them. You don’t see the MACHO directly, but you can see an enhancement to the brightness of the background star due to the gravitational lensing effect of the MACHO.
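For reference, the brightening follows the standard point-lens magnification formula (Paczyński 1986), which depends only on the projected lens-source separation u in units of the Einstein radius. A minimal sketch:

```python
import numpy as np

# Point-lens magnification: a background star brightens as the lens
# passes at projected separation u (in Einstein radii).
def magnification(u):
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

# A lens passing at half an Einstein radius roughly doubles the brightness:
print(magnification(0.5))  # ~2.18
```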

Long story short: microlensing events are observed, but not nearly enough are seen for the dark matter halo of the Milky Way to be composed of brown dwarf MACHOs. Nowadays we have a better handle on the stellar mass spectrum. Lots of brown dwarfs are indeed known, but nothing like the numbers necessary to compose the dark matter.

Many of us, including me, never gave MACHOs much of a chance. To add up to the total mass density required in dark matter cosmologically, we’d need 5 or 6 times the baryon density allowed by Big Bang Nucleosynthesis. So MACHO dark matter would break some pretty fundamental theory after all.

The most popular hypothesis, then and now, is some form of non-baryonic dark matter. Most prominent among these are WIMPs (Weakly Interacting Massive Particles). This is a valid, specific hypothesis that can be tested in the laboratory. Indeed, it has been. If the WIMP hypothesis were correct, we really should have detected them by now. It only persists because it is very flexible: we can keep adjusting the interaction cross-section to keep them invisible.

It would be a long post to revisit all the ways in which the WIMP hypothesis has repeatedly disappointed. Here I’d like to point out merely that WIMPs are hypothetical particles that exist in a hypothetical supersymmetric sector. There are compelling theoretical arguments in favor of supersymmetry, but so far it too has repeatedly disappointed. Anybody else remember how the decay of the Bs meson was supposed to be the Golden Test of supersymmetry? No? Nobody seems to talk about it anymore because it flunked badly. So supersymmetry itself is in dire shape. No supersymmetry, no WIMPs.

Like WIMPs, supersymmetry can be made more complicated to avoid falsification. This allows it to persist, but it is not the sign of a healthy theory. Still, everybody seems to agree that it is a theory, and most people seem to think it is a good one.

Unlike MACHOs, WIMPs do require a fundamentally new theory. Supersymmetry is not a part of the highly successful Standard Model of particle physics. It is a hypothetical extension thereof. So they aren’t really as conservative as just saying there is some unseen mass. There have to be invisible particles that reside in an entirely novel and itself hypothetical dark sector. That they have never been detected in the laboratory – and that, despite enormous (and expensive) effort (e.g., the LHC), we have zero laboratory evidence for the supersymmetric sector in which they reside – might strike some as cause for concern.

So why do WIMPs persist? Time lag and training. If you are an astronomer, you don’t really care what the dark matter particle is, just that it is there. You are unlikely to keep close tabs on the tribulations of dark matter detection experiments. If you are an astroparticle physicist, dark matter particles are your bread and butter. We all know the Standard Model is incomplete; surely the dark matter problem is just a sign of that. Suggesting that the problem might instead be with gravity is to admit that the entire field is an oxymoron. Yes, we need new physics. But that would be the wrong kind of new physics!

[Image: Winnie-the-Pooh, the balloon, and the wrong sort of bees]

The MOND hypothesis is an example of the wrong kind of new physics. No new particles; rather, new dynamics. The idea is to tweak the force law below a critical acceleration scale (of order 1 Å/s/s). Intriguingly, this can be interpreted as either a modification of gravity (which gets stronger) or of inertia (which gets less, so particles become easier to push around).
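For concreteness, here is one common way to write that tweak, using the so-called “simple” interpolation function. MOND per se fixes only the two limiting behaviors, not the form of the transition between them, so treat this as a sketch rather than the theory:

```python
import numpy as np

A0 = 1.2e-10  # m/s^2, i.e., about 1 angstrom/s/s: the critical acceleration

def g_effective(g_newton):
    # "Simple" interpolation: Newtonian at high acceleration, while deep in
    # the low-acceleration regime g -> sqrt(g_newton * A0), i.e., stronger.
    y = g_newton / A0
    return g_newton * 0.5 * (1.0 + np.sqrt(1.0 + 4.0 / y))

# High acceleration: essentially Newtonian.
print(g_effective(1e-6) / 1e-6)               # ~1.0001
# Low acceleration: approaches sqrt(g_N * a0).
print(g_effective(1e-13), np.sqrt(1e-13 * A0))  # nearly equal
```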

From such a hypothesis, one must construct a proper theory – whatever that is. One thing is for sure – the motivation is the opposite of supersymmetry. Supersymmetry is motivated by theory. It is a Good Idea that therefore ought to be true, even if it appears that Nature declined to implement it. MOND has no compelling theoretical motivation or basis. (Who ordered that?) Rather, it is empirically motivated. It started by seeking a possible explanation for a particular observation: the apparent flatness of spiral galaxy rotation curves. In this regard, it could be considered an effective theory, though it does have strong implications for what the underlying cause is.

The original (1983) MOND formula did not conserve energy or momentum. That’s not a property of a healthy theory. Some people seem to think it is still stuck there.

The first step towards building a proper theory was taken by Bekenstein and Milgrom in 1984 with AQUAL. They introduced an aquadratic Lagrangian that led to a modified Poisson equation, a form of modified gravity. Being derived from a Lagrangian, it automatically satisfies the conservation laws.

Since then, a variety of MOND theories have been posited. By this, I mean distinct theories that lead to the hypothesized behavior at low acceleration. These may be modifications of either gravity or inertia, and can lead to subtly different higher order predictions.

So far most MOND theories are extensions of Newtonian dynamics. MOND always contains Newton in the high acceleration limit, just as General Relativity contains Newton in the appropriate limit. The trick is to write a theory that does both. That’s the theoretical Holy Grail.

The following Venn diagram might help:

[Venn diagram: General Relativity and MOND each contain Newtonian dynamics but not each other; a question mark marks the theory that would contain both]

Both MOND and General Relativity encompass Newtonian dynamics. However, they do not contain each other. Since General Relativity came first, I think when people say MOND is not a theory they usually mean that it doesn’t capture all the previous theory that it needs to. We know General Relativity is correct – so far as we have tested it – so it doesn’t suffice to write down a theory that is merely an extension of Newton. We need a theory that does both – the Holy Grail.

Of course I agree that we want to have it all. I also think it is appropriate to take one step at a time. If Newtonian dynamics is in itself a valid theory – and I think it is – then so too is MOND, as it contains all of Newton in the appropriate limit. MOND is an incomplete theory, but it is certainly a theory.

For many years, an argument against MOND was that Bekenstein had sought the Holy Grail long and hard without success. Bekenstein was really smart; the implication was that if he couldn’t do it, it couldn’t be done. In 2004, Bekenstein published TeVeS (for Tensor-Vector-Scalar), the first example of a theory that contained both General Relativity and MOND without obviously having some dreadful failing, like ghosts. The argument then became that TeVeS was inelegant.

It is not clear that TeVeS is the correct generalization of General Relativity. Indeed, it is not the only such theory possible. Hence the question mark in the Venn diagram. If we falsify TeVeS, it doesn’t falsify the MOND hypothesis – that would be like saying Newton is wrong because Yilmaz gravity isn’t the right version of General Relativity. There are many such theories that are possible; TeVeS is just one particular realization thereof.

What theory the question mark in the Venn diagram represents is what we should be trying to figure out. Unfortunately, most scientists interested in the subject are neither trained nor equipped to do this sort of work, and for the most part are conditioned to be actively hostile to the project. That’s the wrong kind of new physics!

I find this a strange attitude. We all know that, as yet, there is no widely accepted quantum theory of gravity. In this regard, General Relativity is itself incomplete. It is a noble endeavor to seek a quantum theory of gravity. How can we be sure that there is no intermediate step? Perhaps some of the difficulty in getting there stems from playing with an incomplete deck. I sometimes wonder if some string theorist has already come up with the correct theory but discarded it because it predicted this crazy low acceleration behavior he didn’t know might actually be desirable.

Whatever the final theory may be, be it dark matter based or a modification of dynamics, it must explain the empirical phenomena we observe. An enormous amount of galaxy phenomenology can be put down to one simple fact: galaxies behave as if MOND is the effective force law. We can write down a single formula that describes the dynamics of hundreds of measured galaxies and has had tremendous predictive success. If you don’t find that compelling, your physical intuition needs a checkup.
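For the curious, here is a sketch of one such formula, written in the form of the radial acceleration relation (McGaugh, Lelli & Schombert 2016). The acceleration scale is the published fit; treat the specifics as illustrative rather than canonical:

```python
import numpy as np

G_DAGGER = 1.2e-10  # m/s^2; the fitted acceleration scale of the relation

def g_observed(g_baryon):
    # Radial acceleration relation: the observed centripetal acceleration
    # as a one-parameter function of that predicted by the baryons alone.
    return g_baryon / (1.0 - np.exp(-np.sqrt(g_baryon / G_DAGGER)))

# High acceleration: g_obs -> g_bar (Newtonian).
# Low acceleration:  g_obs -> sqrt(g_bar * G_DAGGER).
for g_bar in (1e-8, 1e-12):
    print(g_bar, g_observed(g_bar))
```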